Probability theory. Probability of an event; random events

PROBABILITY as an ontological category reflects the degree of possibility for some entity to arise under given conditions. In contrast to the mathematical and logical interpretations of this concept, ontological probability does not bind itself to the obligation of quantitative expression. The meaning of probability is revealed in the context of understanding determinism and the nature of development in general.


PROBABILITY

a concept characterizing a quantitative measure of the possibility of a certain event occurring under certain conditions. In scientific knowledge there are three interpretations of probability. The classical concept of probability, which arose from the mathematical analysis of games of chance and was developed most fully by B. Pascal, J. Bernoulli and P. Laplace, defines probability as the ratio of the number of favorable cases to the total number of all equally possible cases. For example, when throwing a die that has 6 faces, each face can be expected to land with a probability of 1/6, since no face has an advantage over another. Such symmetry of experimental outcomes is deliberately built into the organization of games, but is comparatively rare in the study of objective events in science and practice. The classical interpretation gave way to the statistical concept of probability, which rests on actual observation of the occurrence of a certain event over a long run of experience under precisely fixed conditions. Practice confirms that the more often an event occurs, the greater the degree of objective possibility of its occurrence, i.e. its probability. The statistical interpretation of probability therefore rests on the concept of relative frequency, which can be determined experimentally. Probability as a theoretical concept never coincides with empirically determined frequency; in many cases, however, it differs little in practice from the relative frequency found as a result of prolonged observation. Many statisticians regard probability as a "double" of relative frequency, determined through the statistical study of the results of observations or experiments.

Less realistic was the definition of probability as the limit of the relative frequency of mass events, or collectives, proposed by R. von Mises. As a further development of the frequency approach, the dispositional, or propensity, interpretation of probability was put forward (K. Popper, J. Hacking, M. Bunge, T. Settle). On this interpretation, probability characterizes a property of the generating conditions, e.g. an experimental setup, to produce a sequence of massive random events. It is precisely this disposition that gives rise to physical propensities, or tendencies, which can be checked against relative frequencies.
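The statistical reading described above can be illustrated with a short simulation; the die, the fixed seed, and the sample sizes here are illustrative assumptions, not part of the original text.

```python
import random

# Statistical interpretation: the relative frequency of an event
# stabilizes near its probability as the number of trials grows.
random.seed(42)  # fixed seed so the run is reproducible

def relative_frequency(trials: int) -> float:
    """Fraction of rolls of a fair die that land on 6."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    return hits / trials

for n in (100, 10_000, 1_000_000):
    print(n, round(relative_frequency(n), 4))
# The printed frequencies drift toward the classical value 1/6 ≈ 0.1667.
```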

The statistical interpretation of probability dominates in scientific cognition, because it reflects the specific nature of the patterns inherent in mass phenomena of a random character. In many physical, biological, economic, demographic and other social processes it is necessary to take into account the action of many random factors that are characterized by a stable frequency. Identifying these stable frequencies and assessing them quantitatively by means of probability makes it possible to reveal the necessity that forces its way through the cumulative action of many accidents. It is here that the dialectic of the transformation of chance into necessity finds its manifestation (see F. Engels, in: K. Marx and F. Engels, Works, vol. 20, pp. 535-36).

Logical, or inductive, probability characterizes the relationship between the premises and the conclusion of non-demonstrative and, in particular, inductive reasoning. Unlike deduction, the premises of induction do not guarantee the truth of the conclusion, but only make it more or less plausible. With precisely formulated premises, this plausibility can sometimes be assessed by means of probability. Its value is most often determined by means of comparative concepts (greater than, less than, or equal to), and sometimes numerically. The logical interpretation is often used to analyze inductive reasoning and to construct various systems of probabilistic logic (R. Carnap, R. Jeffrey). In the semantic concepts of logical probability, probability is often defined as the degree to which one statement is confirmed by others (for example, a hypothesis by its empirical data).

In connection with the development of theories of decision-making and games, the so-called personalistic interpretation of probability has become widespread. Although probability here expresses the degree of belief of the subject in the occurrence of a certain event, the probabilities themselves must be chosen so that the axioms of the probability calculus are satisfied. Probability under this interpretation therefore expresses not so much subjective as reasonable belief. Consequently, decisions made on the basis of such probabilities will be rational, because they do not depend on the psychological characteristics and inclinations of the subject.

From the epistemological point of view, the difference between the statistical, logical and personalistic interpretations of probability is that the first characterizes the objective properties and relations of mass phenomena of a random nature, while the latter two analyze the features of subjective, cognitive human activity under conditions of uncertainty.

PROBABILITY

one of the most important concepts of science, characterizing a special systemic vision of the world, its structure, evolution and knowledge. The specificity of the probabilistic view of the world is revealed through the inclusion of the concepts of randomness, independence and hierarchy (the idea of levels in the structure and determination of systems) among the basic concepts of existence.

Ideas about probability originated in ancient times and related to the characteristics of our knowledge; the existence of probabilistic knowledge was recognized as distinct both from reliable knowledge and from false knowledge. The impact of the idea of probability on scientific thinking and on the development of knowledge is directly related to the development of probability theory as a mathematical discipline. The origin of the mathematical doctrine of probability dates back to the 17th century, when a core of concepts was developed that admitted quantitative (numerical) characterization and expressed the probabilistic idea.

Intensive applications of probability to the development of cognition occurred in the second half of the 19th and the first half of the 20th century. Probability entered the structures of such fundamental sciences of nature as classical statistical physics, genetics, quantum theory, and cybernetics (information theory). Accordingly, probability personifies the stage in the development of science that is now defined as non-classical science. To reveal the novelty and features of the probabilistic way of thinking, it is necessary to proceed from an analysis of the subject of probability theory and the foundations of its numerous applications. Probability theory is usually defined as a mathematical discipline that studies the patterns of mass random phenomena under certain conditions. Randomness means that, within the mass, the existence of each elementary phenomenon does not depend on and is not determined by the existence of the other phenomena. At the same time, the mass of phenomena itself has a stable structure and contains certain regularities. A mass phenomenon is quite strictly divided into subsystems, and the relative number of elementary phenomena in each of the subsystems (the relative frequency) is very stable. This stability is compared with probability. A mass phenomenon as a whole is characterized by a probability distribution, that is, by specifying the subsystems and their corresponding probabilities. The language of probability theory is the language of probability distributions. Accordingly, probability theory is defined as the abstract science of operating with distributions.

Probability gave rise in science to ideas about statistical patterns and statistical systems. The latter are systems formed from independent or quasi-independent entities; their structure is characterized by probability distributions. But how is it possible to form systems from independent entities? It is usually assumed that, for the formation of systems with integral characteristics, sufficiently stable bonds must exist between their elements that cement the systems. The stability of statistical systems is ensured by the presence of external conditions, the external environment, by external rather than internal forces. The very definition of probability always rests on setting the conditions for the formation of the initial mass phenomenon. Another important idea characterizing the probabilistic paradigm is the idea of hierarchy (subordination). This idea expresses the relationship between the characteristics of individual elements and the integral characteristics of systems: the latter are, as it were, built on top of the former.

The importance of probabilistic methods in cognition lies in the fact that they make it possible to study and theoretically express the patterns of structure and behavior of objects and systems that have a hierarchical, “two-level” structure.

The analysis of the nature of probability rests on its frequency, statistical interpretation. At the same time, for a very long time an understanding of probability dominated in science that was called logical, or inductive, probability. Logical probability is concerned with questions of the validity of a separate, individual judgment under certain conditions. Can the degree of confirmation (reliability, truth) of an inductive conclusion (a hypothetical conclusion) be assessed in quantitative form? During the development of probability theory such questions were repeatedly discussed, and people began to speak of the degrees of confirmation of hypothetical conclusions. This measure of probability is determined by the information available to a given person, his experience, his views on the world and his psychological mindset. In all such cases the magnitude of probability is not amenable to strict measurement and lies practically outside the competence of probability theory as a consistent mathematical discipline.

The objective, frequentist interpretation of probability established itself in science with considerable difficulty. Initially the understanding of the nature of probability was strongly influenced by the philosophical and methodological views characteristic of classical science. Historically, the development of probabilistic methods in physics occurred under the determining influence of the ideas of mechanics: statistical systems were interpreted simply as mechanical. Since the corresponding problems were not solved by the strict methods of mechanics, assertions arose that turning to probabilistic methods and statistical laws is the result of the incompleteness of our knowledge. In the history of classical statistical physics numerous attempts were made to substantiate it on the basis of classical mechanics, but they all failed. At the basis of probability lies the fact that it expresses the structural features of a certain class of systems other than mechanical ones: the state of the elements of these systems is characterized by instability and a special (irreducible to mechanics) character of interactions.

The entry of probability into knowledge leads to the denial of the concept of hard determinism, to the denial of the basic model of being and knowledge developed in the process of the formation of classical science. The basic models represented by statistical theories are of a different, more general nature: they include the ideas of randomness and independence. The idea of probability is associated with the disclosure of the internal dynamics of objects and systems, which cannot be entirely determined by external conditions and circumstances.

The concept of a probabilistic vision of the world, based on the absolutization of ideas about independence (as, before it, the paradigm of rigid determination), has now revealed its limitations, which is reflected most strongly in the transition of modern science to analytical methods for studying complex systems and to the physical and mathematical foundations of self-organization phenomena.


Brief theory

To quantitatively compare events according to the degree of possibility of their occurrence, a numerical measure is introduced, which is called the probability of an event. The probability of a random event is a number that expresses the measure of the objective possibility of an event occurring.

How weighty the objective grounds are for expecting the occurrence of an event is characterized by its probability. It must be emphasized that probability is an objective quantity that exists independently of the knower and is conditioned by the entire set of conditions contributing to the occurrence of the event.

The explanations we have given for the concept of probability are not a mathematical definition, since they do not quantify the concept. There are several definitions of the probability of a random event, which are widely used in solving specific problems (classical, axiomatic, statistical, etc.).

The classical definition of the probability of an event reduces this concept to the more elementary concept of equally possible events, which is no longer subject to definition and is assumed to be intuitively clear. For example, if a die is a homogeneous cube, then the landing of any of the faces of this cube is an equally possible event.

Let a certain event be divided into n equally possible cases, m of which give event A when they occur. The cases whose appearance ensures the occurrence of A are called favorable to it.

The probability of event A will be denoted by the symbol P(A).

The probability of an event is equal to the ratio of the number m of cases favorable to it to the total number n of uniquely possible, equally possible and incompatible cases, i.e.

P(A) = m/n.

This is the classical definition of probability. Thus, to find the probability of an event, one must, having considered the various outcomes of the trial, find the set of uniquely possible, equally possible and incompatible cases, count their total number n and the number m of cases favorable to the given event, and then carry out the calculation using the formula P(A) = m/n.

The probability of an event, equal to the ratio of the number of experimental outcomes favorable to the event to the total number of experimental outcomes, is called the classical probability of a random event.
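As a minimal sketch of the classical definition P(A) = m/n (the function name and the exact-fraction representation are my own choices, not from the text):

```python
from fractions import Fraction

def classical_probability(favorable: int, total: int) -> Fraction:
    """Classical definition: P(A) = m / n for equally possible cases."""
    if total <= 0 or not 0 <= favorable <= total:
        raise ValueError("need 0 <= m <= n and n > 0")
    return Fraction(favorable, total)

# A homogeneous die has n = 6 equally possible faces, 3 of them even.
p_even = classical_probability(3, 6)
print(p_even)  # 1/2
```

Using `Fraction` keeps m/n exact, so properties such as P + P(opposite) = 1 hold without rounding error.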

The following properties of probability follow from the definition:

Property 1. The probability of a certain event is equal to one.

Property 2. The probability of an impossible event is zero.

Property 3. The probability of a random event is a positive number between zero and one.

Property 4. The probability of the occurrence of events that form a complete group is equal to one.

Property 5. The probability of the occurrence of the opposite event is determined through the probability of the occurrence of event A.

The number of cases favoring the occurrence of the opposite event is n − m. Hence the probability of the occurrence of the opposite event is equal to the difference between unity and the probability of event A:

P(Ā) = (n − m)/n = 1 − m/n = 1 − P(A).

An important advantage of the classical definition of the probability of an event is that with its help the probability of an event can be determined without resorting to experience, but based on logical reasoning.

When the set of conditions is realized, a certain event will necessarily happen, while an impossible event will certainly not. Among the events that may or may not occur when the set of conditions is created, the occurrence of some can be counted on with greater reason, of others with less. If, for example, there are more white balls in an urn than black balls, then there is more reason to expect the appearance of a white ball when drawing from the urn at random than the appearance of a black one.

Example of problem solution

Example 1

A box contains 8 white, 4 black and 7 red balls. 3 balls are drawn at random. Find the probabilities of the following events: A - at least 1 red ball is drawn; C - there are at least 2 balls of the same color; D - there is at least 1 red and at least 1 white ball.

Solution

The total number of test outcomes is the number of combinations of 19 (8 + 4 + 7) elements taken 3 at a time:

n = C(19, 3) = 19!/(3!·16!) = 969.

Event A - at least 1 red ball is drawn (1, 2 or 3 red balls). It is easier to count the opposite event, in which none of the three drawn balls is red: C(12, 3) = 220 outcomes. The required probability:

P(A) = 1 − 220/969 ≈ 0.773.

Event C - there are at least 2 balls of the same color (2 or 3 white, 2 or 3 black, or 2 or 3 red balls). The number of outcomes favorable to the event:

m = C(8,2)·11 + C(8,3) + C(4,2)·15 + C(4,3) + C(7,2)·12 + C(7,3) = 308 + 56 + 90 + 4 + 252 + 35 = 745.

The required probability:

P(C) = 745/969 ≈ 0.7688.

Event D - there is at least 1 red and at least 1 white ball (1 red, 1 white, 1 black; or 1 red, 2 white; or 2 red, 1 white). The number of outcomes favorable to the event:

m = 7·8·4 + 7·C(8,2) + C(7,2)·8 = 224 + 196 + 168 = 588.

The required probability:

P(D) = 588/969 ≈ 0.6068.

Answer: P(A) ≈ 0.773; P(C) ≈ 0.7688; P(D) ≈ 0.6068.
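The combinatorial counts can be checked mechanically; this sketch uses Python's `math.comb` and mirrors the case decompositions stated in the problem:

```python
from math import comb

n = comb(19, 3)  # ways to draw 3 balls from 8 white + 4 black + 7 red

# A: at least one red -> complement draws all 3 from the 12 non-red balls
p_a = 1 - comb(12, 3) / n

# C: at least two balls of one color (2 or 3 white, black, or red)
m_c = (comb(8, 2) * 11 + comb(8, 3)
       + comb(4, 2) * 15 + comb(4, 3)
       + comb(7, 2) * 12 + comb(7, 3))
p_c = m_c / n

# D: at least one red and one white (R+W+B, R+2W, 2R+W)
m_d = 7 * 8 * 4 + 7 * comb(8, 2) + comb(7, 2) * 8
p_d = m_d / n

print(round(p_a, 3), round(p_c, 4), round(p_d, 4))  # 0.773 0.7688 0.6068
```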

Example 2

Two dice are thrown. Find the probability that the sum of points is at least 5.

Solution

Let B be the event "the sum of points is at least 5".

We use the classical definition of probability:

P(B) = m/n.

Total number of possible test outcomes: on the upturned face of the first die one point, two points, ..., six points may appear; similarly, six outcomes are possible when rolling the second die. Each outcome of the first die can be combined with each outcome of the second. Thus, the total number of possible elementary test outcomes equals the number of placements with repetitions (a choice of 2 elements from a set of size 6):

n = 6^2 = 36.

Let us find the probability of the opposite event - the sum of points is less than 5. The following combinations of dropped points favor this event:

1st die  2nd die
1        1
1        2
2        1
1        3
3        1
2        2

There are m = 6 such combinations, so P(sum < 5) = 6/36 = 1/6, and the required probability is

P(B) = 1 − 1/6 = 5/6 ≈ 0.833.
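Since the sample space here is tiny, the answer can also be checked by brute-force enumeration (an illustrative sketch, not part of the original solution):

```python
from itertools import product

# All 36 equally likely outcomes of throwing two dice.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [o for o in outcomes if sum(o) >= 5]
print(len(favorable), "/", len(outcomes), "=", len(favorable) / len(outcomes))
# 30 / 36 ≈ 0.8333
```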



Let's not dwell on lofty matters for long; let's start right away with the definition.

Bernoulli's scheme: n identical independent experiments are performed, in each of which the event of interest A may occur, and the probability of this event P(A) = p is known. We need to determine the probability that in n trials event A occurs exactly k times.

The problems that can be solved using Bernoulli's scheme are extremely varied: from simple ones (such as "find the probability that the shooter hits 1 time out of 10") to quite tough ones (for example, problems with percentages or playing cards). In practice this scheme is often used to solve problems related to quality control of products and the reliability of various mechanisms, all of whose characteristics must be known before starting work.

Let's return to the definition. Since we are talking about independent trials, and in each trial the probability of event A is the same, only two outcomes are possible:

  1. A is the occurrence of event A with probability p;
  2. “not A” - event A did not appear, which happens with probability q = 1 − p.

The most important condition, without which Bernoulli's scheme loses its meaning, is constancy: no matter how many experiments we conduct, we are interested in the same event A, which occurs with the same probability p.

By the way, not all problems in probability theory reduce to constant conditions, as any competent higher mathematics tutor will tell you. Even something as simple as taking colored balls out of a box is not an experiment with constant conditions: take out another ball and the ratio of colors in the box changes, and with it the probabilities.

If the conditions are constant, we can accurately determine the probability that event A will occur exactly k times out of n possible. Let us formulate this fact in the form of a theorem:

Bernoulli's theorem. Let the probability of occurrence of event A in each experiment be constant and equal to p. Then the probability that event A will appear exactly k times in n independent trials is calculated by the formula:

Pn(k) = C(n, k) · p^k · q^(n − k),

where C(n, k) = n!/(k!(n − k)!) is the number of combinations and q = 1 − p.

This formula is called Bernoulli's formula. It is interesting to note that the problems given below can be completely solved without using this formula. For example, you can apply the formulas for adding probabilities. However, the amount of computation will be simply unrealistic.

Task. The probability of producing a defective product on a machine is 0.2. Determine the probability that in a batch of ten parts produced on this machine exactly k parts will be without defects. Solve the problem for k = 0, 1, 10.

According to the condition, we are interested in the event A of releasing products without defects, which happens each time with probability p = 1 − 0.2 = 0.8. We need to determine the probability that this event will occur k times. Event A is contrasted with the event “not A”, i.e. release of a defective product.

Thus, we have: n = 10; p = 0.8; q = 0.2.

So, we find the probability that all the parts in the batch are defective (k = 0), that exactly one part is without defects (k = 1), and that there are no defective parts at all (k = 10):

P10(0) = C(10,0)·0.8^0·0.2^10 = 0.2^10 ≈ 0.0000001
P10(1) = C(10,1)·0.8^1·0.2^9 = 10·0.8·0.2^9 ≈ 0.000004
P10(10) = C(10,10)·0.8^10·0.2^0 = 0.8^10 ≈ 0.107
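A small helper for Bernoulli's formula makes such checks routine; the function name is my own, the numbers are those of the task:

```python
from math import comb

def bernoulli(n: int, k: int, p: float) -> float:
    """Bernoulli's formula: P_n(k) = C(n, k) * p**k * q**(n - k)."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# Defect task: each part is defect-free with p = 0.8; batch of n = 10.
for k in (0, 1, 10):
    print(k, bernoulli(10, k, 0.8))
```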

Task. A coin is tossed 6 times. Heads and tails are equally likely. Find the probability that:

  1. heads will appear three times;
  2. heads will appear once;
  3. heads will appear at least twice.

So, we are interested in the event A that heads comes up. The probability of this event is p = 0.5. Event A is contrasted with the event "not A", when tails comes up, which happens with probability q = 1 − 0.5 = 0.5. We need to determine the probability that heads will appear k times.

Thus, we have: n = 6; p = 0.5; q = 0.5.

Let us determine the probability that heads comes up three times, i.e. k = 3:

P6(3) = C(6,3)·0.5^3·0.5^3 = 20/64 ≈ 0.3125

Now let us determine the probability that heads comes up only once, i.e. k = 1:

P6(1) = C(6,1)·0.5^1·0.5^5 = 6/64 ≈ 0.0938

It remains to determine with what probability heads will appear at least twice. The main catch is in the phrase "at least". It turns out that any k suits us except 0 and 1, i.e. we need to find the value of the sum X = P6(2) + P6(3) + ... + P6(6).

Note that this sum also equals 1 − P6(0) − P6(1), i.e. from all possible options it is enough to "cut out" those where heads came up once (k = 1) or not at all (k = 0). Since we already know P6(1), it remains to find P6(0) = 0.5^6 = 1/64, whence

X = 1 − 1/64 − 6/64 = 57/64 ≈ 0.8906.
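The three coin answers can be verified the same way (a sketch assuming a fair coin, p = q = 0.5):

```python
from math import comb

def p6(k: int) -> float:
    """P_6(k) for a fair coin: C(6, k) * 0.5**6."""
    return comb(6, k) * 0.5**6

at_least_two = 1 - p6(0) - p6(1)   # complement of k = 0 and k = 1
print(p6(3), p6(1), at_least_two)  # 0.3125 0.09375 0.890625
```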

Task. The probability that the TV has hidden defects is 0.2. 20 TVs arrived at the warehouse. Which event is more likely: that in this batch there are two TV sets with hidden defects or three?

Event of interest A is the presence of a latent defect. There are n = 20 TVs in total, the probability of a hidden defect is p = 0.2. Accordingly, the probability of receiving a TV without a hidden defect is q = 1 − 0.2 = 0.8.

We obtain the starting conditions for the Bernoulli scheme: n = 20; p = 0.2; q = 0.8.

Let's find the probability of getting two "defective" TVs (k = 2) and three (k = 3):

\[P_{20}(2) = C_{20}^{2}p^{2}q^{18} = \frac{20!}{2!\,18!}\cdot 0.2^{2}\cdot 0.8^{18}\approx 0.137\]
\[P_{20}(3) = C_{20}^{3}p^{3}q^{17} = \frac{20!}{3!\,17!}\cdot 0.2^{3}\cdot 0.8^{17}\approx 0.205\]

Obviously, P20(3) > P20(2): the probability of receiving three televisions with hidden defects is greater than the probability of receiving only two. The difference, moreover, is not small.

A quick note about factorials. Many people experience a vague feeling of discomfort when they see the entry “0!” (read “zero factorial”). So, 0! = 1 by definition.

P. S. And the biggest probability in the last task is to get four TVs with hidden defects. Calculate for yourself and see for yourself.
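The P.S. can be confirmed by computing the whole distribution P20(k) and finding its mode (the helper name is my own):

```python
from math import comb

def p20(k: int, p: float = 0.2) -> float:
    """Probability of exactly k defective TVs among n = 20."""
    return comb(20, k) * p**k * (1 - p)**(20 - k)

probs = {k: p20(k) for k in range(21)}
best = max(probs, key=probs.get)
print(best, round(probs[best], 3))  # 4 0.218
```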

In economics, as in other areas of human activity or in nature, we constantly have to deal with events that cannot be accurately predicted. Thus, the sales volume of a product depends on demand, which can vary significantly, and on a number of other factors that are almost impossible to take into account. Therefore, when organizing production and carrying out sales, you have to predict the outcome of such activities on the basis of either your own previous experience, or similar experience of other people, or intuition, which to a large extent also relies on experimental data.

In order to somehow evaluate the event in question, it is necessary to take into account or specially organize the conditions in which this event is recorded.

The realization of a certain set of conditions or actions under which the event in question is observed is called an experiment or trial.

An event is called random if, as a result of the experiment, it may or may not occur.

An event is called certain if it necessarily occurs as a result of the given experiment, and impossible if it cannot occur in this experiment.

For example, snowfall in Moscow on November 30 is a random event. The daily sunrise can be considered a certain event. Snowfall at the equator can be considered an impossible event.

One of the main tasks in probability theory is the task of determining a quantitative measure of the possibility of an event occurring.

Algebra of events

Events are called incompatible if they cannot occur together in the same experiment. Thus, the presence of two cars and of three cars for sale in one store at the same time are two incompatible events.

The sum of events is the event consisting in the occurrence of at least one of these events.

An example of the sum of events is the presence of at least one of two products in the store.

The product of events is the event consisting in the simultaneous occurrence of all of these events.

An event consisting of the appearance of two goods in a store at the same time is the product of two events: the appearance of the first product and the appearance of the second product.

Events form a complete group of events if at least one of them is certain to occur in the experiment.

Example. The port has two berths for receiving ships. Three events can be considered: - the absence of ships at the berths, - the presence of one ship at one of the berths, - the presence of two ships at two berths. These three events form a complete group of events.

Two uniquely possible events that form a complete group are called opposite.

If one of two opposite events is denoted by A, then the opposite event is denoted by Ā.
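The algebra of events maps naturally onto set operations; a sketch using one die roll as the sample space (the particular events are illustrative):

```python
# Events modeled as sets of elementary outcomes of one die roll.
omega = {1, 2, 3, 4, 5, 6}  # the certain event (whole sample space)
even = {2, 4, 6}            # "an even number of points"
small = {1, 2, 3}           # "at most three points"

print(even | small)   # sum of events (union): at least one occurs
print(even & small)   # product of events (intersection): both occur
print(omega - even)   # opposite event (complement of `even`)
```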

Classical and statistical definitions of event probability

Each of the equally possible results of a trial (experiment) is called an elementary outcome. Elementary outcomes are usually denoted by letters. For example, when a die is thrown, six elementary outcomes are possible in total, according to the number of points on the faces.

From elementary outcomes you can create a more complex event. Thus, the event of an even number of points is determined by three outcomes: 2, 4, 6.

A quantitative measure of the possibility of the occurrence of the event in question is probability.

The most widely used definitions of the probability of an event are the classical and the statistical.

The classical definition of probability is associated with the concept of a favorable outcome.

The outcome is called favorable to a given event if its occurrence entails the occurrence of this event.

In the example above, the event in question - an even number of points on the upturned face - has three favorable outcomes, while the total number of possible outcomes is six. This means that the classical definition of the probability of an event can be used here.

Classical definition. The probability of an event equals the ratio of the number of outcomes favorable to it to the total number of possible outcomes:

P(A) = m/n,   (1.1)

where P(A) is the probability of the event, m is the number of outcomes favorable to the event, and n is the total number of possible outcomes.

In the considered example, P(A) = 3/6 = 0.5.

The statistical definition of probability is associated with the concept of the relative frequency of occurrence of an event in experiments.

The relative frequency of occurrence of an event is calculated using the formula

W(A) = m/n,

where m is the number of occurrences of the event in a series of n experiments (trials).

Statistical definition. The probability of an event is the number around which the relative frequency stabilizes (sets) with an unlimited increase in the number of experiments.

In practical problems, the probability of an event is taken to be the relative frequency for a sufficiently large number of trials.

From these definitions of the probability of an event it is clear that the inequality 0 ≤ P(A) ≤ 1 is always satisfied.

To determine the probability of an event based on formula (1.1), combinatorics formulas are often used to find the number of favorable outcomes and the total number of possible outcomes.

If the events H1, H2, ..., Hn form a complete group, then to calculate the probability of an arbitrary event A one can use the total probability formula:

P(A) = P(A|H1)·P(H1) + P(A|H2)·P(H2) + ... + P(A|Hn)·P(Hn),

according to which the probability of the occurrence of event A can be represented as the sum of the products of the conditional probabilities of A, given the occurrence of the events Hi, by the unconditional probabilities of those events. The events Hi are called hypotheses.

From the total probability formula follows Bayes' formula:

P(Hi|A) = P(A|Hi)·P(Hi) / P(A).

The probabilities P(Hi) of the hypotheses Hi are called a priori probabilities - the probabilities before the experiment is conducted.
The probabilities P(Hi|A) are called a posteriori probabilities - the probabilities of the hypotheses Hi refined as a result of the experiment.
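The two formulas can be sketched as small functions (the names and the completeness check are my own choices; the numbers come from Example No. 1 below):

```python
def total_probability(priors, conditionals):
    """P(A) = sum of P(A|Hi) * P(Hi) over a complete group of hypotheses."""
    assert abs(sum(priors) - 1.0) < 1e-9, "hypotheses must form a complete group"
    return sum(c * pr for pr, c in zip(priors, conditionals))

def bayes(priors, conditionals, i):
    """Posterior P(Hi|A) = P(A|Hi) * P(Hi) / P(A)."""
    return conditionals[i] * priors[i] / total_probability(priors, conditionals)

# Factory shares 0.25 / 0.75, defect rates 5% / 10%.
p_defect = total_probability([0.25, 0.75], [0.05, 0.10])
print(round(p_defect, 4))  # 0.0875
```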


Example No. 1. A store receives light bulbs from two factories, the first factory's share being 25%. It is known that the defect rates at these factories are 5% and 10% of all manufactured products, respectively. The seller takes one light bulb at random. What is the probability that it is defective?
Solution: Denote by A the event "the light bulb turns out to be defective". The following hypotheses about the origin of this light bulb are possible: H1 - "the light bulb came from the first factory", H2 - "the light bulb came from the second factory". Since the share of the first factory is 25%, the probabilities of these hypotheses are P(H1) = 0.25 and P(H2) = 0.75.
The conditional probability that a defective bulb was produced by the first factory is P(A|H1) = 0.05, by the second P(A|H2) = 0.10. By the total probability formula:
P(A) = 0.25·0.05 + 0.75·0.10 = 0.0125 + 0.075 = 0.0875
Answer: P(A) = 0.0875.

Example No. 2. A store received two equal quantities of a product of the same name. It is known that 25% of the first batch and 40% of the second batch are first-class goods. What is the probability that a randomly selected unit of goods is not of first class?
Solution:
Denote by A the event "the product is first class". Hypotheses: H1 - "the product is from the first batch", H2 - "the product is from the second batch". Since the batches are equal, P(H1) = P(H2) = 0.5.
The conditional probabilities are P(A|H1) = 0.25 and P(A|H2) = 0.4. The probability that a randomly selected unit is first class:
P(A) = P(H1)·P(A|H1) + P(H2)·P(A|H2) = 0.5·0.25 + 0.5·0.4 = 0.125 + 0.2 = 0.325
Then the probability that a randomly selected unit of goods is not of first class is 1 − 0.325 = 0.675.
Answer: 0.675.

Example No. 3. It is known that 5% of men and 1% of women are color-blind. A person chosen at random turned out not to be color-blind. What is the probability that this person is a man (assume there are equal numbers of men and women)?
Solution.
Event A - the person chosen at random is not color-blind. Let H1 be the hypothesis "the person is a man" and H2 "the person is a woman", with P(H1) = P(H2) = 0.5.
By the total probability formula:
P(A) = P(A|H1)·P(H1) + P(A|H2)·P(H2) = 0.95·0.5 + 0.99·0.5 = 0.475 + 0.495 = 0.97
Then, by Bayes' formula, the probability that this is a man:
P(H1|A) = P(A|H1)·P(H1) / P(A) = 0.475/0.97 ≈ 0.4897

Example No. 4. 4 first-year students, 6 second-year students, and 5 third-year students take part in the sports Olympiad. The probabilities that a first-, second-, third-year student will win the Olympiad are respectively 0.9; 0.7 and 0.8.
a) Find the probability of winning by a randomly selected participant.
b) Under the conditions of this problem, one student won the Olympiad. Which group does he most likely belong to?
Solution.
Event A - victory of a randomly selected participant.
Here P(H1) = 4/(4+6+5) = 0.267, P(H2) = 6/(4+6+5) = 0.4, P(H3) = 5/(4+6+5) = 0.333,
P(A|H1) = 0.9, P(A|H2) = 0.7, P(A|H3) = 0.8
a) P(A) = P(H1)*P(A|H1) + P(H2)*P(A|H2) + P(H3)*P(A|H3) = 0.267*0.9 + 0.4*0.7 + 0.333*0.8 = 0.787
b) By Bayes' formula, the posterior probabilities of the hypotheses are
p1 = P(H1)·P(A|H1)/P(A) ≈ 0.305
p2 = P(H2)·P(A|H2)/P(A) ≈ 0.356
p3 = P(H3)·P(A|H3)/P(A) ≈ 0.339
The maximum is p2, so the winner most likely is a second-year student.
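The same computation in code (a sketch mirroring the solution's numbers):

```python
priors = [4/15, 6/15, 5/15]  # chance the winner comes from each year group
conds = [0.9, 0.7, 0.8]      # win probability for a 1st-, 2nd-, 3rd-year student

p_win = sum(pr * c for pr, c in zip(priors, conds))            # total probability
posteriors = [pr * c / p_win for pr, c in zip(priors, conds)]  # Bayes' formula

print(round(p_win, 3))                        # ≈ 0.787
print(posteriors.index(max(posteriors)) + 1)  # most likely group: year 2
```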

Example No. 5. The company has three machines of the same type. One of them provides 20% of total production, the second – 30%, the third – 50%. In this case, the first machine produces 5% of defects, the second 4%, the third – 2%. Find the probability that a randomly selected defective product is produced by the first machine.
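The text leaves Example No. 5 without a worked solution; a sketch of the corresponding total-probability and Bayes computation (using the rates stated in the problem) gives P(H1|defect) = 0.01/0.032 = 0.3125:

```python
priors = [0.2, 0.3, 0.5]     # machines' shares of total production
defect = [0.05, 0.04, 0.02]  # defect rate of each machine

p_defect = sum(pr * d for pr, d in zip(priors, defect))  # total probability
p_first = priors[0] * defect[0] / p_defect               # Bayes' formula
print(round(p_first, 4))  # 0.3125
```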
