Classic definition of the probability of a random event. Random events and their probabilities

Chapter I. RANDOM EVENTS. PROBABILITY

1.1. Regularity and randomness, random variability in the exact sciences, biology and medicine

Probability theory is a branch of mathematics that studies the regularities of random phenomena. A random phenomenon is a phenomenon that, when the same experiment is repeated under the same conditions, may turn out somewhat differently each time.

Obviously, there is not a single phenomenon in nature in which elements of randomness are not present to one degree or another, but in different situations we take them into account in different ways. In a number of practical problems they can be neglected, and instead of the real phenomenon we consider its simplified scheme – a “model” – assuming that under the given experimental conditions the phenomenon proceeds in a quite definite way. In doing so, the most important, decisive factors characterizing the phenomenon are singled out. This scheme of studying phenomena is the one most often used in physics, technology and mechanics; it is how the main regularity characteristic of a given phenomenon is revealed, making it possible to predict the result of an experiment from given initial conditions. The influence of random, minor factors on the result of the experiment is then accounted for as random measurement errors (a method for calculating them is considered below).

However, the described classical scheme of the so-called exact sciences is poorly suited for solving many problems in which numerous, closely intertwined random factors play a noticeable (often decisive) role. Here the random nature of the phenomenon comes to the fore, which can no longer be neglected. This phenomenon must be studied precisely from the point of view of the patterns inherent in it as a random phenomenon. In physics, examples of such phenomena are Brownian motion, radioactive decay, a number of quantum mechanical processes, etc.

The subject of study of biologists and physicians is a living organism, the origin, development and existence of which is determined by many and varied, often random external and internal factors. That is why the phenomena and events of the living world are also largely random in nature.

The elements of uncertainty, complexity and multi-causality inherent in random phenomena necessitate the creation of special mathematical methods for studying them. The development of such methods and the establishment of the specific regularities inherent in random phenomena are the main tasks of probability theory. Characteristically, these regularities hold only when random phenomena occur in large numbers: the individual features of individual cases seem to cancel each other out, and the averaged result for a mass of random phenomena turns out to be no longer random but quite regular. To a large extent, this circumstance was the reason for the widespread use of probabilistic research methods in biology and medicine.

Let's consider the basic concepts of probability theory.

1.2. Probability of a random event

Each science that develops a general theory of any range of phenomena is based on a number of basic concepts. For example, in geometry these are the concepts of a point, a straight line; in mechanics - the concepts of force, mass, speed, etc. Basic concepts also exist in probability theory, one of them is a random event.

A random event is any phenomenon (fact) that may or may not occur as a result of experience (test).

Random events are denoted by the letters A, B, C, etc. Here are some examples of random events:

A – the appearance of heads (the coat of arms) when tossing a standard coin;

B – the birth of a girl in a given family;

C – the birth of a child with a predetermined body weight;

D – the occurrence of an epidemic disease in a given region during a certain period of time, etc.

The main quantitative characteristic of a random event is its probability. Let A be some random event. The probability of the random event A is a mathematical quantity that measures the possibility of its occurrence. It is denoted P(A).

Let's consider two main methods for determining this value.

The classical definition of the probability of a random event is usually based on an analysis of conceivable experiments (trials), the essence of which is determined by the conditions of the problem. In this case, the probability of the random event A is equal to:

P(A) = m / n, (1)

where m is the number of cases favorable to the occurrence of event A, and n is the total number of equally possible cases.

Example 1: A laboratory rat is placed in a maze in which only one of four possible paths leads to a food reward. Determine the probability of the rat choosing this path.

Solution: according to the conditions of the problem, of the four equally possible cases (n = 4), only one is favorable to event A (the rat finds food), i.e. m = 1. Then P(A) = P(rat finds food) = 1/4 = 0.25 = 25%.

Example 2. There are 20 black and 80 white balls in an urn. One ball is drawn from it at random. Determine the probability that this ball will be black.

Solution: the number of all balls in the urn is the total number of equally possible cases n, i.e. n = 20 + 80 = 100; event A (drawing a black ball) is favored by only 20 of them, i.e. m = 20. Then P(A) = P(black ball) = 20/100 = 0.2 = 20%.
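For readers who want to check such calculations numerically, here is a minimal Python sketch of the classical definition (1); the function name classical_probability is our own illustrative choice, not part of any standard library.

    def classical_probability(m, n):
        # Classical definition (1): m favorable cases out of n equally possible cases
        return m / n

    # Example 1: the rat chooses the single rewarding path out of four
    print(classical_probability(1, 4))     # 0.25

    # Example 2: a black ball from an urn with 20 black and 80 white balls
    print(classical_probability(20, 100))  # 0.2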

Let us list the properties of probability following from its classical definition – formula (1):

1. The probability of a random event is a dimensionless quantity.

2. The probability of a random event is always positive and less than one, i.e. 0 < P(A) < 1.

3. The probability of a reliable event, i.e. an event that will definitely happen as a result of experience ( m = n), is equal to one.

4. Probability of an impossible event ( m= 0) is equal to zero.

5. The probability of any event is a value that is non-negative and does not exceed one:
0 ≤ P(A) ≤ 1.

The statistical determination of the probability of a random event is used when the classical definition (1) cannot be applied, which is often the case in biology and medicine. In this case, the probability P(A) is determined by summarizing the results of actually conducted series of trials (experiments).

Let us introduce the concept of the relative frequency of occurrence of a random event. Suppose a series of N experiments is carried out (the number N can be chosen in advance), and the event of interest A occurred in M of them (M < N). The ratio of the number of experiments M in which this event occurred to the total number of experiments performed N is called the relative frequency of occurrence of the random event A in this series of experiments, P*(A):

P*(A) = M / N.

It has been established experimentally that if series of trials (experiments) are carried out under identical conditions and in each of them the number N is sufficiently large, then the relative frequency exhibits the property of stability: it changes little from series to series, approaching some constant value as the number of experiments increases. This value is taken as the statistical probability of the random event A:

P(A) = lim P*(A) = lim (M/N) as N → ∞. (2)

So, the statistical probability P(A) of a random event A is the limit to which the relative frequency of occurrence of this event tends as the number of trials increases without bound (N → ∞).

Approximately, the statistical probability of a random event is equal to the relative frequency of occurrence of this event over a large number of trials:

P(A) ≈ P*(A) = M/N (for large N). (3)

For example, in coin-tossing experiments the relative frequency of heads turned out to be 0.5016 after 12,000 tosses and 0.5005 after 24,000 tosses. In accordance with formula (1),

P(heads) = 1/2 = 0.5 = 50%.

Example. During a medical examination of 500 people, 5 of them were found to have a tumor in the lungs. Determine the relative frequency and the probability of this disease.

Solution: according to the problem conditions M = 5, N = 500, so the relative frequency P*(tumor) = M/N = 5/500 = 0.01; since N is sufficiently large, we can assume with good accuracy that the probability of having a tumor in the lungs is equal to the relative frequency of this event:

P(tumor) ≈ P*(tumor) = 0.01 = 1%.
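The stabilization of the relative frequency described by formula (2) is easy to watch in a short simulation; the sketch below assumes a fair coin (p = 0.5) and uses Python's standard random module.

    import random

    def relative_frequency(n_trials, p_event=0.5):
        # Simulate n_trials independent trials and return the relative frequency M/N
        hits = sum(1 for _ in range(n_trials) if random.random() < p_event)
        return hits / n_trials

    for n in (100, 1000, 12000, 24000):
        print(n, relative_frequency(n))    # the values approach 0.5 as n grows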

The previously listed properties of the probability of a random event are preserved in the statistical determination of this quantity.

1.3. Types of random events. Basic theorems of probability theory

All random events can be divided into:

– incompatible;

– independent;

– dependent.

Each type of event has its own characteristics and theorems of probability theory.

1.3.1. Incompatible random events. Probability addition theorem

Random events (A, B, C, D, ...) are called incompatible if the occurrence of one of them excludes the occurrence of the others in the same trial.

Example 1. A coin is tossed. When it falls, the appearance of heads (the coat of arms) excludes the appearance of tails (the inscription giving the value of the coin). The events “heads came up” and “tails came up” are incompatible.

Example 2. A student receiving a grade of “2”, “3”, “4” or “5” on a single exam – these are incompatible events, since each of these grades excludes the others on the same exam.

For incompatible random events there is a probability addition theorem: the probability of the occurrence of one (no matter which) of several incompatible events A1, A2, A3, ..., Ak is equal to the sum of their probabilities:

P(A1 or A2 ... or Ak) = P(A1) + P(A2) + ... + P(Ak). (4)

Example 3. An urn contains 50 balls: 20 white, 20 black and 10 red. Find the probability that a white ball (event A) or a red ball (event B) appears when one ball is drawn at random from the urn.

Solution: P(A or B) = P(A) + P(B);

P(A) = 20/50 = 0.4;

P(B) = 10/50 = 0.2;

P(A or B) = P(white or red) = 0.4 + 0.2 = 0.6 = 60%.

Example 4. There are 40 children in a class. Of these, 8 are boys aged 7 to 7.5 years (event A) and 10 are girls of the same age (event B). Find the probability that a randomly chosen child of the class is of this age.

Solution: P(A) = 8/40 = 0.2; P(B) = 10/40 = 0.25.

P(A or B) = 0.2 + 0.25 = 0.45 = 45%.

The next important concept is the complete group of events: several incompatible events form a complete group if, as a result of each trial, one and only one event of this group (and no other) necessarily occurs.

Example 5 . The shooter fired a shot at the target. One of the following events will definitely happen: getting into the “ten”, “nine”, “eight”,..., “one” or miss. These 11 incompatible events form a complete group.

Example 6 . In a university exam, a student can receive one of the following four grades: 2, 3, 4 or 5. These four incompatible events also form a complete group.

If incompatible events A1, A2, ..., Ak form a complete group, then the sum of the probabilities of these events is always equal to one:

P(A1) + P(A2) + ... + P(Ak) = 1. (5)

This statement is often used in solving many applied problems.

If two events are the only possible ones and are incompatible, they are called opposite and denoted A and Ā. Such events form a complete group, so the sum of their probabilities is always equal to one:

P(A) + P(Ā) = 1. (6)

Example 7. Let P(A) be the probability of death from a certain disease; it is known and equal to 2%. Then the probability of a successful outcome of this disease is 98% (P(Ā) = 1 – P(A) = 0.98), since P(A) + P(Ā) = 1.

1.3.2. Independent random events. Probability multiplication theorem

Random events are called independent if the occurrence of one of them does not in any way affect the probability of the occurrence of other events.

Example 1 . If there are two or more urns with colored balls, then drawing any ball from one urn will not affect the probability of drawing other balls from the remaining urns.

For independent events the probability multiplication theorem holds: the probability of the joint (simultaneous) occurrence of several independent random events is equal to the product of their probabilities:

P(A1 and A2 and A3 ... and Ak) = P(A1) ∙ P(A2) ∙ … ∙ P(Ak). (7)

The joint (simultaneous) occurrence of events means that the events A1, and A2, and A3, ..., and Ak all occur.

Example 2. There are two urns. One contains 2 black and 8 white balls, the other 6 black and 4 white balls. Let event A be the random choice of a white ball from the first urn and event B the random choice of a white ball from the second. What is the probability of drawing a white ball from both urns at the same time, i.e., what is P(A and B)?

Solution: the probability of drawing a white ball from the first urn is
P(A) = 8/10 = 0.8, and from the second P(B) = 4/10 = 0.4. The probability of simultaneously drawing a white ball from both urns is
P(A and B) = P(A) ∙ P(B) = 0.8 ∙ 0.4 = 0.32 = 32%.

Example 3: A diet low in iodine causes enlargement of the thyroid gland in 60% of animals in a large population. For the experiment, 4 enlarged glands are needed. Find the probability that 4 randomly selected animals will have an enlarged thyroid gland.

Solution: let the random event A be the random selection of an animal with an enlarged thyroid gland. According to the conditions of the problem, the probability of this event is P(A) = 0.6 = 60%. Then the probability of the joint occurrence of four independent events – the random selection of 4 animals with an enlarged thyroid gland – is equal to:

P(A1 and A2 and A3 and A4) = 0.6 ∙ 0.6 ∙ 0.6 ∙ 0.6 = (0.6)^4 ≈ 0.13 = 13%.
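A one-line check of this product (and of formula (7) in general) in Python, assuming the four animals are chosen independently of one another:

    p_enlarged = 0.6                 # P(A) for one randomly chosen animal
    p_all_four = p_enlarged ** 4     # P(A1 and A2 and A3 and A4) by formula (7)
    print(round(p_all_four, 4))      # 0.1296, i.e. about 13%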

1.3.3. Dependent events. Probability multiplication theorem for dependent events

Random events A and B are called dependent if the occurrence of one of them, say A, changes the probability of occurrence of the other event B. Two probability values are therefore used for dependent events: unconditional and conditional probabilities.

If A and B are dependent events, then the probability of event B occurring first (i.e. before event A) is called the unconditional probability of this event and is denoted P(B). The probability of event B occurring given that event A has already happened is called the conditional probability of event B and is denoted P(B/A) or P_A(B).

Similarly, for event A one distinguishes the unconditional probability P(A) and the conditional probability P(A/B).

Probability multiplication theorem for two dependent events: the probability of the simultaneous occurrence of two dependent events A and B is equal to the product of the unconditional probability of the first event by the conditional probability of the second:

P(A and B) = P(A) ∙ P(B/A), (8)

if event A occurs first, or

P(A and B) = P(B) ∙ P(A/B), (9)

if event B occurs first.

Example 1. There are 3 black balls and 7 white balls in an urn. Find the probability that 2 white balls will be drawn from this urn one after the other (without the first ball being returned to the urn).

Solution: the probability of drawing the first white ball (event A) is 7/10. After it has been removed, 9 balls remain in the urn, 6 of which are white. Then the probability of the second white ball appearing (event B) is P(B/A) = 6/9, and the probability of drawing two white balls in succession is

P(A and B) = P(A) ∙ P(B/A) = (7/10) ∙ (6/9) ≈ 0.47 = 47%.
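The same answer can be checked both by exact arithmetic and by simulation; the Python sketch below, given purely as an illustration, draws two balls without replacement from an urn with 3 black and 7 white balls.

    import random
    from fractions import Fraction

    # Direct calculation by formula (8)
    p_exact = Fraction(7, 10) * Fraction(6, 9)
    print(float(p_exact))            # 0.4666..., i.e. about 47%

    # Monte Carlo check: draw two balls without replacement many times
    urn = ["white"] * 7 + ["black"] * 3
    trials = 100000
    hits = sum(random.sample(urn, 2) == ["white", "white"] for _ in range(trials))
    print(hits / trials)             # close to 0.467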

The above multiplication theorem for dependent events can be generalized to any number of events. In particular, for three mutually related events:

P(A and B and C) = P(A) ∙ P(B/A) ∙ P(C/AB). (10)

Example 2. An outbreak of an infectious disease occurred in two kindergartens, each attended by 100 children. The proportions of sick children are 1/5 and 1/4 respectively; in the first institution 70%, and in the second 60%, of the sick are children under 3 years of age. One child is selected at random. Determine the probability that:

1) the selected child belongs to the first kindergarten (event A) and is sick (event B);

2) a child from the second kindergarten is selected (event C), is sick (event D) and is older than 3 years (event E).

Solution. 1) the required probability is

P(A and B) = P(A) ∙ P(B/A) = (1/2) ∙ (1/5) = 0.1 = 10%.

2) the required probability is

P(C and D and E) = P(C) ∙ P(D/C) ∙ P(E/CD) = (1/2) ∙ (1/4) ∙ (2/5) = 0.05 = 5%.

1.4. Bayes formula

If the probability of the joint occurrence of dependent events A and B does not depend on the order in which they occur, then P(A and B) = P(A) ∙ P(B/A) = P(B) ∙ P(A/B). In this case, the conditional probability of one of the events can be found if the probabilities of both events and the conditional probability of the second are known:

P(B/A) = P(B) ∙ P(A/B) / P(A). (11)

A generalization of this formula to the case of many events is the Bayes formula.

Let n incompatible random events H1, H2, …, Hn form a complete group of events. The probabilities of these events P(H1), P(H2), …, P(Hn) are known, and since they form a complete group, ∑ P(Hi) = 1.

Some random event A is related to the events H1, H2, …, Hn, and the conditional probabilities of the occurrence of event A with each of the events Hi are known, i.e. P(A/H1), P(A/H2), …, P(A/Hn) are known. In this case, the sum of the conditional probabilities P(A/Hi) need not equal one, i.e. in general ∑ P(A/Hi) ≠ 1.

Then the conditional probability of the occurrence of event Hi when event A has been realized (i.e., given that event A has happened) is determined by the Bayes formula:

P(Hi/A) = P(Hi) ∙ P(A/Hi) / ∑k P(Hk) ∙ P(A/Hk), i = 1, 2, …, n. (12)

Moreover, for these conditional probabilities ∑ P(Hi/A) = 1.

Bayes' formula has found wide application not only in mathematics but also in medicine. For example, it is used to calculate the probabilities of particular diseases. Thus, if H1, …, Hn are the diagnoses suspected for a given patient, A is some sign related to them (a symptom, a certain indicator of a blood or urine test, a detail of an X-ray image, etc.), and the conditional probabilities P(A/Hi) of this sign appearing with each diagnosis Hi (i = 1, 2, …, n) are known in advance, then the Bayes formula (12) makes it possible to calculate the conditional probabilities of the diseases (diagnoses) P(Hi/A) after it has been established that the characteristic sign A is present in the patient.

Example 1. During the initial examination of a patient, 3 diagnoses H1, H2, H3 are suspected. Their probabilities, according to the doctor, are distributed as follows: P(H1) = 0.5; P(H2) = 0.17; P(H3) = 0.33. The first diagnosis therefore seems tentatively the most likely. To clarify it, a blood test is prescribed, for example, in which an increase in ESR is expected (event A). It is known in advance (from research results) that the probabilities of an increase in ESR with the suspected diseases are:

P(A/H1) = 0.1; P(A/H2) = 0.2; P(A/H3) = 0.9.

The analysis recorded an increase in ESR (event A happened). Then calculation using the Bayes formula (12) gives the probabilities of the suspected diseases given the increased ESR value: P(H1/A) = 0.13; P(H2/A) = 0.09; P(H3/A) = 0.78. These figures show that, taking the laboratory data into account, the most realistic is not the first but the third diagnosis, whose probability has now turned out to be quite high.
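The same calculation with formula (12) can be written as a short Python sketch; the prior and conditional probabilities are those of the example, and the function name bayes_posteriors is our own.

    def bayes_posteriors(priors, likelihoods):
        # Bayes formula (12): P(Hi/A) from priors P(Hi) and likelihoods P(A/Hi)
        evidence = sum(p * l for p, l in zip(priors, likelihoods))
        return [p * l / evidence for p, l in zip(priors, likelihoods)]

    priors = [0.50, 0.17, 0.33]        # P(H1), P(H2), P(H3)
    likelihoods = [0.1, 0.2, 0.9]      # P(A/H1), P(A/H2), P(A/H3)
    print([round(p, 2) for p in bayes_posteriors(priors, likelihoods)])
    # [0.13, 0.09, 0.78]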

The above example is the simplest illustration of how, using the Bayes formula, you can formalize a doctor’s logic when making a diagnosis and, thanks to this, create computer diagnostic methods.

Example 2. Determine the probability characterizing the degree of risk of perinatal child mortality in women with an anatomically narrow pelvis.

Solution: let event H1 be a successful birth. According to clinical reports, P(H1) = 0.975 = 97.5%; then, if H2 is the fact of perinatal mortality, P(H2) = 1 – 0.975 = 0.025 = 2.5%.

Let A denote the fact that the woman in labor has a narrow pelvis. From the studies carried out we know: a) P(A/H1), the probability of a narrow pelvis with a favorable birth, P(A/H1) = 0.029; b) P(A/H2), the probability of a narrow pelvis with perinatal mortality, P(A/H2) = 0.051. Then the desired probability of perinatal mortality for a woman in labor with a narrow pelvis is calculated using the Bayes formula (12) and is equal to:

P(H2/A) = P(H2) ∙ P(A/H2) / [P(H1) ∙ P(A/H1) + P(H2) ∙ P(A/H2)] = 0.025 ∙ 0.051 / (0.975 ∙ 0.029 + 0.025 ∙ 0.051) ≈ 0.043 = 4.3%.

Thus, the risk of perinatal mortality with an anatomically narrow pelvis is significantly higher (almost twice) than the average risk (about 4.3% versus 2.5%).

Such calculations, usually performed using a computer, underlie methods for forming groups of patients at increased risk associated with the presence of a particular aggravating factor.

The Bayes formula is very useful for assessing many other medical and biological situations, which will become obvious when solving the problems given in the manual.

1.5. About random events with probabilities close to 0 or 1

When solving many practical problems, one has to deal with events whose probability is very small, that is, close to zero. Based on experience regarding such events, the following principle has been adopted. If a random event has a very low probability, then we can practically assume that it will not occur in a single test, in other words, the possibility of its occurrence can be neglected. The answer to the question of how small this probability should be is determined by the essence of the problems being solved and by how important the result of the prediction is for us. For example, if the probability that a parachute will not open during a jump is 0.01, then the use of such parachutes is unacceptable. However, the same 0.01 probability that a long-distance train will arrive late makes us almost certain that it will arrive on time.

A sufficiently small probability at which (in a given specific problem) an event can be considered practically impossible is called level of significance. In practice, the significance level is usually taken equal to 0.01 (one percent significance level) or 0.05 (five percent significance level), much less often it is taken equal to 0.001.

The introduction of a significance level allows us to state that if some event A is practically impossible, then the opposite event Ā is practically certain, i.e. for it P(Ā) ≈ 1.

Chapter II. RANDOM VARIABLES

2.1. Random variables, their types

In mathematics, quantity is a general name for various quantitative characteristics of objects and phenomena. Length, area, temperature, pressure, etc. are examples of different quantities.

A quantity that takes on different numerical values under the influence of random circumstances is called a random variable. Examples of random variables: the number of patients at a doctor's appointment; the exact dimensions of a person's internal organs, etc.

Distinguish between discrete and continuous random variables .

A random variable is called discrete if it takes only certain distinct values ​​that can be identified and enumerated.

Examples of a discrete random variable are:

– the number of students in a lecture room – can only be a non-negative integer: 0, 1, 2, 3, 4, …, 20, …;

– the number that appears on the top face when throwing a die – can only take integer values ​​from 1 to 6;

– relative frequency of hitting the target with 10 shots – its values: 0; 0.1; 0.2; 0.3…1

– the number of events occurring over the same periods of time: heart rate, number of ambulance calls per hour, number of operations per month with a fatal outcome, etc.

A random variable is called continuous if it can take any value within a certain interval, which sometimes has clearly defined boundaries and sometimes does not. Continuous random variables include, for example, the body weight and height of adults, the body weight and volume of the brain, the quantitative content of enzymes in healthy people, the sizes of blood cells, the pH of blood, etc.

The concept of a random variable plays a decisive role in modern probability theory, which has developed special techniques for the transition from random events to random variables.

If a random variable depends on time, then we can talk about a random process.

2.2. Distribution law of a discrete random variable

To give a complete description of a discrete random variable, it is necessary to indicate all its possible values ​​and their probabilities.

The correspondence between the possible values ​​of a discrete random variable and their probabilities is called the distribution law of this variable.

Let us denote the possible values of the random variable X by xi, and the corresponding probabilities by pi. Then the distribution law of a discrete random variable can be specified in three ways: in the form of a table, a graph or a formula.

A table, called a distribution series, lists all possible values of the discrete random variable X and the corresponding probabilities P(X):

X: x1, x2, …, xn

P(X): p1, p2, …, pn

In this case, the sum of all probabilities pi must be equal to unity (normalization condition):

∑ pi = p1 + p2 + ... + pn = 1. (13)

Graphically, the law is represented by a broken line, usually called the distribution polygon (Fig. 1): all possible values xi of the random variable are plotted along the horizontal axis, and the corresponding probabilities pi along the vertical axis.

Analytically, the law is expressed by a formula. For example, if the probability of hitting a target with one shot is p, then the probability of hitting the target exactly once in n shots is given by the formula P(n) = n ∙ q^(n−1) ∙ p, where q = 1 – p is the probability of a miss with one shot.
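As an illustration of these ways of specifying the law, the Python sketch below builds the distribution series for the number of hits in n shots (a binomial model chosen here purely for illustration), checks the normalization condition (13) and the analytic formula for exactly one hit.

    from math import comb

    n, p = 3, 0.4                     # 3 shots, hit probability p (illustrative values)
    q = 1 - p

    # Distribution series: X = number of hits, P(X = k) for k = 0, 1, ..., n
    series = {k: comb(n, k) * p**k * q**(n - k) for k in range(n + 1)}
    print(series)
    print(sum(series.values()))           # normalization (13): equals 1 up to rounding
    print(series[1], n * q**(n - 1) * p)  # both expressions give P(exactly one hit)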

2.3. Distribution law of a continuous random variable. Probability density function

For continuous random variables it is impossible to use the distribution law in the forms given above, since such a variable has an uncountable set of possible values that completely fill a certain interval. It is therefore impossible to compile a table listing all its possible values or to construct a distribution polygon. In addition, the probability of any particular value is vanishingly small (close to 0). At the same time, different regions (intervals) of possible values of a continuous random variable are not equally probable. Thus, in this case too a certain distribution law operates, although not in the previous sense.

Consider a continuous random variable X whose possible values completely fill a certain interval (a, b). The probability distribution law of such a variable should make it possible to find the probability that its value falls into any given interval (x1, x2) lying inside (a, b), Fig. 2.

This probability is denoted P(x1 < X < x2), or P(x1 ≤ X ≤ x2).

Let us first consider a very small range of values of X, from x to (x + dx); see Fig. 2. The small probability dP that the random variable X takes a value from the interval (x, x + dx) is proportional to the length of this interval dx: dP ~ dx, or, introducing a proportionality coefficient f, which may itself depend on x, we get:

dP = f(x) ∙ dx. (14)

The function f(x) introduced here is called the probability distribution density of the random variable X, or, for short, the probability density (distribution density). Equation (14) is a differential relation whose integration gives the probability that the value of X falls into the interval (x1, x2):

P(x1 < X < x2) = ∫[x1, x2] f(x) dx. (15)

Graphically, the probability P(x1 < X < x2) is equal to the area of the curvilinear trapezoid bounded by the x-axis, the curve f(x) and the straight lines x = x1 and x = x2 (Fig. 3). This follows from the geometric meaning of the definite integral (15). The curve f(x) is called the distribution curve.

From (15) it follows that if the function f(x) is known, then, by changing the limits of integration, we can find the probability for any interval of interest. Therefore, specifying the function f(x) completely determines the distribution law for continuous random variables.

For the probability density f(x) the normalization condition must be satisfied in the form

∫[a, b] f(x) dx = 1, (16)

if it is known that all values of X lie in the interval (a, b), or in the form

∫[−∞, +∞] f(x) dx = 1, (17)

if the boundaries of the interval of values of X are not definitely known. The normalization conditions (16) and (17) for the probability density are a consequence of the fact that the values of the random variable X reliably lie within (a, b) or (−∞, +∞). From (16) and (17) it follows that the area of the figure bounded by the distribution curve and the x-axis is always equal to 1.
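The normalization condition (16) and the interval probability (15) can be checked numerically; the sketch below uses a uniform density on (a, b), chosen only as an example, and a simple rectangle-rule integrator written for this illustration.

    def integrate(f, lo, hi, steps=10000):
        # Midpoint (rectangle-rule) approximation of the definite integral of f on [lo, hi]
        h = (hi - lo) / steps
        return sum(f(lo + (i + 0.5) * h) for i in range(steps)) * h

    a, b = 0.0, 4.0
    f = lambda x: 1.0 / (b - a)       # uniform probability density on (a, b)

    print(integrate(f, a, b))         # normalization (16): approximately 1
    print(integrate(f, 1.0, 2.0))     # P(1 < X < 2) by formula (15): approximately 0.25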

2.4. Basic numerical characteristics of random variables

The results presented in paragraphs 2.2 and 2.3 show that a complete description of discrete and continuous random variables can be obtained by knowing the laws of their distribution. However, in many practically significant situations, the so-called numerical characteristics of random variables are used; the main purpose of these characteristics is to express in a concise form the most significant features of the distribution of random variables. It is important that these parameters represent specific (constant) values ​​that can be assessed using data obtained in experiments. These estimates are dealt with by “Descriptive Statistics”.

In probability theory and mathematical statistics, quite a lot of different characteristics are used, but we will consider only the most used ones. Moreover, only for some of them we will present the formulas by which their values ​​are calculated; in other cases, we will leave the calculations to the computer.

Let's consider position characteristics – mathematical expectation, mode, median.

They characterize the position of a random variable on the number axis , i.e., they indicate some approximate value around which all possible values ​​of the random variable are grouped. Among them, the most important role is played by the mathematical expectation M(X).

In our practical activities we often encounter phenomena whose outcome cannot be predicted and depends on chance. Probability theory is the branch of mathematics that studies random phenomena (events) and identifies the regularities that appear when they are repeated in large numbers. The basic concept of probability theory is the probability of an event (estimated in practice by the relative frequency of the event) – an objective measure of the possibility of the occurrence of the given event.

Events are usually denoted by capital letters of the Latin alphabet: A, B, C, D. Let us list the main types of random events:

  • events are called incompatible , if no two of them can occur together in a given trial. For example, when tossing a coin, the appearance of a number excludes the simultaneous appearance of a coat of arms;
  • two events are called joint , if the occurrence of one of them does not exclude the occurrence of another event in the same test (experience);
  • the event is called reliable , if it occurs in a given test, it is mandatory. For example, winning a win-win lottery ticket is a reliable event;
  • the event is called impossible , if it cannot occur in this test. For example, when throwing a die, it is impossible to get 7 points;
  • two events are called opposite (A and Ā) if, in a given trial, they are incompatible and one of them necessarily occurs. The probabilities of opposite events add up to 1;
  • event B is called independent of event A if the occurrence of event A does not change the probability of event B: P_A(B) = P(B). Otherwise, event B is called dependent on event A;

A complete (full) system of events A1, A2, A3, ..., An is a set of incompatible events, at least one of which necessarily occurs in a given trial (experiment).

Each event A is associated with a certain measure P(A), which is called the probability of this event and which satisfies the following axioms:

  • for any event 0 ≤ P(A) ≤ 1;
  • the probability of an impossible event is zero, P(A)=0;
  • the probability of a reliable event is equal to one, P(A)=1.

There are classical and geometric methods for calculating the probability of an event.

With the classical method of counting, the probability of event A is calculated by the formula P(A) = m/n, where:

  • all elementary outcomes are equally possible, i.e. none of them is more possible than another;
  • m is the number of elementary outcomes of the trial favorable to the occurrence of event A;
  • n is the total number of all possible elementary outcomes of the trial.

To count n and m, the concepts and formulas of combinatorics are often used:

  • n-factorial is the product of all natural numbers from one to n inclusive: n! = 1·2·3·…·(n−1)·n. For example: 4! = 1·2·3·4 = 24, 1! = 1, 0! = 1
  • permutation of n elements – an arrangement of n elements that differ from one another only in the order of the elements. The number of all possible permutations is calculated using the formula: Pn = n!
  • permutation with repetitions – let there be n1 elements of the first type, n2 of the second type, ..., nk of the k-th type, n elements in total. Ways of placing them in different positions are called permutations with repetitions. The number of all permutations with repetitions is calculated using the formula: Pn(n1, n2, …, nk) = n! / (n1!·n2!·…·nk!)
  • placement (arrangement) – a selection of m elements out of n (m ≤ n) that differs either in composition or in the order of the elements. The number of placements is calculated using the formula: A_n^m = n!/(n−m)!, where
    n is the number of all available elements, m is the number of elements in each selection.
    When n = m a placement becomes a permutation. If the order of the elements in a placement is not taken into account and only its composition matters, a combination is obtained.
  • combinations – all possible selections of m elements out of n (m ≤ n) that differ only in composition: C_n^m = n! / (m!·(n−m)!) = A_n^m / P_m. A short computational check of these formulas is given right after this list.
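Python's standard math module already implements these counts (factorial, perm for A_n^m, comb for C_n^m), which gives a quick way to check hand calculations:

    from math import factorial, perm, comb

    print(factorial(4))                   # 4! = 24
    print(perm(30, 15))                   # A_30^15 = 30! / (30 - 15)!
    print(comb(30, 15))                   # C_30^15 = 30! / (15! * 15!) = 155117520
    print(factorial(5) // factorial(2))   # permutations with repetitions: 5!/2! = 60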

The geometric method of calculating probability is used when the elementary outcomes of an experiment can be interpreted as points of a line segment, a plane figure or a solid.

Let the segment l form part of the segment L. If we assume that the probability of a point falling on the segment l is proportional to the length of this segment, then the probability of a point falling on the segment l is determined by the equality: P = Length of l / Length of L.

The probability of a point falling into a plane figure g that is part of a plane figure G: P = Area g/Area G.

The probability of a point falling into a spatial figure υ, which is part of the figure V: P = Volume υ / Volume V.
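The geometric definition can also be illustrated by a Monte Carlo sketch in Python: we estimate the probability that a random point of the unit square G lands in its lower-left quarter g, whose exact area ratio is 0.25 (the figures here are our own example, not from the original text).

    import random

    trials = 100000
    hits = sum(1 for _ in range(trials)
               if random.random() < 0.5 and random.random() < 0.5)
    print(hits / trials)              # approximately Area g / Area G = 0.25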

Examples of solving problems on the topic “Elements of combinatorics. Events and their probabilities"

Problem 1

There are 30 people in the 11th grade. 18 people study English, 16 - German, 9 - both languages. How many people study a) only English, b) only German, c) do not study any language?

Solution.
a) since 18 people study English, 9 of them study both English and German, then 18–9 = 9 people study only English;
b) since 16 people study German, 9 of them study both German and English, then 16–9 = 7 people study only German;
c) since there are 30 people in the class, 9 of them study only English, 7 only German, 9 both languages, then 30 - (9+7+9) = 5 people do not study any language.

Problem 2

In how many ways can the letters in the word “ficus” be rearranged?

Solution. In this case, it is necessary to find the number of permutations of 5 letters, and since in the word “ficus” all the letters are different, the number of permutations is determined by the formula: P 5 =5!=1*2*3*4*5=120.

Problem 3

In how many ways can the letters in the word “answer” be rearranged?

Solution. You need to find the number of permutations of 5 letters, but unlike problem 2, there are repeated letters here - the letter “t” is repeated twice. Therefore, we determine the number of ways using the formula for permutations with repetitions: P 5 (1, 2, 1, 1) = 5! / 2! = 60.

Problem 4

There are only 25 tickets in the collection of mathematics tickets, 10 of them contain a question on derivatives. Find the probability that a student will not get a question on derivatives on a randomly selected exam ticket.

Solution. In this case, the number of favorable outcomes is (25-10)=15, the total number of events is 25.
We find the probability of the event A = (the student will not get a question on the derivative) as the ratio: P(A)=15/25=0.6.

Problem 5

The box contains 15 parts, including 8 painted ones. The assembler randomly removes three parts. Find the probability that the extracted parts will be painted.

Solution. Event A = (three painted parts are extracted).

The total number of all possible elementary outcomes of the trial is equal to the number of ways in which 3 parts can be drawn from 15:
n = C_15^3 = 15! / (3!·(15−3)!) = 15! / (3!·12!) = 13·7·5 = 455.
The number of favorable outcomes is equal to the number of ways in which 3 parts can be drawn from the 8 painted ones:
m = C_8^3 = 8! / (3!·(8−3)!) = 8! / (3!·5!) = 7·8 = 56.

We find the probability of event A as the ratio: P(A) = m/n= 56/455≈0.12

Problem 6

Among the 17 students in the group, 8 of whom are girls, 7 tickets to the theater are drawn. What is the probability that among the ticket holders there will be 4 girls and 3 boys?

Solution. Event A = (there are exactly 4 girls among ticket holders).

The total number of possible elementary outcomes of the drawing is equal to the number of ways in which 7 people can be selected from all the students in the group, i.e. from 17: n = C_17^7 = 17! / (7!·(17−7)!) = 17! / (7!·10!) = 19448.

We find the number of favorable outcomes (among the 7 ticket holders there are 4 girls and 3 boys), taking into account that 4 girls out of 8 can be chosen in C_8^4 ways, and 3 boys out of 9 in C_9^3 ways. Therefore, m = C_8^4 · C_9^3 = (8! · 9!) / (4!·(8−4)!·3!·(9−3)!) = 70 · 84 = 5880.

We find the probability of event A as the ratio: P(A) = m/n = 5880/19448 ≈ 0.3
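Both answers (Problems 5 and 6) can be verified in one line each with Python's math.comb:

    from math import comb

    print(comb(8, 3) / comb(15, 3))                # Problem 5: 56/455 ≈ 0.12
    print(comb(8, 4) * comb(9, 3) / comb(17, 7))   # Problem 6: 5880/19448 ≈ 0.30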

It is unlikely that many people think about whether it is possible to calculate events that are more or less random. In simple terms, is it possible to know which side of the cube will come up next? This is the question asked by two great scientists who laid the foundation for such a science as the theory of probability, in which the probability of an event is studied quite extensively.

Origin

If you try to define such a concept as probability theory, you will get the following: this is one of the branches of mathematics that studies the constancy of random events. Of course, this concept does not really reveal the whole essence, so it is necessary to consider it in more detail.

I would like to start with the creators of the theory. As mentioned above, there were two of them, and they were among the first to try to calculate the outcome of this or that event using formulas and mathematical calculations. In general, the beginnings of this science appeared in the Middle Ages. At that time, various thinkers and scientists tried to analyze gambling games, such as roulette, craps, and so on, thereby establishing the pattern and percentage of a particular number falling out. The foundation was laid in the seventeenth century by the above-mentioned scientists.

At first, their works could not be considered great achievements in this field, because all they did were simply empirical facts, and experiments were carried out visually, without using formulas. Over time, it was possible to achieve great results, which appeared as a result of observing the throwing of dice. It was this tool that helped to derive the first intelligible formulas.

Like-minded people

It is impossible not to mention Christiaan Huygens when studying the topic called “probability theory” (the probability of an event is covered precisely in this science). This person is very interesting. Like the scientists mentioned above, he tried to express the regularities of random events in the form of mathematical formulas. It is noteworthy that he did not do this together with Pascal and Fermat; his work did not overlap with theirs. Huygens derived a number of the basic concepts of the theory.

An interesting fact is that his work came out long before the results of the discoverers’ work, or rather, twenty years earlier. Among the identified concepts, the most famous are:

  • the concept of probability as the value of chance;
  • mathematical expectation for discrete cases;
  • theorems of multiplication and addition of probabilities.

It is also impossible not to mention Jacob Bernoulli, who likewise made a significant contribution to the study of the problem. Conducting his own investigations, independently of anyone else, he was able to present a proof of the law of large numbers. In turn, the scientists Poisson and Laplace, who worked at the beginning of the nineteenth century, were able to prove the fundamental theorems. It was from this moment that probability theory began to be used to analyze errors of observation. Russian scientists – Markov, Chebyshev and Lyapunov – could not ignore this science either. Building on the work done by the great geniuses before them, they established this subject as a branch of mathematics. These figures worked at the end of the nineteenth century, and thanks to their contribution the following were proven:

  • law of large numbers;
  • Markov chain theory;
  • central limit theorem.

So, with the history of the birth of science and with the main people who influenced it, everything is more or less clear. Now the time has come to clarify all the facts.

Basic Concepts

Before touching on laws and theorems, it is worth studying the basic concepts of probability theory. The event plays a leading role in it. This topic is quite voluminous, but without it it will not be possible to understand everything else.

An event in probability theory is any set of outcomes of an experiment. There are quite a few concepts of this phenomenon. Thus, the scientist Lotman, working in this area, said that in this case we are talking about what “happened, although it might not have happened.”

Random events (probability theory pays special attention to them) are phenomena that have the possibility of occurring, or, conversely, of not occurring, when a certain set of conditions is realized. Probability theory holds that this set of conditions can be reproduced again and again; each such reproduction is called an “experiment” or a “trial”.

A reliable event is a phenomenon that is one hundred percent likely to happen in a given test. Accordingly, an impossible event is one that will not happen.

The combination (product) of a pair of events (conditionally, case A and case B) is the event in which both occur simultaneously. It is denoted AB.

The sum of a pair of events A and B is the event C that occurs if at least one of them (A or B) happens. The formula for the described phenomenon is written as follows: C = A + B.

Incompatible events in probability theory are two events that are mutually exclusive: under no circumstances can they happen at the same time. Joint events in probability theory are their antipode: if A has happened, this does not in any way prevent B from happening.

Opposite events (probability theory considers them in great detail) are easiest to understand by comparison. They resemble incompatible events, but differ in that one of them must necessarily occur in every trial.

Equally probable events are events whose chances of occurring are the same. To make this clearer, imagine tossing a coin: the appearance of one of its sides is exactly as likely as the appearance of the other.

A favorable event is easier to consider with an example. Suppose there is an event B and an event A: B is the appearance of an odd number when a die is rolled, and A is the appearance of the number five. Then A favors B.

Independent events in probability theory are defined only for two or more cases and mean that the occurrence of one event does not influence another. For example, A is the appearance of heads when tossing a coin, and B is the drawing of a jack from a deck. These are independent events in probability theory.

Dependent events in probability theory are likewise defined only for a set of events. They imply the dependence of one event on another: phenomenon B can occur only if A has already happened (or, conversely, has not happened), when this is the main condition for B.

An outcome of a random experiment consisting of a single component is called an elementary event; probability theory treats it as a phenomenon realized in a single trial and not decomposable further.

Basic formulas

So, the concepts of “event” and “probability theory” were discussed above; the definition of the basic terms of this science was also given. Now it’s time to get acquainted directly with the important formulas. These expressions mathematically confirm all the main concepts in such a complex subject as probability theory. The probability of an event plays a huge role here too.

It’s better to start with the basic ones. And before you start with them, it’s worth considering what they are.

Combinatorics is, first of all, a branch of mathematics that studies the numbers of ways of selecting and arranging elements – permutations, arrangements and combinations of numbers and other objects. In addition to probability theory, this branch is important for statistics, computer science and cryptography.

So, now we can move on to presenting the formulas themselves and their definition.

The first of them will be the expression for the number of permutations, it looks like this:

P_n = n ⋅ (n - 1) ⋅ (n - 2)…3 ⋅ 2 ⋅ 1 = n!

The equation is applied only if the elements differ only in the order of their arrangement.

Now the placement formula will be considered, it looks like this:

A_n^m = n ⋅ (n - 1) ⋅ (n-2) ⋅ ... ⋅ (n - m + 1) = n! : (n - m)!

This expression takes into account not only the composition of the selected elements but also the order of their placement.

The third equation from combinatorics, and it is also the last, is called the formula for the number of combinations:

C_n^m = n! : ((n − m)! ⋅ m!)

A combination refers to selections that are not ordered; accordingly, this rule applies to them.

It was easy to understand the combinatorics formulas; now we can move on to the classical definition of probability. This expression looks like this:

P(A) = m : n

In this formula, m is the number of outcomes favorable to event A, and n is the number of all equally possible elementary outcomes.

There are a large number of expressions; the article will not cover all of them, but the most important ones will be touched upon, such as, for example, the probability of the sum of events:

P(A + B) = P(A) + P(B) - this theorem is for adding only incompatible events;

P(A + B) = P(A) + P(B) - P(AB) - and this one is for adding only compatible ones.

The probability of the product (joint occurrence) of events:

P(A ⋅ B) = P(A) ⋅ P(B) - this theorem is for independent events;

(P(A ⋅ B) = P(A) ⋅ P(B∣A); P(A ⋅ B) = P(B) ⋅ P(A∣B)) - and this one is for dependent events.

The list of formulas will be completed by Bayes' theorem, which probability theory states as follows:

P(H_m∣A) = (P(H_m)P(A∣H_m)) : (∑_(k=1)^n P(H_k)P(A∣H_k)),m = 1,..., n

In this formula, H 1, H 2, ..., H n is a complete group of hypotheses.

Examples

If you carefully study any section of mathematics, it is not complete without exercises and sample solutions. So is the theory of probability: events and examples here are an integral component that confirms scientific calculations.

Formula for the number of permutations

Let's say there are thirty cards in a deck, numbered starting from one. The question: how many ways are there to stack the deck so that the cards with values one and two are not next to each other?

The task has been set, now let's move on to solving it. First you need to determine the number of permutations of thirty elements, for this we take the formula presented above, it turns out P_30 = 30!.

Based on this rule, we know in how many ways the deck can be stacked altogether, but from this number we must subtract the arrangements in which the cards with values one and two lie next to each other. Start with the case where the first card lies directly above the second. The first card can then occupy twenty-nine positions, from the first to the twenty-ninth, and the second card occupies the position just below it, which gives twenty-nine positions for the pair. The remaining twenty-eight cards can occupy the other twenty-eight positions in any order, i.e. in P_28 = 28! ways.

As a result, for the case where the first card lies above the second there are 29 ⋅ 28! = 29! unwanted arrangements.

Using the same method, we calculate the number of unwanted arrangements for the case where the first card lies under the second. It also turns out to be 29 ⋅ 28! = 29!

It follows that there are 2 ⋅ 29! unwanted arrangements in total, so the number of admissible ways to stack the deck is 30! − 2 ⋅ 29!. All that remains is to count.

30! = 29! ⋅ 30; 30!- 2 ⋅ 29! = 29! ⋅ (30 - 2) = 29! ⋅ 28

Now we multiply all the numbers from one to twenty-nine and then multiply the result by 28. The answer is approximately 2.4757 ⋅ 10^32.
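The count 30! − 2 ⋅ 29! = 29! ⋅ 28 is easy to confirm with exact integer arithmetic in Python:

    from math import factorial

    total = factorial(30) - 2 * factorial(29)
    print(total == factorial(29) * 28)   # True
    print(f"{total:.4e}")                # approximately 2.4757e+32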

Example solution. Formula for placement number

In this problem, you need to find out how many ways there are to put fifteen volumes on one shelf, but provided that there are thirty volumes in total.

The solution to this problem is a little simpler than the previous one. Using the already known formula, it is necessary to calculate the total number of arrangements of thirty volumes of fifteen.

A_30^15 = 30 ⋅ 29 ⋅ 28⋅... ⋅ (30 - 15 + 1) = 30 ⋅ 29 ⋅ 28 ⋅ ... ⋅ 16 = 202 843 204 931 727 360 000

The answer, accordingly, will be equal to 202,843,204,931,727,360,000.

Now let's take a slightly more difficult task. You need to find out how many ways there are to arrange thirty books on two bookshelves, given that one shelf can only hold fifteen volumes.

Before starting the solution, I would like to clarify that some problems can be solved in several ways, and this one has two methods, but both use the same formula.

In this problem, you can take the answer from the previous one, because there we calculated how many times you can fill a shelf with fifteen books in different ways. It turned out A_30^15 = 30 ⋅ 29 ⋅ 28 ⋅ ... ⋅ (30 - 15 + 1) = 30 ⋅ 29 ⋅ 28 ⋅ ...⋅ 16.

We will calculate the second shelf using the permutation formula, because fifteen books can be placed in it, while only fifteen remain. We use the formula P_15 = 15!.

It turns out that there are A_30^15 ⋅ P_15 ways in total; moreover, the product of all the numbers from thirty down to sixteen multiplied by the product of the numbers from one to fifteen is the product of all the numbers from one to thirty, so the answer equals 30!

But this problem can be solved in another, easier way. Imagine that there is one shelf for thirty books. All of them are placed on it, and since the condition requires two shelves, we cut the long shelf in half, obtaining two shelves of fifteen. It follows that there can be P_30 = 30! arrangements.

Example solution. Formula for combination number

Now we will consider a version of the third problem from combinatorics. You need to find out how many ways there are to choose fifteen books out of thirty, if the order of the chosen books does not matter.

To solve it, the formula for the number of combinations will of course be applied. From the condition it is clear that the order of the chosen fifteen books is not important. Therefore, we need to find the total number of combinations of thirty books taken fifteen at a time.

C_30^15 = 30! : ((30 − 15)! ⋅ 15!) = 155 117 520

That's all. Using this formula, we were able to solve this problem in the shortest possible time; the answer, accordingly, is 155,117,520.

Example solution. Classic definition of probability

Using the formula above, you can find the answer to a simple problem. But this will help to clearly see and track the progress of actions.

The problem states that there are ten absolutely identical balls in the urn. Of these, four are yellow and six are blue. One ball is taken from the urn. You need to find out the probability of getting blue.

To solve the problem, denote drawing a blue ball as event A. This experiment has ten elementary, equally possible outcomes, of which six are favorable to event A. We solve using the formula:

P(A) = 6: 10 = 0.6

Applying this formula, we learned that the probability of getting the blue ball is 0.6.

Example solution. Probability of the sum of events

An example will now be presented that is solved using the formula for the probability of a sum of events. So, the condition: there are two boxes, the first contains one gray and five white balls, and the second contains eight gray and four white balls. One ball was taken from the first box and one from the second. You need to find the probability that the balls obtained are one gray and one white.

To solve this problem, it is necessary to identify events.

  • So, A – a gray ball is taken from the first box: P(A) = 1/6.
  • A′ – a white ball is taken from the first box: P(A′) = 5/6.
  • B – a gray ball is taken from the second box: P(B) = 2/3.
  • B′ – a white ball is taken from the second box: P(B′) = 1/3.

According to the conditions of the problem, one of the combined events must happen: AB′ or A′B. Using the multiplication formula, we get: P(AB′) = 1/18, P(A′B) = 10/18.

Here the probability multiplication formula has been used. Next, to obtain the answer, we apply the addition formula:

P = P(AB′ + A′B) = P(AB′) + P(A′B) = 11/18.
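A quick check of this answer with exact fractions (using the same event labels as above; the variable names are our own):

    from fractions import Fraction

    p_A, p_A1 = Fraction(1, 6), Fraction(5, 6)   # gray / white ball from the first box
    p_B, p_B1 = Fraction(2, 3), Fraction(1, 3)   # gray / white ball from the second box

    p = p_A * p_B1 + p_A1 * p_B                  # P(AB') + P(A'B)
    print(p)                                     # 11/18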

This is how you can solve similar problems using the formula.

Bottom line

The article presented information on the topic "Probability Theory", in which the probability of an event plays a vital role. Of course, not everything was taken into account, but, based on the presented text, you can theoretically familiarize yourself with this section of mathematics. The science in question can be useful not only in professional matters, but also in everyday life. With its help, you can calculate any possibility of any event.

The text also touched upon significant dates in the history of the formation of the theory of probability as a science, and the names of the people whose work was invested in it. This is how human curiosity led to the fact that people learned to calculate even random events. Once upon a time they were simply interested in this, but today everyone already knows about it. And no one will say what awaits us in the future, what other brilliant discoveries related to the theory under consideration will be made. But one thing is for sure - research does not stand still!

1. Random events

Probability theory is a branch of mathematics that studies the patterns of mass random events.

An event whose occurrence cannot be guaranteed is called random. The randomness of an event is determined by many objectively existing causes, which cannot all be taken into account, nor can the degree of their influence on the event under study. Such random events include: getting a particular number when throwing a die, winning a lottery, the number of patients making an appointment with a doctor, etc.

Although in each specific case it is difficult to predict the outcome of a trial, with a sufficiently large number of observations it is possible to establish the presence of certain regularities. When tossing a coin many times, you will notice that heads and tails appear approximately equally often, and when throwing a die, the different faces also appear approximately equally often. This suggests that random phenomena have their own regularities, which manifest themselves only in a large number of trials. This is confirmed by the law of large numbers, which underlies probability theory.

Let's consider the basic terms and concepts of probability theory.

A test (trial) is a set of conditions under which a given random event can occur.

An event is a fact that, when certain conditions are met, may or may not occur. Events are denoted by capital letters of the Latin alphabet: A, B, C, ...

For example, event A is the birth of a boy, event B is winning the lottery, event C is getting the number 4 when throwing a die.

Events can be reliable, impossible and random.

Reliable event- this is an event that must certainly occur as a result of the test.

For example, if the number 1 is put on all six faces of a die, then getting the number 1 when throwing the die is a reliable event.

Impossible event - it is an event that cannot occur as a result of the test.

For example, in the previously discussed example, this is the appearance of any number other than 1.

A random event is an event that may or may not occur during a test. Different random events are realized with different degrees of possibility.

For example, rain is expected tomorrow afternoon. Here the coming of the day is the test, and the falling of rain is a random event.

Events are called incompatible if, as a result of a given test, the occurrence of one of them excludes the occurrence of the other.

For example, when tossing a coin, getting heads and getting tails at the same time are incompatible events.

Events are called joint if, as a result of a given test, the occurrence of one of them does not exclude the occurrence of the other.


For example, when a card is drawn, the events "a jack is drawn" and "a spade is drawn" are joint events.

Events are called equally possible if there is no reason to believe that one of them occurs more often than another.

For example, the appearances of the individual faces of a die are equally possible events.

Events form a complete group if at least one of them is certain to occur as a result of the test and any two of them are incompatible.

For example, with 10 shots at a target, from 0 to 10 hits are possible; when throwing a die, a number from 1 to 6 can appear. Each of these sets of events forms a complete group.

Events included in a complete group of pairwise incompatible and equally possible events are called outcomes, or elementary events. By the definition of a certain event, the event consisting of the occurrence of at least one event of a complete group, no matter which, is a certain event.

For example, when one die is thrown, the result is a number less than seven. This is an example of a certain event.

A special case of events that form a complete group are opposite events.

Two incompatible events A and Ā (read "not A") are called opposite if, as a result of the test, one of them must necessarily occur.

For example, if a scholarship is awarded only for good and excellent exam grades, then the events "a scholarship is awarded" and "an unsatisfactory or satisfactory grade is received" are opposite.

Event A is said to favor event B if the occurrence of A entails the occurrence of B.

For example, when throwing a die, the appearance of an odd number is favored by the elementary events corresponding to the numbers 1, 3, and 5.

2. Operations on events

Operations on events are similar to operations on sets.

The sum of several events is an event consisting of the occurrence of at least one of them as a result of the test.

The sum of events can be indicated by the signs "+", "∪", "or".

Figure 1 shows the geometric interpretation using Euler-Venn diagrams. The sum of events A + B corresponds to the entire shaded area.

Fig.1

The intersection area of events A and B corresponds to joint events that can happen simultaneously. Likewise, for events A, B, and C there are the joint events A and B; A and C; B and C; A and B and C, which can happen simultaneously.

For example, an urn contains white, red, and blue balls. The following events are possible: A - a white ball is drawn; B - a red ball is drawn; C - a blue ball is drawn. The event B + C means that a colored (non-white) ball is drawn.

The product of several events is an event consisting in the joint occurrence of all these events as a result of the test.

The product of events can be indicated by the signs "×", "∩", "and".

The geometric interpretation of the product of events is presented in Fig. 2.

Fig.2

The product of events A and B is the shaded area where A and B intersect. For three events A, B, and C, it is the area included simultaneously in all three events.

For example, let a card be drawn at random from a deck. Event A - a card of the spades suit is drawn; event B - a jack is drawn. Then the event A×B means that the jack of spades is drawn.

The difference of two events A - B is an event consisting of the outcomes included in A but not included in B.

Figure 3 illustrates the difference of events using Euler-Venn diagrams.

Fig.3

The difference of two events A - B is the shaded area of A without the part that is included in event B. The difference between the product of events A and B and the event C is the area of the joint event A and B without its part shared with C.

For example, when throwing a die, let event A be the appearance of an even number (2, 4, 6) and event B the appearance of a number that is a multiple of 3, i.e. (3, 6). Then the event A - B is the appearance of the numbers (2, 4).
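If events are viewed as sets of elementary outcomes, the sum, product, and difference of events correspond to the union, intersection, and difference of sets. A short sketch in Python, using the die example above:

    # The two events from the die example
    A = {2, 4, 6}   # even numbers
    B = {3, 6}      # multiples of 3

    print(A | B)    # sum A + B: {2, 3, 4, 6}
    print(A & B)    # product A x B: {6}
    print(A - B)    # difference A - B: {2, 4}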

3. Determining the probability of an event

Random events are realized with different degrees of possibility: some happen more often, others less often. To quantify the possibility of an event occurring, the concept of the probability of an event is introduced.

The probability of an event is a number characterizing the degree of possibility of the event occurring when the test is repeated many times.

Probability is denoted by the letter P (from the English word probability). Probability is one of the basic concepts of probability theory, and there are several definitions of this concept.

The classical definition of probability is as follows. If all possible outcomes of a test are known and there is no reason to believe that any one random event would occur more often than the others, i.e. the events are equally possible and incompatible, then the probability of an event can be determined analytically.

The probability P(A) of event A is the ratio of the number m of outcomes favorable to A to the total number n of equally possible incompatible outcomes:

P(A) = m / n.

Properties of probability:

1. The probability of a random event A lies between 0 and 1:

0 ≤ P(A) ≤ 1.

2. The probability of a certain event is 1:

P = n / n = 1.

3. The probability of an impossible event is 0:

P = 0 / n = 0.
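Under the classical definition, a probability is found by counting outcomes. A minimal Python sketch for one throw of a fair die; the helper function is illustrative only:

    from fractions import Fraction

    outcomes = {1, 2, 3, 4, 5, 6}   # equally possible, incompatible outcomes

    def classical_probability(event):
        # ratio of the number of favorable outcomes to the total number of outcomes
        return Fraction(len(event & outcomes), len(outcomes))

    print(classical_probability({2, 4, 6}))   # even number: 1/2
    print(classical_probability(outcomes))    # certain event: 1
    print(classical_probability(set()))       # impossible event: 0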


Algebra of random events, Euler-Venn diagrams.

The sum of events A and B is an event that occurs when either A, or B, or both events occur.

The product of A and B is an event that occurs if both events occur in the experiment.

The event Ā, opposite to event A, is an event that occurs whenever event A does not occur.

A\B (the difference of A and B, i.e. the complement of B within A) – A occurs, but B does not occur.

Classic definition of probability. Combinatorics.

P(A) = m / n – the classical definition of probability, where:

n – the total number of outcomes;

m – the number of outcomes favorable to the occurrence of event A.

Combinatorics is a branch of mathematics that studies the arrangement of objects according to special rules and counts the number of ways of making such arrangements. It took shape in the 17th-18th centuries and can be considered a branch of set theory.
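The counts required by the classical definition are often obtained combinatorially. A brief sketch using Python's standard library; the 6-of-49 lottery is an assumed example:

    from math import comb, factorial, perm
    from fractions import Fraction

    print(factorial(4))   # 24 ways to order 4 distinct objects
    print(perm(10, 3))    # 720 ordered selections of 3 objects out of 10
    print(comb(49, 6))    # 13983816 unordered selections of 6 objects out of 49

    # Probability of guessing all 6 numbers in a hypothetical 6-of-49 lottery
    print(Fraction(1, comb(49, 6)))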

Axiomatic construction of probability theory.

Axiom 1.“non-negativity axiom” P(A)≥0

Axiom 2. “normalization axiom” P(Ω)=1

Axiom 3.“axiom of additivity” If events A and B are incompatible (AB=Ø), then P(A+B)=P(A)+P(B)

Theorem on the probability of a sum of events.

For any events, P(A+B) = P(A) + P(B) – P(AB) (the proof is given in the lecture).
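The addition theorem can be checked directly by counting outcomes. A sketch for one throw of a die, taking A = "an even number" and B = "a multiple of 3" (an illustrative choice):

    from fractions import Fraction

    outcomes = {1, 2, 3, 4, 5, 6}

    def P(event):
        # classical probability on the die outcomes
        return Fraction(len(event), len(outcomes))

    A = {2, 4, 6}
    B = {3, 6}

    # Both sides of P(A + B) = P(A) + P(B) - P(AB) give 2/3
    print(P(A | B))
    print(P(A) + P(B) - P(A & B))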

Conditional probability. Dependent and independent events. Theorems on the probability of events occurring.

P(A|B) – the probability of event A given that event B has already occurred (the conditional probability).

Event A is called independent of event B if the probability of event A does not change depending on whether event B has occurred or not.

Probability multiplication theorem: P(AB) = P(A|B) P(B) = P(B|A) P(A)

Theorem for multiplying the probabilities of independent events: P(AB) = P(A) P(B)

By the definition of conditional probability, P(A|B) = P(AB) / P(B), provided P(B) > 0.
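Conditional probability and the multiplication theorem can be verified numerically on the same die example (A = even number, B = multiple of 3; the choice is illustrative):

    from fractions import Fraction

    outcomes = {1, 2, 3, 4, 5, 6}

    def P(event):
        return Fraction(len(event), len(outcomes))

    A, B = {2, 4, 6}, {3, 6}

    P_A_given_B = P(A & B) / P(B)   # definition of conditional probability
    print(P_A_given_B)              # 1/2
    print(P(A & B))                 # 1/6
    print(P_A_given_B * P(B))       # multiplication theorem also gives 1/6
    print(P(A) * P(B))              # 1/6 as well: here A and B happen to be independent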

Total probability formula.

There are events H1, H2, ..., Hn that are pairwise incompatible and form a complete group. Such events are called hypotheses. Let A be some event. Then A = AH1 + AH2 + ... + AHn (the terms of this sum are pairwise incompatible), and the total probability formula is

P(A) = P(H1)P(A|H1) + P(H2)P(A|H2) + ... + P(Hn)P(A|Hn).

Bayes formula.

If the hypotheses H1, H2, ..., Hn form a complete group and event A has occurred, then

P(Hi|A) = P(Hi)P(A|Hi) / P(A), i = 1, ..., n,

where P(A) is found by the total probability formula.
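A small numerical sketch of the total probability and Bayes formulas; the three hypotheses and all the probabilities below are made up purely for illustration:

    from fractions import Fraction as F

    # Hypothetical complete group of hypotheses: name -> (P(H_i), P(A | H_i))
    hypotheses = {
        "H1": (F(1, 2), F(1, 10)),
        "H2": (F(3, 10), F(1, 5)),
        "H3": (F(1, 5), F(2, 5)),
    }

    # Total probability formula: P(A) = sum over i of P(H_i) * P(A | H_i)
    p_a = sum(p_h * p_a_h for p_h, p_a_h in hypotheses.values())
    print(p_a)   # 19/100

    # Bayes formula: P(H_i | A) = P(H_i) * P(A | H_i) / P(A)
    for name, (p_h, p_a_h) in hypotheses.items():
        print(name, p_h * p_a_h / p_a)   # 5/19, 6/19, 8/19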

Bernoulli scheme. Bernoulli's formula. Most likely number of successes.

Let a finite number n of successive trials be carried out, in each of which some event A can either occur (“success”) or not occur (“failure”), and let these trials satisfy the following conditions:

· Each trial is random with respect to event A, i.e. before the trial it is impossible to say whether A will occur or not;

· Tests are carried out under identical conditions from a probabilistic point of view, i.e. the probability of success in each individual trial is p and does not change from trial to trial;

· The tests are independent, i.e. the outcome of any of them does not affect the outcomes of other tests.

This sequence of tests is called a Bernoulli scheme or binomial scheme, and the tests themselves are called Bernoulli tests.

To calculate the probability Pn(k) that a series of n Bernoulli trials contains exactly k successes, the Bernoulli formula is used:

Pn(k) = C(n, k) p^k q^(n-k), where q = 1 - p (k = 0, 1, 2, ..., n).

The most likely number of successes k0 satisfies np - q ≤ k0 ≤ np + p.
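A minimal Python sketch of the Bernoulli formula; the parameters n = 10 and p = 1/6 (for example, rolling a chosen face of a die) are assumptions made for illustration:

    from math import comb

    def bernoulli(n, k, p):
        # P_n(k) = C(n, k) * p^k * (1 - p)^(n - k)
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 10, 1 / 6
    for k in range(n + 1):
        print(k, round(bernoulli(n, k, p), 4))

    # The probabilities over k = 0..n sum to 1 (up to floating-point rounding)
    print(sum(bernoulli(n, k, p) for k in range(n + 1)))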

10. The concept of a random variable. Discrete random variable, methods of specifying it: distribution series.

A random variable is a quantity that in each trial (each observation) takes one of its many possible values; it is not known in advance which one.

Discrete r.v. – a r.v. whose set of possible values is finite or countable.

Distribution series of a r.v. (probability distribution series) – the correspondence between the possible values of the r.v. and their probabilities. The graph of the distribution series is the distribution polygon – a broken line that connects the points with coordinates (xi, pi).

X | x1 | x2 | x3 | ... | xk
P | p1 | p2 | p3 | ... | pk

Distribution law of a r.v.: pk = P(X = xk)
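A distribution series is just a table pairing each possible value with its probability. A minimal sketch for the number shown by a fair die (an illustrative choice of r.v.):

    from fractions import Fraction

    # Distribution series of X = the number shown by a fair die
    values = [1, 2, 3, 4, 5, 6]
    probs = [Fraction(1, 6)] * 6

    series = dict(zip(values, probs))
    print(series)

    # In any distribution series the probabilities sum to 1
    print(sum(probs))   # 1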

Random events, their classification. The concept of probability.

A random event is an event that, under the conditions of the experiment, may or may not occur; it is not known in advance whether it will happen or not.

Two events are incompatible if the appearance of one of them excludes the appearance of the other in the same experiment.

Probability theory studies the patterns inherent in mass random phenomena. The basic concepts of probability theory were laid down in the correspondence between Pascal and Fermat. These concepts originated from attempts to describe gambling mathematically.
