Optimal system. Optimal control

Optimal system

an automatic control system that ensures the best (optimal) functioning of the controlled object in some definite sense. The object's characteristics and the external disturbing influences may change in an unforeseen way, but, as a rule, within certain restrictions. The best functioning of the control system is characterized by the so-called optimal control criterion (optimality criterion, objective function), a quantity that measures the effectiveness of achieving the control goal and depends on the variation in time or space of the coordinates and parameters of the system. The optimality criterion can be any of various technical and economic indicators of the operation of the object: efficiency, speed, average or maximum deviation of system parameters from specified values, production cost, individual indicators of product quality or a general quality indicator, etc. The optimality criterion can relate to a transient process, to a steady-state process, or to both. A distinction is made between regular and statistical optimality criteria. The former depend on deterministic parameters and on the coordinates of the controlled and controlling systems. The latter are used when the input signals are random functions and/or when it is necessary to take into account random disturbances generated by individual elements of the system. In its mathematical description, the optimality criterion can be either a function of a finite number of parameters and coordinates of the controlled process, which takes an extreme value when the system functions optimally, or a functional of the function describing the control law; in the latter case, the form of this function for which the functional takes an extreme value is determined. For the design of optimal systems, Pontryagin's maximum principle or the theory of dynamic programming is used.

Optimal functioning of complex objects is achieved by using self-adjusting (adaptive) control systems, which are able to change the control algorithm, their characteristics, or their structure automatically during operation in order to maintain the extremal value of the optimality criterion under arbitrarily changing system parameters and operating conditions. Therefore, in the general case, an optimal system consists of two parts: a constant (unchangeable) part, which includes the control object and some elements of the control system, and a variable (changeable) part, which combines the remaining elements. See also Optimal control.

M. M. Maisel.

Wikipedia

Optimal system

An optimal system is understood to be the system that is best in a certain sense.

In order to find the best (optimal) among the possible options of the system, some criterion is needed that characterizes the effectiveness of achieving the control goal. This criterion must be expressed in the form of a strict mathematical indicator - an optimality criterion that would unambiguously characterize any of the possible options for implementing the system.
The number of criteria may vary.

In a single-criterion optimization problem, each variant of the system can be associated with a certain value of a physical quantity, a number. The best variant of the system is the one that yields, depending on the specific task and the adopted optimality criterion, the minimum or maximum value of the criterion. Thus, the control goal can be regarded as achieving the extremum of the optimality criterion.
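
As a minimal illustration of this selection rule (the candidate variants and the criterion function below are hypothetical, not taken from the text), choosing the best variant under a single criterion reduces to evaluating the criterion for each admissible variant and taking its extremum:

```python
# Single-criterion selection: evaluate a scalar optimality criterion for
# each admissible variant and take the extremum.  The variants and the
# criterion are illustrative placeholders.
def settling_time(gain):
    """Hypothetical criterion: 2% settling time of a first-order loop."""
    return 4.0 / gain                 # ~4 time constants for 1/(p/gain + 1)

candidate_gains = [0.5, 1.0, 2.0, 5.0, 10.0]

# Minimum settling time -> the variant "optimal in performance" here.
best = min(candidate_gains, key=settling_time)
print(f"best gain: {best}, settling time: {settling_time(best):.2f} s")
```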

In multicriteria optimization problems it is impossible to choose an absolutely best variant of the system, since, when moving from one variant to another, the values of some criteria improve while, as a rule, the values of others worsen. Such a set of criteria is called contradictory, and the finally chosen solution will always be a compromise.
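
A sketch of this situation under invented criterion values: since no variant minimizes both criteria at once, only the set of non-dominated (Pareto-optimal) compromises can be computed mechanically, and the final choice among them remains informal.

```python
# Pareto filtering for two conflicting criteria to be minimized,
# e.g. (cost, error).  The numbers are illustrative placeholders.
variants = {
    "A": (1.0, 9.0),   # cheap but inaccurate
    "B": (3.0, 4.0),
    "C": (4.0, 4.5),   # dominated by B (worse in both criteria)
    "D": (8.0, 1.0),   # accurate but expensive
}

def dominates(p, q):
    """p dominates q: no worse in every criterion, better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto = {name: crit for name, crit in variants.items()
          if not any(dominates(other, crit)
                     for o, other in variants.items() if o != name)}
print(pareto)          # A, B and D survive; C is dominated
```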

Automatic control systems are usually designed based on the requirements to ensure certain quality indicators. In many cases, the necessary increase in dynamic accuracy and improvement of transient processes of automatic systems is achieved with the help of corrective devices.

Particularly broad opportunities for improving quality indicators are provided by the introduction into the circuit of an automatic system of open-loop compensation channels and differential connections, synthesized from one or another condition of error invariance with respect to the driving or disturbing influences. However, the effect of correction devices, open compensation channels and equivalent differential connections on the quality indicators of the automatic system depends on the level of signal limitation by nonlinear elements of the system. The output signals of differentiating devices, usually short in duration and significant in amplitude, are limited by the elements of the system and do not lead to an improvement in the quality indicators of the automatic system, in particular its speed. The best results in solving the problem of improving the quality indicators of automatic systems in the presence of signal limitations are obtained by the so-called optimal control.

In a broad sense, the word "optimal" means the best in the sense of some criterion of efficiency. With this interpretation, any scientifically based technical and economic system is optimal, since when a system is chosen it is implied that it is in some respect better than the others. The criteria by which the choice is made (optimality criteria) may differ: the quality of the dynamics of the control processes, system reliability, energy consumption, weight and dimensions, cost, etc., or a combination of these criteria with certain weighting coefficients.

The problem of synthesizing optimal systems was strictly formulated relatively recently, when the concept of an optimality criterion was defined. Depending on the control goal, various technical or economic indicators of the controlled process can be chosen as the optimality criterion. Optimal automatic systems ensure not merely a slight improvement in one or another technical and economic quality indicator, but the attainment of its minimum or maximum possible value.

Optimal control is control that is carried out in the best way according to certain indicators. Systems that implement optimal control are called optimal. The organization of optimal control is based on identifying and realizing the maximum capabilities of the system.

When developing optimal control systems, one of the most important steps is the formulation of the optimality criterion, understood as the main indicator that defines the optimization problem. It is with respect to this criterion that the optimal system must function in the best way.

Optimality criteria include a variety of technical and technical-economic indicators that express technical and economic benefits or, conversely, losses. Because of the contradictory requirements imposed on automatic control systems, choosing an optimality criterion usually turns into a complex problem whose solution is itself debatable. For example, optimizing an automatic system with respect to reliability criteria may increase the cost and complexity of the system. On the other hand, simplifying the system will degrade a number of its other indicators. In addition, not every optimal solution synthesized theoretically can be implemented in practice at the achieved level of technology.

The theory of automatic control uses functionals that characterize individual quality indicators. Therefore, optimal automatic systems are most often synthesized as optimal according to one main criterion, while the remaining indicators that determine the quality of functioning of the automatic system are confined to a range of acceptable values. This simplifies the task of finding optimal solutions when developing optimal systems and makes it more specific.

At the same time, the task of choosing among competing system variants becomes more complicated, since they are compared according to different criteria, and the evaluation of a system has no unambiguous answer. Indeed, without a thorough analysis of many contradictory, often unformalized factors it is difficult to answer, for example, the question of which system is better: the more reliable one or the cheaper one?

If the optimality criterion expresses technical and economic losses (automatic system errors, transient-process time, energy consumption, funds, cost, etc.), then the optimal control is the one that provides the minimum of the optimality criterion. If it expresses profitability (efficiency, productivity, profit, missile flight range, etc.), then the optimal control should provide the maximum of the optimality criterion.

Consider the problem of determining the optimal automatic system, in particular the synthesis of optimal system parameters when a command input and interference, which are stationary random signals, arrive at its input; the root-mean-square error is taken as the optimality criterion. The conditions for increasing the accuracy of reproduction of the useful signal (the reference input) and for suppressing the interference are contradictory, and therefore the problem arises of choosing (optimal) system parameters at which the mean square error takes its smallest value.
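
A numerical sketch of this trade-off under simplified, invented assumptions: a first-order discrete tracker y[k+1] = y[k] + a*(s[k] + n[k] - y[k]) follows a slowly varying useful signal s corrupted by stationary noise n; a larger gain a tracks faster but passes more noise, so the mean square error has an interior minimum over a.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(5000)
s = np.sin(2 * np.pi * t / 500.0)        # slowly varying useful signal
n = 0.5 * rng.standard_normal(t.size)    # stationary measurement noise

def mean_square_error(a):
    """MSE of the tracker y[k+1] = y[k] + a*(s[k] + n[k] - y[k]) w.r.t. s."""
    y = np.zeros(t.size)
    for k in range(t.size - 1):
        y[k + 1] = y[k] + a * (s[k] + n[k] - y[k])
    return float(np.mean((y - s) ** 2))

gains = np.linspace(0.01, 0.9, 60)
errors = [mean_square_error(a) for a in gains]
a_best = gains[int(np.argmin(errors))]
print(f"optimal gain a* ~= {a_best:.3f}, minimal MSE = {min(errors):.4f}")
```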

Synthesis of an optimal system using the mean-square optimality criterion is a particular problem. General methods for synthesizing optimal systems are based on the calculus of variations. However, the classical methods of the calculus of variations turn out, in many cases, to be unsuitable for solving modern practical problems that require restrictions to be taken into account. The most convenient methods for synthesizing optimal automatic control systems are Bellman's dynamic programming method and Pontryagin's maximum principle.

In the general process of designing technical systems, two types of problems can be distinguished.
1) Design of a control system aimed at achieving the assigned task (formation of trajectories, modes, selection of control methods that implement the trajectories, etc.). This range of tasks can be called motion design.
2) Design of structural and strength schemes (selection of geometric, aerodynamic, structural and other parameters) ensuring the implementation of the general characteristics and of specific operating modes. This range of design tasks is associated with the selection of the resources necessary to implement the assigned tasks.

Designing motions (changes in technological parameters) is closely related to the group of problems of the second type, since the information obtained when designing motions is the initial, and largely determining, information for solving those problems. But even in cases where a ready-made technical system exists (i.e., the available resources are fixed), optimization techniques can be applied in the course of its modification.

Problems of the first type are currently solved most effectively and rigorously on the basis of the general methods of the mathematical theory of optimal control processes. The significance of the mathematical theory of optimal control processes lies in the fact that it provides a unified methodology for solving a very wide range of optimal design and control problems, overcomes the inertia and lack of generality of earlier ad hoc methods, and makes use of valuable results and methods obtained in related fields.

The theory of optimal processes makes it possible to solve a wide range of practical problems in a fairly general formulation, taking into account most of the technical restrictions imposed on the feasibility of technological processes. The role of methods from the theory of optimal processes has especially increased in recent years due to the widespread introduction of computers into the design process.

Thus, along with the problem of improving various quality indicators of an automatic system, the problem arises of constructing optimal automatic systems in which the extreme value of one or another technical and economic quality indicator is achieved.

The development and implementation of optimal automatic control systems helps to increase the efficiency of use of production units, increase labor productivity, improve product quality, save energy, fuel, raw materials, etc.

Optimal systems are classified according to various features. Let us note some of them.
Depending on the implemented optimality criterion, the following are distinguished:
1) systems that are optimal in performance. They implement the criterion of minimum time of transient processes;
2) systems that are optimal in accuracy. They are formed according to the criterion of the minimum deviation of variables during transient processes or according to the criterion of the minimum mean square error;
3) systems that are optimal in terms of fuel consumption, energy, etc., implementing the criterion of minimum consumption;
4) systems that are optimal under invariance conditions. They are synthesized according to the criterion of independence of output variables from external disturbances or from other variables;
5) optimal extremal systems, which implement the criterion of minimum deviation of the quality indicator from its extreme value.

Depending on the characteristics of objects, optimal systems are divided into:
1) linear systems;
2) nonlinear systems;
3) continuous systems;
4) discrete systems;
5) additive systems;
6) parametric systems.

These features, except for the last two, need no explanation. In additive systems, the influences acting on an object do not change its characteristics. If the influences change the coefficients of the object's equations, then such systems are called parametric.

Depending on the type of optimality criterion, optimal systems are divided into the following:
1) uniformly optimal, in which each individual process proceeds optimally;
2) statistically optimal, implementing an optimality criterion that is statistical in nature because of random influences on the system. In these systems, the best behavior is achieved not in every individual process, but on average; statistically optimal systems can be said to be optimal on the average;
3) minimax optimal, which are synthesized from the condition of a minimax criterion that provides the best worst result compared to a similar worst result in any other automatic system.
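
A toy sketch of the minimax choice (the overshoot table is an invented example): for each system variant we take its worst overshoot over a set of operating conditions, then choose the variant whose worst case is best.

```python
# Minimax selection: choose the variant whose worst-case overshoot over
# a set of operating conditions is the best.  The table is invented.
overshoot = {              # variant -> overshoot (%) under conditions 1..3
    "sys1": [5.0, 12.0, 30.0],
    "sys2": [9.0, 11.0, 14.0],
    "sys3": [2.0, 25.0, 18.0],
}

choice = min(overshoot, key=lambda v: max(overshoot[v]))
print(choice, max(overshoot[choice]))   # sys2, worst case 14.0%
```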

Based on the degree of completeness of information about an object, optimal systems are divided into systems with complete and incomplete information. Information about an object includes information:
1) about the relationship between the input and output quantities of the object;
2) about the condition of the object;
3) about the driving influence that determines the required operating mode of the system;
4) about the control goal, i.e., the functional expressing the optimality criterion;
5) about the nature of the disturbance.

Information about an object is in fact always incomplete, but in many cases this does not have a significant impact on the functioning of the system according to the selected optimality criterion. In some cases, the incompleteness of information is so significant that the use of statistical methods is required when solving optimal control problems.

Depending on the completeness of the information from the control object, the optimality criterion can be chosen to be "rigid" (with sufficiently complete information) or "adapting", i.e., changing as the information changes. On this basis, optimal systems are divided into systems with rigid tuning and adaptive systems. Adaptive systems include extremal, self-adjusting, and learning systems. These systems most fully meet modern requirements for optimal control systems.

The solution to the problem of synthesizing an optimal system is to develop a control system that meets the specified requirements, i.e., to create a system that implements the selected optimality criterion. Depending on the amount of information about the structure of the automatic control system, the synthesis problem is posed in one of the following two formulations.

The first formulation covers cases when the structure of the automatic system is known. In such cases, the object and the controller can be described by the corresponding transfer functions, and the synthesis problem reduces to determining the optimal values of the numerical parameters of all elements of the system, i.e., those parameters that ensure the implementation of the selected optimality criterion.

In the second formulation, the synthesis problem is posed with the structure of the system unknown. In this case, it is necessary to determine a structure and system parameters that will yield a system optimal according to the accepted quality criterion. In engineering practice, the synthesis problem in this formulation is rare. Most often, the control object is either specified as a physical device or described mathematically, and the synthesis problem reduces to the synthesis of an optimal controller. It should be emphasized that in this case a systems approach to the synthesis of the optimal control system is necessary. The essence of this approach is that, when synthesizing the controller, the entire system (controller and object) is considered as a single whole.

At the initial stage of synthesizing an optimal controller, the task comes down to its analytical design, i.e., to determining its mathematical description. In this case, the same mathematical model of the controller can be implemented by different physical devices. The choice of a specific physical implementation of an analytically determined controller is carried out taking into account the operating conditions of a specific automatic control system. Thus, the problem of synthesizing an optimal controller is ambiguous and can be solved in various ways.

When synthesizing an optimal control system, it is very important to create a model of the object that is as adequate as possible to the real object. In control theory, as in other modern fields of science, the main types of object models are mathematical models of the equations of statics and dynamics of objects.

When solving problems of synthesizing an optimal system, the unified mathematical model of the control objects is usually a model in the form of state equations. The state of an automatic control system at each moment in time is understood as the minimal set of variables (state variables) that contains an amount of information sufficient to determine the coordinates of the system in its current and future states. The initial equations of the object are usually nonlinear. To reduce them to the form of state equations, methods of linear transformation of the original equations are widely used.
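
As an illustration of passing from a nonlinear object model to state equations and then linearizing (the pendulum here is a stand-in plant; the text does not fix a specific object):

```python
import numpy as np

# Nonlinear object: a pendulum, theta'' = -(g/l)*sin(theta) + u/(m*l**2).
# State vector x = (x1, x2) = (theta, theta'); the control u is a torque.
g, l, m = 9.81, 1.0, 1.0

def f(x, u):
    """Nonlinear state equations dx/dt = f(x, u)."""
    x1, x2 = x
    return np.array([x2, -(g / l) * np.sin(x1) + u / (m * l**2)])

# Linearization about the equilibrium x = 0, u = 0 (Jacobians of f):
A = np.array([[0.0, 1.0],
              [-(g / l), 0.0]])       # df/dx at the equilibrium
B = np.array([[0.0],
              [1.0 / (m * l**2)]])    # df/du at the equilibrium

# The linear state model dx/dt = A x + B u is the usual starting point
# for the analytical design of an optimal controller.
print(A)
print(B)
```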

The statement of the main optimal control problem in the form of a time program, for an automatic system with an optimality criterion and boundary conditions, is formulated as follows.

Among all the program controls u = u(t) and control parameters a that are admissible on the segment [t0, t1] and that transfer the point (t0, x0) to the point (t1, x1), find those for which the functional I[u, a], evaluated on the solutions of the system of equations, takes its smallest (largest) value while the imposed constraints are satisfied.

The control u(t) that solves this problem is called the optimal (program) control, and the vector a is called the optimal parameter. If the pair (u*(t), a*) delivers the absolute minimum of the functional I on the solutions of the system, then the relation I[u*, a*] ≤ I[u, a] holds for every admissible pair (u, a).

The main problem of optimal coordinate control is known in the theory of optimal processes as the problem of synthesizing the optimal control law, and in some problems as the problem of the optimal law of behavior.

The problem of synthesizing an optimal control law for a system with a criterion and boundary conditions, where for simplicity it is assumed that the functions f0, f, h, g do not depend on the vector a, is formulated as follows.

Among all admissible control laws v(x, t), find one such that, for any initial conditions (t0, x0), substituting this law carries out the specified transition and the quality criterion I[u] takes its smallest (largest) value.

The trajectory of the automatic system corresponding to the optimal control u*(t) or the optimal law v*(x, t) is called the optimal trajectory. The set of optimal trajectories x*(t) and optimal control u*(t) forms an optimal controlled process (x*(t), u*(t)).

Since the optimal control law v*(x, t) has the form of a feedback control law, it remains optimal for any values of the initial conditions (t0, x0) and any coordinates x. In contrast to the law v*(x, t), the program optimal control u*(t) is optimal only for those initial conditions for which it was calculated. When the initial conditions change, the function u*(t) will also change. This is an important difference, from the point of view of the practical implementation of an automatic control system, between the optimal control law v*(x, t) and the program optimal control u*(t), since the choice of initial conditions in practice can never be made absolutely accurately.
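
The distinction can be made concrete with a linear-quadratic sketch (the plant matrices, weights, and horizon are arbitrary illustrative choices, not from the text): the backward Riccati recursion of dynamic programming yields a feedback law u = -Kx that is optimal from any initial state, whereas a program control u*(t) would have to be recomputed whenever the initial state changes.

```python
import numpy as np

# Finite-horizon discrete LQR for x[k+1] = A x[k] + B u[k] with cost
# sum(x'Qx + u'Ru); all matrices and the horizon are illustrative.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # discretized double integrator
B = np.array([[0.005],
              [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 50

# Backward Riccati recursion (dynamic programming) -> feedback gains K[k].
P, gains = Q.copy(), []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                        # gains[k] is applied at step k

# The same feedback law u = -K x is optimal from *any* initial state,
# unlike a program control computed for one particular x0:
for x0 in (np.array([1.0, 0.0]), np.array([-2.0, 1.0])):
    x = x0
    for k in range(N):
        x = A @ x - B @ (gains[k] @ x)
    print(x0, "->", np.round(x, 4))    # both are driven toward the origin
```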

Every part of the optimal trajectory (optimal control) is also, in turn, an optimal trajectory (optimal control). This property is mathematically formulated as follows.

Let u*(t), t0 ≤ t ≤ t1, be the optimal control for the chosen functional I[u], corresponding to the transition from the state (t0, x0) to the state (t1, x1) along the optimal trajectory x*(t). The numbers t0, t1 and the vector x0 are fixed, while the vector x1 is, generally speaking, free. On the optimal trajectory x*(t), choose points x*(τ0) and x*(τ1) corresponding to intermediate times t = τ0, t = τ1. Then the control u*(t) on the segment [τ0, τ1] is the optimal control corresponding to the transition from the state x*(τ0) to the state x*(τ1), and the arc between them is an optimal trajectory.

Thus, if the initial state of the system is x*(τ0) at the starting time t = τ0, then, regardless of how the system arrived at this state, its optimal subsequent motion will be the arc of the trajectory x*(t), τ0 ≤ t ≤ t1, which is part of the optimal trajectory between the points (t0, x0) and (t1, x1). This condition is a necessary and sufficient property of the optimality of the process and serves as the basis of dynamic programming.

The mathematical description of the problem of transferring a control object (process) from one state to another is characterized by n phase coordinates x1, x2, x3, ..., xn. In this case, r control actions u1, u2, u3, ..., ur can be applied to the control object.

It is convenient to consider the control actions u1(t), u2(t), u3(t), ..., ur(t) as the coordinates of a certain vector u = (u1, u2, u3, ..., ur), called the control action vector. The phase coordinates (state variables) of the control object x1, x2, x3, ..., xn can likewise be considered as the coordinates of a vector, or of a point x = (x1, x2, x3, ..., xn) in the n-dimensional state space. This point is called the phase state of the object, and the n-dimensional space in which phase states are depicted as points is called the phase space (state space) of the object under consideration. Using vector notation, the controlled object can be depicted as shown in the figure. Under the influence of the control action u = (u1, u2, u3, ..., ur), the phase point x = (x1, x2, x3, ..., xn) moves, describing a certain line in the phase space, called the phase trajectory of the considered motion of the control object.

Knowing the control action u(t) = (u1(t), u2(t), u3(t), ..., ur(t)), in the absence of disturbances it is possible to determine the motion of the control object unambiguously for t > t0, if its initial state at t = t0 is known. If the control u(t) is changed, the point will move along a different trajectory, i.e., for different controls we obtain different trajectories emanating from the same point. Therefore, the transition of an object from the initial phase state xH to the final state xK can be carried out along different phase trajectories, depending on the control. Among this set of trajectories there is one that is best in a certain sense, that is, an optimal trajectory. For example, if the task is to minimize fuel consumption over an interval of the locomotive's movement, then the choice of the control and of the corresponding trajectory should be approached from this point of view. The specific fuel consumption g depends on the developed thrust, i.e., on the control action u(t), so that g = g(u(t)). The optimality criterion is usually presented in the form of some functional.
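
A small sketch of evaluating such an integral criterion numerically (the consumption curve g(u) and the two control programs are invented for illustration): each admissible control yields a number I, and controls are compared by that number.

```python
import numpy as np

# Evaluate the integral criterion I = integral of g(u(t)) dt by the
# trapezoid rule; g(u) and both control programs are invented.
def g(u):
    return 1.0 + 0.5 * u**2              # consumption grows with thrust

def criterion(u, t):
    gu = g(u)
    return float(np.sum(0.5 * (gu[1:] + gu[:-1]) * np.diff(t)))

t = np.linspace(0.0, 100.0, 1001)        # time grid, s
u_pull_coast = np.where(t < 20.0, 1.0, 0.2)   # hard pull, then coast
u_steady = np.full_like(t, 0.36)              # steady moderate thrust

for name, u in (("pull-coast", u_pull_coast), ("steady", u_steady)):
    print(f"{name}: I = {criterion(u, t):.1f}")
```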

An important step in formulating and solving a general control problem is the choice of optimality criterion. This choice is an informal act; it cannot be prescribed by any theory, but is entirely determined by the content of the task. In some cases, the formal expression of the understanding of the optimality of a system allows for several equivalent (or almost equivalent) formulations.

In such cases, the success and simplicity of the resulting solution are largely determined by the chosen form of the optimality criterion (provided that in all cases it sufficiently fully expresses the requirements imposed on the system). After a mathematical model of the control process has been constructed, its further study and optimization are carried out by mathematical methods. The optimal behavior or state of the automatic system is ensured when the functional reaches its extremum, I = extr, a maximum or a minimum depending on the physical meaning of the variables.

In the practice of developing and researching dynamic systems, two tasks are most often encountered:
1) synthesis of a system that is optimal in terms of performance;
2) synthesis of a system that is optimal in accuracy.

In the first case, it is necessary to ensure the minimum duration of the transient process; in the second, the minimum of the root-mean-square error (the deviation of the coordinate Δyi(t) from the specified value) under given or random influences.

In this context, a functional can be defined as a quantity whose arguments are themselves functions of the variables and which is tied to the optimality criteria. The total fuel consumption of interest to us, the main quality indicator of locomotive motion control systems in this case, is determined by an integral functional.

The integral functional characterizing the main indicator of the quality of the automatic system (in the example under consideration, fuel consumption) is called the optimality criterion. Each control u(t), and therefore the trajectory of the locomotive, has its own numerical value of the optimality criterion. The problem arises of choosing such control u(t) and motion trajectory x(t), at which the minimum value of the optimality criterion is achieved.

Optimality criteria are usually used whose value is determined not by the current state of the object (in the example under consideration, the specific fuel consumption), but by its variation over the entire control process. Therefore, to determine the optimality criterion it is necessary, as in the given example, to integrate some function whose value in the general case depends on the current values of the phase coordinates x of the object and of the control action u, i.e., such an optimality criterion is an integral functional of the form

I = ∫ f0(x(t), u(t)) dt,   t ∈ [t0, t1].

In cases where the phase coordinates of an object represent stationary random functions, the optimality criterion is an integral functional not in the time domain, but in the frequency domain. Such optimality criteria are used when solving the problem of optimizing systems to minimize the error variance. In the simplest cases, the optimality criterion may not be an integral functional, but simply a function.

The theory of automatic control also uses so-called minimax optimality criteria, which characterize the best performance of the system under the worst operating conditions. An example of the use of a minimax criterion is the selection, on its basis, of the variant of an automatic control system that has the minimum value of maximum overshoot. Any optimality criterion is implemented in the presence of restrictions imposed on the variables and on the indicators of control quality. In automatic control systems, the restrictions imposed on the control coordinates can be divided into natural and conditional.

In many cases, conflicting requirements are imposed on the automatic system (for example, requirements of minimum fuel consumption and maximum train speed). When a control is chosen that satisfies one requirement (the criterion of minimum fuel consumption), the other requirements (maximum speed) will not be satisfied. Therefore, of all the requirements, one is singled out as the main one, which must be satisfied in the best way, while the other requirements are taken into account in the form of restrictions on their values. For example, when the requirement of minimum fuel consumption is met, the minimum travel speed is bounded. If there are several equally important quality indicators that cannot be combined into a common composite indicator, then selecting the optimal controls corresponding to each of these indicators separately, while restricting the rest, provides solution variants that can (during design) help in choosing the optimal compromise variant.

When choosing a control action u, it should be borne in mind that it cannot take arbitrary values, since real restrictions are imposed on it, determined by technical specifications. For example, the value of the control voltage supplied to the electric motor is limited by its limit value, determined by the operating conditions of the electric motor.

Optimal control can be achieved if the object is controllable, i.e., there is at least one admissible control that transfers the object from the initial state to the specified final state. The requirement to minimize the optimality criterion can be formally replaced by the requirement to minimize the final value of one of the coordinates of the control object.

If the boundary conditions in an optimal control problem are specified by the initial and final points of the trajectory, then we have a problem with fixed ends. In the case where one or both boundary conditions are specified not by a point but by a finite region, or are not specified at all, we have a problem with free ends or with one free end. An example of a problem with one free end is the problem of eliminating a deviation in an automatic control system caused by an abrupt change in the reference or disturbing influence.

An important special case of optimal control is the problem of optimal performance. Among all admissible controls u(t), under the influence of which the control object transitions from the initial phase state xH to the given final state xK, find one for which this transition is carried out in the shortest time.
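
For the classical double integrator x'' = u with |u| <= 1 (a standard textbook instance, not taken from this text), the time-optimal control is known in closed form: bang-bang control with a single switching curve. The sketch below simulates that law; the step size and stopping tolerances are arbitrary.

```python
# Time-optimal ("bang-bang") feedback for the double integrator
# x1' = x2, x2' = u, |u| <= 1, driving the state to the origin.
# Classical switching law: u = -sign(x1 + x2*|x2|/2).
def sign(v):
    return (v > 0) - (v < 0)

def u_opt(x1, x2, eps=1e-9):
    s = x1 + 0.5 * x2 * abs(x2)       # switching function
    if abs(s) > eps:
        return -sign(s)
    return -sign(x2)                  # already on the switching curve

x1, x2, dt, t = 2.0, 0.0, 1e-3, 0.0   # initial state (2, 0); Euler step
while (abs(x1) > 1e-2 or abs(x2) > 1e-2) and t < 20.0:
    u = u_opt(x1, x2)
    x1, x2 = x1 + x2 * dt, x2 + u * dt
    t += dt

print(f"reached the origin at t ~= {t:.2f} s")  # theory: 2*sqrt(2) ~= 2.83 s
```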

The theory of optimal processes is the basis of a unified methodology for designing optimal movements, technical, economic and information systems. As a result of applying the methods of the theory of optimal processes to the problems of designing various systems, the following can be obtained:
1) time programs for changing the control actions that are optimal according to one or another criterion, and optimal values of constant control (design, tuning) parameters, taking into account various kinds of restrictions on their values;
2) optimal trajectories and modes, taking into account restrictions on the region where they may lie;
3) optimal control laws in the form of feedback that determine the structure of the control system loop (solution to the control synthesis problem);
4) limit values for a number of characteristics or other quality criteria, which can then be used as a standard for comparison with other systems;
5) solving boundary value problems of getting from one point of phase space to another, in particular, the problem of getting into a given area;
6) optimal strategies for getting into a certain moving area.

Methods for solving optimal control problems largely reduce to direct search: the process is computed repeatedly while the control action is varied.

The complexity of the problems of optimal control theory required a broader mathematical base for its construction. This theory uses the calculus of variations, the theory of differential equations, and matrix theory. The development of optimal control on this basis led to a revision of many sections of the theory of automatic control, and therefore the theory of optimal control is sometimes called modern control theory. Although this exaggerates the role of just one of its branches, the development of the theory of automatic control in recent decades has largely been determined by the development of this branch.

To date, a mathematical theory of optimal control has been constructed. On its basis, methods for constructing systems that are optimal in terms of speed and procedures for the analytical design of optimal regulators have been developed. Analytical design of controllers together with the theory of optimal observers (optimal filters) form a set of methods that are widely used in the design of modern complex control systems.

The initial information for solving optimal control problems is contained in the problem statement. The management task can be formulated in meaningful (informal) terms, which are often somewhat vague. To apply mathematical methods, a clear and rigorous formulation of problems is required, which would eliminate possible uncertainties and ambiguities and at the same time make the problem mathematically correct. For this purpose, the general problem requires an adequate mathematical formulation, called a mathematical model of the optimization problem.

A mathematical model is a fairly complete mathematical description of the dynamic system and the control process within the chosen degree of approximation and detail. A mathematical model maps the original problem onto a certain mathematical scheme and, ultimately, onto a certain system of numbers. On the one hand, it explicitly indicates (lists) all the information without which it is impossible to begin an analytical or numerical study of the problem; on the other hand, it includes the additional information that follows from the essence of the problem and reflects particular requirements on its characteristics.

A complete mathematical model of the general control optimization problem consists of a number of partial models:
the controlled motion process;
the available resources and technical constraints;
the quality indicator of the control process;
the control actions.

Thus, a mathematical model of a general control problem is characterized by a set of certain mathematical relationships between its elements (differential equations, constraints such as equalities and inequalities, quality functions, initial and boundary conditions, etc.). In the theory of optimal control, general conditions are established that the elements of the mathematical model must satisfy in order for the corresponding mathematical optimization problem to be:
clearly defined;
meaningful, that is, not containing conditions that lead to the absence of a solution.

Note that the formulation of the problem and its mathematical model do not remain unchanged during the research process but interact with each other. Typically, the initial formulation and its mathematical model undergo significant changes by the end of the study. Thus, the construction of an adequate mathematical model resembles an iterative process, in the course of which both the formulation of the general problem itself and the formulation of the mathematical model are refined. It is important to emphasize that for the same problem the mathematical model may not be unique (different coordinate systems, etc.). Therefore, it is necessary to search for the variant of the mathematical model for which the solution and analysis of the problem are simplest.

The following mathematical methods are widely used in optimal control theory:
- dynamic programming;
- maximum principle;
- calculus of variations;
- mathematical programming.

Each of the listed methods has its own characteristics and, therefore, its own area of application.

The dynamic programming method has great potential. However, for systems of high order (above the fourth), the use of the method is very difficult. With several control variables, implementing the dynamic programming method on a computer requires amounts of memory that sometimes exceed the capabilities of modern machines.

The maximum principle makes it relatively easy to take into account restrictions on the control actions applied to the control object. The method is most effective in synthesizing systems that are optimal in performance. However, implementing the method, even with a computer, involves significant difficulties.

The calculus of variations is used in the absence of restrictions on state variables and control variables. Obtaining a numerical solution based on calculus of variations methods is difficult. The method is used, as a rule, for some very simple cases.

Mathematical programming methods (linear, nonlinear, etc.) are widely used to solve optimal control problems in both automatic and automated systems. The general idea of ​​the methods is to find the extremum of a function in the space of many variables under restrictions in the form of a system of equalities and inequalities. The methods make it possible to find numerical solutions to a wide range of optimal control problems. The advantages of mathematical programming methods are the ability to relatively easily take into account restrictions on controls and state variables, as well as generally acceptable memory requirements.
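
A minimal example in the spirit of these methods, using SciPy's linprog routine (the objective and constraints are invented): find the extremum of a linear function under inequality constraints. linprog minimizes by convention, so a maximization objective is negated.

```python
from scipy.optimize import linprog

# Maximize 3*u1 + 2*u2 subject to resource-type constraints
#   u1 + u2 <= 4,  u1 + 3*u2 <= 6,  u1, u2 >= 0.
res = linprog(c=[-3.0, -2.0],            # negated: linprog minimizes
              A_ub=[[1.0, 1.0],
                    [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0, None), (0, None)])

print(res.x, -res.fun)   # optimal point and maximal objective value
```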

Bellman's dynamic programming method is based on solving variational problems according to the principle that a section of the optimal trajectory from any intermediate point to the end point is itself an optimal trajectory between these points.

The essence of the dynamic programming method can be explained by the following example. Suppose we need to move some object from a starting point to an end point. To do this, n steps must be taken, each of which has several possible options. From the set of possible options at each step, the one with the extremal value of the functional is selected. This procedure is repeated at each optimization step. Ultimately, we obtain the optimal trajectory of the transition from the initial state to the final state, subject to the optimization conditions.

Suppose, for example, that it is required to select the operating mode of a locomotive passing through given points for which the minimum of fuel consumption or of travel time is achieved. The optimal solution could be found by searching through the possible options on a computer, but for large values of n and l, which is the case when solving most real problems, this would require an extremely large amount of computation. Solving this problem is simplified by using the dynamic programming method.

To formulate the dynamic programming problem mathematically, we assume that the steps in solving the problem represent fixed time intervals, i.e., that time quantization takes place. It is required to find, taking into account a number of restrictions, the control law u[k], k = 0, 1, ..., n - 1, which transfers the object from the point x[0] of the phase space to the point x[n], provided that the minimum of the optimality criterion

I = f0(x[0], u[0]) + f0(x[1], u[1]) + ... + f0(x[n - 1], u[n - 1])

is ensured.

Thanks to this simplification, the dynamic programming method makes it possible to solve optimal control problems that cannot be solved by direct optimization of the original functional using classical methods of the calculus of variations. The dynamic programming method is essentially a method of constructing a program for the numerical solution of a problem on digital computers. Only in the simplest cases does this method allow one to obtain an analytical expression for the desired solution and to study it. Using the dynamic programming method, it is possible to solve not only optimal control problems but also multi-step optimization problems from a wide variety of fields of technology.

The method is widely used to study optimal control in both dynamic (technical) and economic systems. To implement the dynamic programming method, connections in the system between output variables, controls and optimality criteria can be specified both in the form of analytical dependencies and in the form of tables of numerical data, experimental graphs, etc.
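
A sketch of the backward (Bellman) recursion just described; the quantized states, admissible controls, stage cost f0, and dynamics below are all invented for illustration. V[x] is the minimal cost-to-go from state x at the current step.

```python
import math

# Backward dynamic programming over quantized time and state.
n_steps = 5
states = range(-3, 4)                 # quantized phase coordinate
controls = (-1, 0, 1)                 # admissible control values

def f0(x, u):
    return x * x + 2 * abs(u)         # made-up stage cost

def step(x, u):
    return max(-3, min(3, x + u))     # quantized dynamics: x[k+1]

# Terminal cost: 0 at the required final state x = 0, "infinite" elsewhere.
V = {x: (0.0 if x == 0 else math.inf) for x in states}
policy = []                           # policy[k][x] = optimal u at step k
for k in reversed(range(n_steps)):
    Vk, Pk = {}, {}
    for x in states:
        cost, u_best = min(((f0(x, u) + V[step(x, u)], u) for u in controls),
                           key=lambda cu: cu[0])
        Vk[x], Pk[x] = cost, u_best
    V = Vk
    policy.insert(0, Pk)

# Recover the optimal control program from the initial state x[0] = 3:
x, program = 3, []
for k in range(n_steps):
    u = policy[k][x]
    program.append(u)
    x = step(x, u)
print(program, "final state:", x)     # e.g. [-1, -1, -1, 0, 0], 0
```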

Pontryagin's maximum principle can be explained using the example of the maximum-performance (time-optimal) problem. Let it be required to transfer the representing point from an initial position in the phase space to a final position in minimum time. For each point of the phase space there is an optimal phase trajectory and a corresponding minimum time of transition to the final point. Around the final point one can construct isochrone surfaces, the geometric locus of points with the same minimum time of transition to this point. The optimal trajectory from the starting point to the end point should ideally coincide with the normals to the isochrones (motion along an isochrone consumes time without reducing the time remaining to the end point). In practice, the restrictions imposed on the coordinates of the object do not always allow the ideal, time-optimal trajectory to be realized. Therefore, the optimal trajectory will be the one that is as close as possible, as far as the restrictions allow, to the normals to the isochrones. Mathematically, this condition means that throughout the entire trajectory the scalar product of the velocity vector V of the representing point and the vector f, opposite in direction to the gradient of the transition time to the end point, must be maximal:

H = f1V1 + f2V2 + ... + fnVn → max,

where fi, Vi are the coordinates of the corresponding vectors.

Since the scalar product of two vectors is equal to the product of their absolute values and the cosine of the angle between them, the optimality condition is the maximum of the projection of the velocity vector V onto the direction f. This optimality condition is Pontryagin's maximum principle.

Thus, when the maximum principle is used, the variational problem of finding a function u that extremizes the functional I is replaced by the simpler problem of determining the control u that delivers the maximum of the auxiliary Hamilton function H. Hence the name of the method, the maximum principle.

The main difficulty in applying the maximum principle is that the initial values f(0) of the auxiliary function f are not known. Usually, arbitrary initial values f(0) are assigned, the object equations and the adjoint equations are solved together, and the resulting trajectory, as a rule, does not pass through the specified end point. Using the method of successive approximations, by assigning different initial values f(0), the optimal trajectory passing through the given end point is found.
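
A sketch of that successive-approximation (shooting) idea on the simplest scalar problem, with all data invented: minimize the integral of (x^2 + u^2) over 0 <= t <= T for x' = u, x(0) = x0, with the right end free. Stationarity of the Hamiltonian gives u = -p/2 and the adjoint equation p' = -2x with transversality p(T) = 0, and the unknown initial costate p(0) is found by bisection.

```python
# Shooting for: minimize J = integral_0^T (x^2 + u^2) dt, x' = u, x(0) = x0,
# right end free.  Stationarity of H = x^2 + u^2 + p*u gives u = -p/2;
# adjoint: p' = -2x; transversality: p(T) = 0.  Data below are invented.
x0, T, steps = 1.0, 1.0, 2000
dt = T / steps

def p_final(p0):
    """Euler-integrate state and adjoint forward; return p(T)."""
    x, p = x0, p0
    for _ in range(steps):
        x, p = x - (p / 2.0) * dt, p - 2.0 * x * dt
    return p

# p_final is increasing in p0, so bisect for the root of p(T) = 0.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p_final(mid) < 0.0:
        lo = mid
    else:
        hi = mid

print(f"p(0) ~= {0.5 * (lo + hi):.4f}")   # analytic value: 2*tanh(1) ~= 1.5232
```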

The maximum principle is a necessary and sufficient condition only for linear objects. For nonlinear objects it is only a necessary condition. In this case, it yields a narrowed set of admissible controls, among which the optimal control, if it exists at all, is then found, for example, by enumeration.

Mathematical programming. Strictly linear models, built on the assumptions of proportionality, linearity, and additivity, are far from adequate for many real-life situations. In reality, such dependencies as total costs or output as functions of the production plan are nonlinear.

Often the application of linear programming models in nonlinear conditions is successful. Therefore, it is necessary to determine in which cases the linearized version of the problem is an adequate representation of a nonlinear phenomenon.

The method of mathematical programming consists in finding the extremum of a function of many variables under known restrictions in the form of a system of equalities and inequalities. The advantages of the mathematical programming method include the following:
complex restrictions on state and control variables are taken into account quite simply;
the required amount of computer memory can be significantly smaller than with other research methods.

If information is available regarding the permissible range of values of the variables in the optimal solution, then, as a rule, it is possible to construct appropriate restrictions and obtain a fairly reliable linear approximation. In cases where there is a wide range of feasible solutions and there is no information about the nature of the optimal solution, it is impossible to construct a sufficiently good linear approximation. The importance of nonlinear programming and its use is constantly increasing.

Often, nonlinearities in models arise from empirical observations of relationships, such as disproportionate changes in costs, output, or quality indicators; the resulting relationships may also incorporate postulated physical phenomena, as well as mathematically derived or administratively established rules of behavior.

Many different circumstances lead to nonlinear formulation of constraints or objective functions. If the number of nonlinearities is small, or if the nonlinearities are not significant, the increase in computational effort may be negligible.

It is always necessary to analyze the dimension and complexity of the model and to evaluate the impact of linearization on the decision being made. A two-stage approach is often used: a nonlinear model of small dimension is built and the region containing its optimal solution is found, after which a more detailed, higher-dimensional linear programming model is used, the approximation of whose parameters is based on the solution obtained from the nonlinear model.

For problems described by nonlinear models there is no universal solution method such as the simplex method for linear programming problems. A nonlinear programming method may be very effective for solving problems of one type and completely unsuitable for solving problems of another.

Most nonlinear programming methods do not always ensure convergence in a finite number of iterations. Some methods provide a monotonic improvement in the value of the objective function when moving from one iteration to another.

The problem of optimal performance is always relevant. Reducing the time of the transient processes of tracking systems makes it possible to follow reference actions in a shorter time. Reducing the duration of transient processes in control systems for technical objects, robots, and technological processes leads to increased labor productivity.

In linear automatic control systems, increased speed can be achieved using corrective devices. For example, the influence of the time constant of an aperiodic link with transfer function k/(Tp + 1) on the transient process can be reduced by including in series a differentiating device with transfer function k1(T1p + 1)/(T2p + 1). Effective methods for increasing the performance of servo systems are methods for suppressing the initial values of the slowly decaying components of the system's transient process and for minimizing quadratic integral estimates using connections based on the reference action. However, the effect of improving the transient process in real systems depends on the degree to which the coordinates of the system are limited (on its nonlinearities). Derivatives of external influences, usually significant in magnitude and short in duration, are limited by the elements of the system and do not produce the desired forcing effect in the transient mode. The best results in solving the problem of increasing the performance of automatic systems in the presence of restrictions are obtained by control that is optimal in performance.
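
A numerical sketch of that series correction (all constants are arbitrary): the step response of the aperiodic link k/(Tp + 1) is compared with the response after cascading the device k1(T1p + 1)/(T2p + 1). With T1 = T the slow pole is cancelled and the dominant time constant is effectively replaced by the smaller T2; note that this linear model ignores the signal limiting discussed above.

```python
import numpy as np
from scipy import signal

# Aperiodic link k/(T*p + 1) corrected by a series differentiating device
# k1*(T1*p + 1)/(T2*p + 1).  All values are illustrative.
k, T = 1.0, 2.0
k1, T1, T2 = 1.0, 2.0, 0.2

plant_num, plant_den = [k], [T, 1.0]
corr_num, corr_den = [k1 * T1, k1], [T2, 1.0]

# Series connection = product of the transfer functions.
num = np.polymul(plant_num, corr_num)
den = np.polymul(plant_den, corr_den)

t = np.linspace(0.0, 10.0, 500)
_, y_plain = signal.step(signal.TransferFunction(plant_num, plant_den), T=t)
_, y_corr = signal.step(signal.TransferFunction(num, den), T=t)

# Crude 95% rise-time comparison (linear model; signal limiting ignored).
for name, y in (("plain", y_plain), ("corrected", y_corr)):
    t95 = t[np.argmax(y >= 0.95 * y[-1])]
    print(f"{name}: t95 ~= {t95:.2f} s")
```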

The problem of optimal performance was the first problem in the theory of optimal control. It played a major role in the discovery of one of the main methods of the theory, the maximum principle. This problem, being a special case of the optimal control problem, consists in determining an admissible control action under whose influence the controlled object (process) moves from the initial phase state to the final one in minimum time. The optimality criterion in this problem is time.

Necessary conditions of optimal control for various types of optimization problems are obtained on the basis of analytical, indirect optimization methods and form a set of functional relationships that the extremal solution must satisfy.

When deriving them, an assumption essential for subsequent application was made about the existence of optimal control (an optimal solution). In other words, if an optimal solution exists, then it necessarily satisfies the given (necessary) conditions. However, the same necessary conditions may also be satisfied by other solutions that are not optimal (just as the necessary condition for a minimum of a function of one variable is also satisfied, for example, by maximum points and inflection points of the function). Therefore, if the found solution satisfies the necessary optimality conditions, this does not yet mean that it is optimal.

Using only the necessary conditions makes it possible, in principle, to find all solutions that satisfy them, and then select among them those that are truly optimal. However, in practice, it is most often not possible to find all solutions that satisfy the necessary conditions due to the high complexity of such a process. Therefore, after any solution has been found that satisfies the necessary conditions, it is advisable to check whether it is truly optimal in the sense of the original formulation of the problem.

Analytical conditions whose fulfillment by the obtained solution guarantees its optimality are called sufficient conditions. The formulation of these conditions, and especially their practical (for example, computational) verification, often turns out to be a very labor-intensive task.

In the general case, the application of the necessary optimality conditions would be better justified if, for the problem under consideration, the existence, or the existence and uniqueness, of optimal control could be established. This question is mathematically very difficult.

The problem of the existence and uniqueness of optimal control consists of two questions.
1) The existence of an admissible control (i.e., a control belonging to a given class of functions) that satisfies the given constraints and transfers the system from a given initial state to a given final state. Sometimes the boundary conditions of a problem are chosen in such a way that the system, due to the limited nature of its energy (financial, informational) resources, is unable to satisfy them. In this case, the optimization problem has no solution.
2) The existence of optimal control in the class of admissible controls, and its uniqueness.

For nonlinear systems of general form, these questions have not yet been resolved with the completeness sufficient for applications. The problem is also complicated by the fact that the uniqueness of the optimal control does not imply the uniqueness of the control satisfying the necessary conditions. In addition, usually only one, the most important, necessary condition is verified (most often the maximum principle).

Checking further necessary conditions can be quite cumbersome. This shows the importance of any information about the uniqueness of controls that satisfy the necessary optimality conditions, as well as about the specific properties of such controls.

It is necessary to caution against drawing conclusions about the existence of optimal control based on the fact that a “physical” problem is being solved. In fact, when applying methods of optimal control theory, one has to deal with a mathematical model. A necessary condition for the adequacy of the description of a physical process by a mathematical model is precisely the existence of a solution for the mathematical model. Since during the formation of a mathematical model various kinds of simplifications are introduced, the impact of which on the existence of solutions is difficult to predict, the proof of existence is a separate mathematical problem.

Thus:
the existence of optimal control implies the existence of at least one control that satisfies the necessary optimality conditions;
the existence of a control that satisfies the necessary optimality conditions does not imply the existence of an optimal control;
from the existence of optimal control and the uniqueness of the control satisfying the necessary conditions, the uniqueness of the optimal control follows;
the existence and uniqueness of optimal control does not imply the uniqueness of the control satisfying the necessary optimality conditions.

It is rational to apply control optimization methods:
1) in complex technical and economic systems, where finding acceptable solutions from experience is difficult. Experience shows that optimization of small subsystems can lead to large losses in the quality criteria of the integrated system. It is better to solve the problem of optimizing the system as a whole approximately (even in a simplified formulation) than to solve it exactly for a separate subsystem;
2) in new tasks, for which there is no experience in forming satisfactory characteristics of the control process. In such cases, the formulation of the optimal problem often allows the qualitative nature of the control to be established;
3) at the earliest possible stage of design, when there is greater freedom of choice. Once a large number of design decisions have been fixed, the system loses flexibility and subsequent optimization may not provide a significant gain.

They are also useful when it is necessary to determine the direction of change of the controls and parameters that gives the greatest change in the quality criterion (determination of the quality gradient). It should be noted that for well-studied systems that have long been in operation, optimization methods may provide only a small gain, since the practical solutions found from experience usually approach the optimal ones.

In some practical problems, a certain "roughness" of the optimal controls and parameters is observed: large local changes in the controls and parameters correspond to only small changes in the quality criterion. This sometimes gives rise to the assertion that precise and rigorous optimization methods are not needed in practice.

In fact, “roughness” of the control is observed only in cases where the optimal control corresponds to a stationary point of the quality criterion. In that case a change in the control of order ε leads to a deviation of the quality criterion only of order ε².
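
In the usual argument (a sketch under the assumption that the criterion J is twice differentiable and u* is an interior stationary point), a Taylor expansion shows why a first-order change of the control gives only a second-order change of the criterion:

$$J(u^{*} + \varepsilon\,\delta u) = J(u^{*}) + \varepsilon\, J'(u^{*})\,\delta u + \tfrac{1}{2}\,\varepsilon^{2} J''(u^{*})\,\delta u^{2} + O(\varepsilon^{3}) = J(u^{*}) + O(\varepsilon^{2}),$$

since J′(u*) = 0 at a stationary point.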

For controls lying on the boundary of the admissible region, this roughness may be absent; the property must be investigated separately for each problem. Moreover, in some problems even small improvements in the quality criterion achieved through optimization can be significant. Complex control optimization problems can also place heavy demands on the computers used for their solution.

Definition and necessity of building optimal automatic control systems

Automatic control systems are usually designed on the basis of requirements for particular quality indicators. In many cases the necessary increase in dynamic accuracy and the improvement of the transient processes of automatic control systems are achieved with the help of corrective devices.

Particularly broad opportunities for improving the quality indicators are provided by introducing into the ACS open-loop compensation channels and differential couplings, synthesized from one or another condition of error invariance with respect to the reference or disturbing influences. However, the effect of corrective devices, open compensation channels and equivalent differential couplings on the quality indicators of the ACS depends on the level of signal limitation by the nonlinear elements of the system. The output signals of differentiating devices, usually short in duration but large in amplitude, are limited by the elements of the system and therefore do not improve the quality indicators of the system, in particular its speed of response. The best results in improving the quality indicators of an automatic control system in the presence of signal limitations are obtained by so-called optimal control.

The problem of synthesizing optimal systems was strictly formulated relatively recently, when the concept of an optimality criterion was defined. Depending on the control goal, various technical or economic indicators of the controlled process can be selected as the optimality criterion. In optimal systems, not just some improvement of one or another technical-economic quality indicator is ensured, but the attainment of its minimum or maximum possible value.

If the optimality criterion expresses technical and economic losses (system error, transient time, energy consumption, expenditures, cost, etc.), then the optimal control is the one that minimizes the optimality criterion. If it expresses profitability (efficiency, productivity, profit, missile range, etc.), then the optimal control should maximize the optimality criterion.

The problem of determining the optimal automatic control system, in particular the synthesis of the optimal system parameters when a reference influence and an interference that are stationary random signals arrive at its input, was considered in Chap. 7. Recall that in this case the root-mean-square error is taken as the optimality criterion. The conditions for accurate reproduction of the useful signal (the reference influence) and for suppression of the interference are contradictory, so the task arises of choosing those (optimal) system parameters at which the root-mean-square error takes the smallest value.
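
As an illustration of this trade-off, the sketch below (Python; the first-order servo model, the signal, the noise level and the gain grid are all hypothetical choices, not taken from Chap. 7) simulates a closed loop for a grid of gains and picks the gain minimizing the root-mean-square error: too small a gain tracks the reference poorly, too large a gain passes the interference.

```python
import numpy as np

# A minimal sketch (hypothetical model): a first-order servo
#   dx/dt = K * (r(t) + n(t) - x)
# tracks a slow reference r(t) corrupted by wide-band interference n(t).
# A larger gain K tracks r(t) more accurately but passes more
# interference, so the RMS error has an interior minimum over K.

rng = np.random.default_rng(0)
dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
r = np.sin(0.5 * t)                      # useful signal (reference influence)
n = 2.0 * rng.standard_normal(t.size)    # interference

def rms_error(K):
    x, acc = 0.0, 0.0
    for k in range(t.size):
        acc += (x - r[k]) ** 2
        x += dt * K * (r[k] + n[k] - x)  # Euler step of the closed loop
    return np.sqrt(acc / t.size)

gains = np.linspace(0.5, 20.0, 40)
errs = [rms_error(K) for K in gains]
print(f"best gain ~ {gains[int(np.argmin(errs))]:.1f}, RMS ~ {min(errs):.3f}")
```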

Synthesis of an optimal system by the mean-square optimality criterion is a particular problem. General methods for synthesizing optimal systems are based on the calculus of variations. However, the classical methods of the calculus of variations turn out to be unsuitable for many modern practical problems, which require constraints to be taken into account. The most convenient methods for synthesizing optimal automatic control systems are Bellman's dynamic programming method and Pontryagin's maximum principle.

Thus, along with the problem of improving various quality indicators of automatic control systems, the problem arises of constructing optimal systems in which the extreme value of one or another technical and economic quality indicator is achieved.

The development and implementation of optimal automatic control systems helps to increase the efficiency of use of production units, increase labor productivity, improve product quality, save energy, fuel, raw materials, etc.

Concepts of the phase state and phase trajectory of an object

In technology the task often arises of transferring a controlled object (process) from one state to another. For example, in target designation it is necessary to rotate the antenna of a radar station from an initial position with azimuth α0 to a given position with azimuth α1. To do this, a control voltage u is applied to the electric motor connected to the antenna through a gearbox. At each moment of time the state of the antenna is characterized by the current values of the rotation angle α and the angular velocity ω; these two quantities change depending on the control voltage u. Thus there are three interconnected parameters α, ω and u (Fig. 11.1).

The quantities α and ω characterizing the state of the antenna are called the phase coordinates, and u is called the control action. When target-designating a radar such as a gun-laying station, the task arises of rotating the antenna both in azimuth and in elevation; in this case we have four phase coordinates of the object and two control actions. For an aircraft in flight one can consider six phase coordinates (three spatial coordinates and three velocity components) and several control actions (the engine thrust and the quantities characterizing the positions of the elevator, the rudder and the ailerons).

Fig. 11.1. Diagram of an object with one control action and two phase coordinates.

Fig. 11.2. Diagram of an object with r control actions and n phase coordinates.

Fig. 11.3. Diagram of an object with a vector representation of the control action and of the phase state of the object.

In the general case, at each moment of time the state of an object is characterized by n phase coordinates x1, x2, …, xn, and r control actions u1, u2, …, ur can be applied to the object (Fig. 11.2).

The transfer of a controlled object (process) from one state to another should be understood not only as a mechanical displacement (of a radar antenna or an aircraft, for example) but also as a required change in various physical quantities: the temperature, pressure, humidity, or chemical composition of one or another raw material in the corresponding controlled technological process.

It is convenient to consider the control actions u1, u2, …, ur as the coordinates of a certain vector u, called the control action vector. The phase coordinates (state variables) x1, x2, …, xn of an object can likewise be considered as the coordinates of a certain vector or point x in an n-dimensional space. This point is called the phase state (state vector) of the object, and the n-dimensional space in which the phase states are represented as points is called the phase space (state space) of the object under consideration. Using vector notation, the controlled object can be depicted as shown in Fig. 11.3, where u is the control action vector and x is the point in phase space characterizing the phase state of the object. Under the influence of the control action the phase point moves, describing a certain line in phase space called the phase trajectory of the motion of the object under consideration.
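
The sketch below (Python) illustrates a phase trajectory numerically; the antenna drive is idealized here as a double integrator, which is a simplifying assumption for illustration, not the model of Fig. 11.1.

```python
# A minimal sketch: the antenna drive is idealized as a double integrator
# d(alpha)/dt = omega, d(omega)/dt = u. The pair (alpha, omega) is the
# phase state; its motion under a given control u(t) traces a phase
# trajectory in the (alpha, omega) plane.

dt = 1e-2
alpha, omega = 0.0, 0.0              # initial azimuth and angular velocity
trajectory = [(alpha, omega)]
for k in range(400):
    u = 1.0 if k < 200 else -1.0     # accelerate for 2 s, then decelerate
    alpha += dt * omega              # Euler integration of the phase motion
    omega += dt * u
    trajectory.append((alpha, omega))

print("final phase state:", trajectory[-1])   # roughly (4.0, 0.0)
```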

OPTIMAL AUTOMATIC CONTROL SYSTEMS

Statement of the control optimization problem

In general, an automatic system consists of a control object and a set of devices that provide the control of this object; as a rule, this set includes measuring devices, amplifying and converting devices, and actuators. If these devices are combined into a single link (the control device), the block diagram of the system reduces to a control device and a control object connected in a closed loop.

In an automatic system, information about the state of the control object is supplied to the input of the control device through a measuring device. Such systems are called feedback systems, or closed systems; the absence of this information in the control algorithm indicates that the system is open. The state of the control object at any moment of time is described by the variables x1, x2, …, xn, called the system coordinates or state variables. It is convenient to consider them as the coordinates of an n-dimensional state vector x.

The measuring device supplies information about the state of the object. If the values of all coordinates of the state vector x can be determined from the measurements, the system is said to be completely observable.

The control device generates the control action. There may be several such control actions u1, u2, …, ur; together they form an r-dimensional control vector u.

A reference input action is applied to the input of the control device; it carries information about what the state of the object should be. The control object may be subject to a disturbing influence, representing a load or interference. Measurement of the coordinates of the object is, as a rule, carried out with some errors, which are also random.

The task of the control device is to generate a control action such that the quality of functioning of the automatic system as a whole is the best in a certain sense.

We shall consider control objects that are controllable, that is, objects whose state vector can be changed as required by an appropriate change of the control vector. We shall also assume that the object is completely observable.

For example, the position of an aircraft is characterized by six state coordinates: the coordinates x, y, z of the center of mass and the three Euler angles that determine the orientation of the aircraft about its center of mass. The attitude of the aircraft can be changed by means of the elevator, the rudder, the ailerons and the engine thrust, so that the control vector can be written as

$$u = (\delta_e, \ \delta_r, \ \delta_a, \ P),$$

where δe is the elevator deflection angle, δr the rudder deflection angle, δa the aileron deflection angle and P the engine thrust. The state vector in this case is

$$x = (x, \ y, \ z, \ \vartheta, \ \psi, \ \gamma),$$

where ϑ, ψ, γ denote the Euler angles.

One can then pose the problem of selecting a control that transfers the aircraft from a given initial state to a given final state with the minimum fuel consumption or in the minimum time.

Additional complexity in solving technical problems arises because, as a rule, various restrictions are imposed on the control action and on the state coordinates of the control object.

There are restrictions on the deflection angles of the elevator, rudder and ailerons:

$$|\delta_e| \le \delta_{e\,\max}, \qquad |\delta_r| \le \delta_{r\,\max}, \qquad |\delta_a| \le \delta_{a\,\max}.$$

The thrust is also bounded: 0 ≤ P ≤ Pmax.

The state coordinates of the control object and their derivatives are also subject to restrictions that are associated with permissible overloads.

We shall consider control objects described by the differential equations

$$\frac{dx_i}{dt} = f_i(x_1, \ldots, x_n, \ u_1, \ldots, u_r), \qquad i = 1, 2, \ldots, n, \tag{1}$$

or, in vector form,

$$\frac{dx}{dt} = f(x, u),$$

where x is the n-dimensional vector of the object state, u is the r-dimensional vector of control actions, and f is the vector function on the right-hand side of equation (1).

A restriction is imposed on the control vector: we shall assume that its values belong to some closed region U of an r-dimensional space. This means that the control function u(t) belongs to the region U at every moment of time (u(t) ∈ U).

So, for example, if the coordinates of the control function satisfy the inequalities

$$|u_i| \le 1, \qquad i = 1, 2, \ldots, r,$$

then the region U is an r-dimensional cube.
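
Membership of the control in such a cube U is easy to enforce numerically by componentwise clipping; in the sketch below (Python) the bounds and the requested control are hypothetical values chosen only for illustration.

```python
import numpy as np

# A minimal sketch: if U is the r-dimensional cube |u_i| <= u_max_i,
# an arbitrary control vector is made admissible by componentwise clipping.

u_max = np.array([0.4, 0.3, 0.3])     # hypothetical bounds on u_1, u_2, u_3
u_raw = np.array([0.9, -0.1, -0.5])   # control requested by the control law
u_adm = np.clip(u_raw, -u_max, u_max) # projection onto the region U
print(u_adm)                          # -> [ 0.4 -0.1 -0.3]
```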

Let us call an admissible control any piecewise continuous function u(t) whose values at each moment of time belong to the region U and which may have discontinuities of the first kind. It turns out that even in some of the simplest optimal control problems the solution can be obtained only in the class of piecewise continuous controls. For the control, selected as a function of time and of the initial state of the system, to determine the motion of the control object uniquely, the system of equations (1) must satisfy the conditions of the existence and uniqueness theorem in the region containing the possible trajectories of the object and the possible control functions. If the region of variation of the variables is convex, then for the existence and uniqueness of a solution it is sufficient that the functions fi be continuous in all arguments and have continuous partial derivatives with respect to the variables x1, …, xn.
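
A classical example of a solution that exists only in the class of piecewise continuous controls is the time-optimal control of the double integrator dx1/dt = x2, dx2/dt = u with |u| ≤ 1, which is known to be bang-bang with at most one switch, occurring on the curve x1 = −x2·|x2|/2 (a standard textbook result). The Python sketch below is only an illustrative Euler simulation of this law.

```python
import numpy as np

# A sketch of the classical bang-bang law for the double integrator
# dx1/dt = x2, dx2/dt = u, |u| <= 1: the control is piecewise constant
# with at most one switch on the curve x1 = -x2*|x2|/2. Near the origin
# the simulated control chatters because of the discrete time step.

def u_time_optimal(x1, x2):
    s = x1 + x2 * abs(x2) / 2.0       # switching function
    if s > 0.0:
        return -1.0
    if s < 0.0:
        return 1.0
    return -np.sign(x2)               # on the switching curve

x1, x2, dt = 3.0, 0.0, 1e-3
for _ in range(10_000):
    u = u_time_optimal(x1, x2)
    x1 += dt * x2
    x2 += dt * u
print(f"state after 10 s: x1 = {x1:.3f}, x2 = {x2:.3f}")
```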

As a criterion characterizing the quality of system operation, a functional of the form

$$J = \int_{t_0}^{t_1} f_0\bigl(x(t),\, u(t)\bigr)\, dt \tag{2}$$

is selected. The function f0 is assumed to be continuous in all its arguments and to have continuous partial derivatives with respect to the variables x1, …, xn.
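
As a sketch of how the criterion (2) is evaluated along a concrete motion of the object (1), the fragment below (Python) integrates an assumed integrand f0(x, u) = x² + u² along Euler-simulated trajectories of an assumed scalar object dx/dt = −x + u (both choices are illustrative, not taken from the text) and compares two control laws.

```python
# A minimal sketch of evaluating the criterion (2) along a motion of the
# object (1). Both the dynamics and the integrand are assumed examples.

def f(x, u):
    return -x + u                      # assumed object dynamics, eq. (1)

def f0(x, u):
    return x**2 + u**2                 # assumed integrand of criterion (2)

def cost(control_law, x0=1.0, dt=1e-3, T=5.0):
    x, J = x0, 0.0
    for _ in range(int(T / dt)):
        u = control_law(x)
        J += dt * f0(x, u)             # rectangle rule for the integral (2)
        x += dt * f(x, u)              # Euler step of equation (1)
    return J

print(cost(lambda x: 0.0))             # no control:           J ~ 0.50
print(cost(lambda x: -0.5 * x))        # proportional control: J ~ 0.42
```

By the criterion (2) with this integrand, the proportional law is the better of the two.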

Optimization criteria

Depending on the form of the integrand f0(x, u) of the functional (2), various optimality criteria can be obtained for the automatic system being designed.
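
For orientation, some standard special cases of the integrand and the criteria they produce are listed below (common textbook correspondences; Q and R denote assumed weighting matrices introduced here only for illustration):

$$f_0 \equiv 1 \;\Rightarrow\; J = t_1 - t_0 \quad \text{(the time-optimal, or maximum speed of response, problem)};$$

$$f_0 = \sum_{i=1}^{r} |u_i| \;\Rightarrow\; \text{minimum fuel consumption};$$

$$f_0 = \sum_{i=1}^{r} u_i^2 \;\Rightarrow\; \text{minimum control energy};$$

$$f_0 = x^{\top} Q\, x + u^{\top} R\, u \;\Rightarrow\; \text{a quadratic accuracy criterion}.$$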
