
Control theory

Control theory is a field of mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability, often with the aim of achieving a degree of optimality.

To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.

Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.

Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell.[1] Control theory was further advanced by Edward Routh in 1874, Charles Sturm and in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky.[2] Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research.[3]

History

Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors.[4] A centrifugal governor was already used to regulate the velocity of windmills.[5] Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems.[6] Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.[7][8]

A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.

By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft.[9][10] Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.

Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.

The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.

Open-loop and closed-loop (feedback) control

A block diagram of a negative feedback control system using a feedback loop to control the process variable by comparing it with a desired value, and applying the difference as an error signal to generate a control output to reduce or eliminate the error.

Example of a single industrial control loop, showing continuously modulated control of process flow.

Fundamentally, there are two types of control loops: open loop control and closed loop (feedback) control.

In open loop control, the control action from the controller is independent of the "process output" (or "controlled process variable" - PV). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the timed switching on/off of the boiler; the process variable is the building temperature, but the two are not linked.

In closed loop control, the control action from the controller is dependent on feedback from the process in the form of the value of the process variable (PV). In the case of the boiler analogy, a closed loop would include a thermostat to compare the building temperature (PV) with the temperature set on the thermostat (the set point - SP). This generates a controller output to maintain the building at the desired temperature by switching the boiler on and off. A closed loop controller, therefore, has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the "Reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers.[11]
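To make the open-loop/closed-loop contrast concrete, here is a minimal Python sketch of the boiler example. The first-order building model, the 5 °C outside temperature, the 20 °C set point, and all gains are illustrative assumptions rather than values from the text.

```python
def simulate(hours, control, t_outside=5.0, tau=4.0, gain=15.0, dt=0.1):
    """First-order building model (assumed): dT/dt = (t_outside - T + gain*u) / tau."""
    temp = t_outside
    t = 0.0
    while t < hours:
        u = control(t, temp)  # 1.0 = boiler on, 0.0 = boiler off
        temp += dt * (t_outside - temp + gain * u) / tau
        t += dt
    return temp

open_loop = lambda t, temp: 1.0 if t % 24 < 8 else 0.0     # timer only, ignores PV
closed_loop = lambda t, temp: 1.0 if temp < 20.0 else 0.0  # thermostat compares PV to SP

# the timer lets the temperature drift with the weather; the thermostat holds ~20 °C
print(simulate(48, open_loop), simulate(48, closed_loop))
```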

The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[12]

Likewise: "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control."[13]

Other examples

An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver. The controller is the cruise control, the plant is the car, and the system is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers.

A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an open-loop controller because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road.

In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the feedback loop, the controller affects the system output, which in turn is measured and fed back to the controller.
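The cruise-control discussion above can be sketched in a few lines of code. The following Python fragment compares a locked throttle (open loop) with a proportional feedback law on a road that turns uphill partway through; the vehicle model and every coefficient are illustrative assumptions.

```python
def drive(throttle_law, steps=2000, dt=0.1, v_ref=25.0):
    v = v_ref                                    # start at the reference speed (m/s)
    for k in range(steps):
        slope = 0.05 if k >= 1000 else 0.0       # road turns uphill halfway through
        u = throttle_law(v_ref, v)               # throttle command in [0, 1]
        # crude model: engine force - linear drag - gravity component along slope
        v += dt * (4.0 * u - 0.08 * v - 9.81 * slope)
    return v                                     # speed while on the hill

open_loop = lambda r, v: 0.5                                  # locked throttle
closed_loop = lambda r, v: min(1.0, max(0.0, 0.8 * (r - v)))  # proportional feedback

# the open-loop car slows markedly on the hill; feedback holds speed near 25 m/s
print(drive(open_loop), drive(closed_loop))
```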

Classical control theory

To overcome the limitations of the open-loop controller, control theory introduces feedback. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.

Closed-loop controllers have the following advantages over open-loop controllers:

  • disturbance rejection (such as hills in the cruise control example above)
  • guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
  • unstable processes can be stabilized
  • reduced sensitivity to parameter variations
  • improved reference tracking performance

In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.

A common closed-loop controller architecture is the PID controller.

Closed-loop transfer function

The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.

This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).

If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., the elements of their transfer functions C(s), P(s), and F(s) do not depend on time), the system above can be analysed using the Laplace transform on the variables. This gives the following relations:

Y(s) = P(s) U(s)
U(s) = C(s) E(s)
E(s) = R(s) − F(s) Y(s)

Solving for Y(s) in terms of R(s) gives

Y(s) = [P(s) C(s) / (1 + P(s) C(s) F(s))] R(s) = H(s) R(s)

The expression H(s) = P(s) C(s) / (1 + F(s) P(s) C(s)) is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If |P(s) C(s)| ≫ 1, i.e., it has a large norm for each value of s, and if |F(s)| ≈ 1, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
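As a quick computational check of this algebra, the closed-loop transfer function can be formed and simplified symbolically. The sketch below uses sympy; the particular plant, proportional controller, and unity sensor are illustrative assumptions.

```python
import sympy as sp

s = sp.symbols('s')
P = 1 / (s + 1)      # example first-order plant (assumed)
C = sp.Symbol('K')   # pure proportional controller
F = 1                # ideal unity-gain sensor

# H(s) = P C / (1 + F P C), as derived above
H = sp.simplify(P * C / (1 + F * P * C))
print(H)             # K/(K + s + 1): large K pushes H(s) toward 1
```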

PID feedback control

A block diagram of a PID controller in a feedback loop; r(t) is the desired process value or "set point", and y(t) is the measured process value.

A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism widely used in control systems.

A PID controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal.

The theoretical understanding and application of PID control date from the 1920s, and PID controllers are implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics, and later in industrial process computers. The PID controller is probably the most-used feedback control design.

If u(t) is the control signal sent to the system, y(t) is the measured output, r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form

u(t) = K_P e(t) + K_I ∫₀ᵗ e(τ) dτ + K_D de(t)/dt

The desired closed loop dynamics is obtained by adjusting the three parameters K_P, K_I and K_D, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
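In software, the control law above is usually implemented in discrete time, approximating the integral by a running sum and the derivative by a backward difference. The following Python sketch shows one minimal form; the gains are placeholders, not tuned values.

```python
class PID:
    """Minimal discrete-time PID: u = K_P e + K_I (running sum of e) + K_D (backward difference of e)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # approximates ∫ e dτ
        derivative = (error - self.prev_error) / self.dt   # approximates de/dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# illustrative gains, not tuned values
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(setpoint=1.0, measurement=0.0)
```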

Applying the Laplace transform results in the transformed PID controller equation

u(s) = K_P e(s) + K_I (1/s) e(s) + K_D s e(s)
u(s) = (K_P + K_I (1/s) + K_D s) e(s)

with the PID controller transfer function

C(s) = K_P + K_I (1/s) + K_D s

As an example of tuning a PID controller in the closed-loop system H(s), consider a first-order plant given by

P(s) = A / (1 + s T_P)

where A and T_P are some constants. The plant output is fed back through

F(s) = 1 / (1 + s T_F)

where T_F is also a constant. Now if we set K_P = K(1 + T_D/T_I), K_D = K T_D, and K_I = K/T_I, we can express the PID controller transfer function in series form as

C(s) = K (1 + 1/(s T_I)) (1 + s T_D)

Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting

K = 1/A, T_I = T_F, T_D = T_P

we obtain H(s) = 1. With this tuning, in this example, the system output follows the reference input exactly.
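This tuning is easy to verify symbolically. The sympy sketch below reproduces the algebra of this example and confirms that H(s) simplifies to 1.

```python
import sympy as sp

s, A, TP, TF = sp.symbols('s A T_P T_F', positive=True)
K, TI, TD = 1 / A, TF, TP            # the tuning from the example above

P = A / (1 + s * TP)                 # plant
F = 1 / (1 + s * TF)                 # sensor
C = K * (1 + 1 / (s * TI)) * (1 + s * TD)  # series-form PID

H = sp.simplify(P * C / (1 + F * P * C))
print(H)                             # prints 1
```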

However, in practice, a pure differentiator is neither physically realizable nor desirable[14] due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off are used instead.
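One common realizable form, stated here as general practice rather than something from this article, replaces the pure derivative K_D s with a band-limited derivative; T_f is an assumed small filter time constant:

```latex
% realizable PID with low-pass roll-off on the derivative branch (assumed form)
C(s) = K_P + \frac{K_I}{s} + \frac{K_D\, s}{1 + s T_f}, \qquad 0 < T_f \ll T_D
```

The added pole at s = −1/T_f rolls off the derivative gain at high frequencies, limiting the amplification of measurement noise.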

Linear and nonlinear control theory

The field of control theory can be divided into two branches:

  • Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time-invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z-transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest.
  • Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used.[15]

Analysis techniques - frequency domain and time domain

Mathematical techniques for analyzing and designing control systems fall into two different categories:

  • Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback, are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z-transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain, which are much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above.
  • Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, the time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine.

In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation,[citation needed] a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With multiple inputs and outputs, we would otherwise have to write down a Laplace transform for every input–output pair to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.[16][17]
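A minimal numerical sketch of this representation, ẋ = Ax + Bu, y = Cx + Du, is shown below with forward-Euler integration in Python; the matrices describe a unit-mass mass-spring-damper and are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])   # states: position and velocity (assumed spring/damper values)
B = np.array([[0.0], [1.0]])   # force input acts on the velocity state
C = np.array([[1.0, 0.0]])     # we measure position only
D = np.array([[0.0]])

x = np.zeros((2, 1))
dt = 0.01
for _ in range(1000):          # 10 s of simulated time
    u = np.array([[1.0]])      # unit step input
    x = x + dt * (A @ x + B @ u)   # forward Euler: x[k+1] = x[k] + dt*(Ax + Bu)
    y = C @ x + D @ u
print(y[0, 0])                 # oscillates toward the static gain 0.5
```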

System interfacing - SISO & MIMO

Control systems can be divided into different categories depending on the number of inputs and outputs.

  • Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker.
  • Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments each controlled by an actuator. The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems.

Topics in control theory

Stability

The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input. Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability.

For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.

Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside:

  • in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function;
  • inside the unit circle for discrete time, when the Z-transform is used.

The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the x axis is the real axis, and the discrete Z-transform is in circular coordinates where the ρ axis is the real axis.

When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.

If a system in question has an impulse response of

x[n] = 0.5ⁿ u[n]

then the Z-transform (see this example) is given by

X(z) = 1 / (1 − 0.5 z⁻¹)

which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.

However, if the impulse response was

x[n] = 1.5ⁿ u[n]

then the Z-transform is

X(z) = 1 / (1 − 1.5 z⁻¹)

which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one.

Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.
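As a small illustration of such pole analysis in code, the two discrete-time examples above can be checked numerically: the poles are the roots of the denominator polynomial, tested against the unit circle.

```python
import numpy as np

for a in (0.5, 1.5):
    # denominator 1 - a z^{-1}  <=>  z - a in positive powers of z
    poles = np.roots([1.0, -a])
    stable = np.all(np.abs(poles) < 1.0)
    print(f"a = {a}: poles = {poles}, BIBO stable: {stable}")
```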

Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.

Controllability and observability

Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.

From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.

Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
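In practice, controllability and observability of a linear state-space model are often checked with the Kalman rank tests, as in the Python sketch below; the example matrices are illustrative assumptions. The pair (A, B) is controllable iff [B, AB, ..., A^(n-1)B] has full rank n, and (A, C) is observable iff the analogous stack of C A^k has full rank.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # assumed example system
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Kalman controllability and observability matrices
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable:", np.linalg.matrix_rank(obsv) == n)
```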

Control specification

Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).

A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re(λ) < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re(λ) < 0.
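Such a pole specification is straightforward to check numerically once a closed-loop system matrix is available; in the sketch below both the matrix and the required decay rate are illustrative assumptions.

```python
import numpy as np

A_cl = np.array([[0.0, 1.0], [-4.0, -3.0]])  # some assumed closed-loop dynamics
lam_bar = 0.5                                 # required decay rate, λ̄ > 0

eigs = np.linalg.eigvals(A_cl)
# specification: every pole satisfies Re(λ) < -λ̄
print(eigs, np.all(eigs.real < -lam_bar))
```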

Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.

Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).

Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).

Model identification and robustness

A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.

System identification

The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a mass-spring-damper system we know that m ẍ(t) = −K x(t) − B ẋ(t). Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.
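Off-line identification is frequently posed as a least-squares fit. The sketch below fits a first-order discrete-time model y[k+1] = a·y[k] + b·u[k] to input/output records; the data are synthetic, generated from assumed "true" parameters purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)                  # recorded input sequence
y = np.zeros(201)
for k in range(200):                         # simulate the "real" plant (assumed a=0.9, b=0.2)
    y[k + 1] = 0.9 * y[k] + 0.2 * u[k] + 0.01 * rng.standard_normal()

phi = np.column_stack([y[:-1], u])           # regressor matrix [y[k], u[k]]
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
print(theta)                                 # estimates close to [0.9, 0.2]
```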

Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself accordingly in order to ensure the correct performance.

Analysis

Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain margin, phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties.

Constraints

A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
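One simple anti-windup scheme is conditional integration: saturate the control signal to the actuator limits and skip the integral update whenever saturation is active. The sketch below shows one such variant; the structure and limits are illustrative assumptions, and several other schemes (e.g., back-calculation) exist.

```python
def pid_antiwindup_step(integral_state, error, kp, ki, dt, u_min=-1.0, u_max=1.0):
    """One PI step with conditional-integration anti-windup (illustrative sketch)."""
    integral = integral_state + error * dt
    u_raw = kp * error + ki * integral
    u = min(u_max, max(u_min, u_raw))   # enforce actuator limits
    if u != u_raw:                      # saturated: discard this integration step
        integral = integral_state
    return u, integral

# usage inside a control loop (illustrative):
# u, integral_state = pid_antiwindup_step(integral_state, error, kp=2.0, ki=1.0, dt=0.01)
```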

System classifications

Linear systems control

For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, not all system states are measured in general, and so observers must be included and incorporated in the pole placement design.
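For a concrete instance, a feedback matrix can be computed with scipy.signal.place_poles, as sketched below; the system matrices and the desired pole locations are illustrative assumptions.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # assumed open-loop system
B = np.array([[0.0], [1.0]])
desired = np.array([-2.0, -3.0])           # assumed desired pole locations

# state feedback u = -K x such that eig(A - B K) = desired
K = place_poles(A, B, desired).gain_matrix
print(np.linalg.eigvals(A - B @ K))        # approximately [-2, -3]
```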

Nonlinear systems control

Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.[18]

Decentralized systems control

When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.

Deterministic and stochastic systems control

A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.

Main control strategies

Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The possibility of fulfilling different specifications varies with the model considered and the control strategy chosen.

List of the main control techniques
  • Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field.
  • A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
  • Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic,[19] machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system.
  • Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to the desired trajectory that consume the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control. (A minimal LQR sketch, the deterministic core of LQG, appears after this list.)
  • Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design.[20] The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications.[21] Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors.
  • Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations.
  • Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy.
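As noted in the optimal control entry above, here is a minimal LQR sketch: the deterministic gain computation at the core of LQG, obtained by solving the continuous-time algebraic Riccati equation and forming K = R⁻¹BᵀP. The matrices and weights are illustrative assumptions; full LQG additionally requires a Kalman filter, and MPC considerably more machinery.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.1]])  # assumed plant dynamics
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])                   # state cost weights (assumed)
R = np.array([[0.5]])                     # control cost weight (assumed)

# solve the Riccati equation and form the optimal state-feedback gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print(K, np.linalg.eigvals(A - B @ K))    # closed-loop eigenvalues are stable
```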

People in systems and control

Many active and historical figures have made significant contributions to control theory.


References

  1. ^ Maxwell, J. C. (1868). "On Governors" (PDF). Proceedings of the Royal Society. 100. Archived (PDF) from the original on December 19, 2008.
  2. ^ Minorsky, Nicolas (1922). "Directional stability of automatically steered bodies". Journal of the American Society of Naval Engineers. 34 (2): 280–309. doi:10.1111/j.1559-3584.1922.tb04958.x.
  3. ^ GND. "Katalog der Deutschen Nationalbibliothek (Authority control)". portal.dnb.de. Retrieved April 26, 2020.
  4. ^ Maxwell, J.C. (1868). "On Governors". Proceedings of the Royal Society of London. 16: 270–283. doi:10.1098/rspl.1867.0055. JSTOR 112510.
  5. ^ "Control Theory: History, Mathematical Achievements and Perspectives | E. Fernandez-Cara1 and E. Zuazua". CiteSeerX 10.1.1.302.5633. {{cite journal}}: Cite journal requires |journal= (help)
  6. ^ Routh, E.J.; Fuller, A.T. (1975). Stability of motion. Taylor & Francis.
  7. ^ Routh, E.J. (1877). A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion: Particularly Steady Motion. Macmillan and co.
  8. ^ Hurwitz, A. (1964). "On The Conditions Under Which An Equation Has Only Roots With Negative Real Parts". Selected Papers on Mathematical Trends in Control Theory.
  9. ^ Flugge-Lotz, Irmgard; Titus, Harold A. (October 1962). (PDF). Stanford University Technical Report (134): 8–12. Archived from the original (PDF) on April 27, 2019.
  10. ^ Hallion, Richard P. (1980). Sicherman, Barbara; Green, Carol Hurd; Kantrov, Ilene; Walker, Harriette (eds.). Notable American Women: The Modern Period: A Biographical Dictionary. Cambridge, Mass.: Belknap Press of Harvard University Press. pp. 241–242. ISBN 9781849722704.
  11. ^ "Feedback and control systems" - JJ Di Steffano, AR Stubberud, IJ Williams. Schaums outline series, McGraw-Hill 1967
  12. ^ Mayr, Otto (1970). The Origins of Feedback Control. Clinton, MA USA: The Colonial Press, Inc.
  13. ^ Mayr, Otto (1969). The Origins of Feedback Control. Clinton, MA USA: The Colonial Press, Inc.
  14. ^ Ang, K.H.; Chong, G.C.Y.; Li, Y. (2005). "PID control system analysis, design, and technology" (PDF). IEEE Transactions on Control Systems Technology. 13 (4): 559–576. doi:10.1109/TCST.2005.847331. S2CID 921620. Archived (PDF) from the original on December 13, 2013.
  15. ^ "trim point".
  16. ^ Donald M Wiberg (1971). State space & linear systems. Schaum's outline series. McGraw Hill. ISBN 978-0-07-070096-3.
  17. ^ Terrell, William (1999). "Some fundamental control theory I: Controllability, observability, and duality —AND— Some fundamental control Theory II: Feedback linearization of single input nonlinear systems". American Mathematical Monthly. 106 (9): 705–719 and 812–828. doi:10.2307/2589614. JSTOR 2589614.
  18. ^ Gu Shi; et al. (2015). "Controllability of structural brain networks (Article Number 8414)". Nature Communications. 6 (6): 8414. arXiv:1406.5197. Bibcode:2015NatCo...6.8414G. doi:10.1038/ncomms9414. PMC 4600713. PMID 26423222. Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure
  19. ^ Liu, Jie; Wilson Wang; Farid Golnaraghi; Eric Kubica (2010). "A novel fuzzy framework for nonlinear system control". Fuzzy Sets and Systems. 161 (21): 2746–2759. doi:10.1016/j.fss.2010.04.009.
  20. ^ Melby, Paul; et al. (2002). "Robustness of Adaptation in Controlled Self-Adjusting Chaotic Systems". Fluctuation and Noise Letters. 02 (4): L285–L292. doi:10.1142/S0219477502000919.
  21. ^ N. A. Sinitsyn. S. Kundu, S. Backhaus (2013). "Safe Protocols for Generating Power Pulses with Heterogeneous Populations of Thermostatically Controlled Loads". Energy Conversion and Management. 67: 297–308. arXiv:1211.0248. doi:10.1016/j.enconman.2012.11.021. S2CID 32067734.
  22. ^ Richard Bellman (1964). "Control Theory". Scientific American. Vol. 211, no. 3. pp. 186–200. doi:10.1038/scientificamerican0964-186.

Further reading

  • Levine, William S., ed. (1996). The Control Handbook. New York: CRC Press. ISBN 978-0-8493-8570-4.
  • Karl J. Åström; Richard M. Murray (2008). Feedback Systems: An Introduction for Scientists and Engineers (PDF). Princeton University Press. ISBN 978-0-691-13576-2.
  • Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3.
  • Vannevar Bush (1929). Operational Circuit Analysis. John Wiley and Sons, Inc.
  • Robert F. Stengel (1994). Optimal Control and Estimation. Dover Publications. ISBN 978-0-486-68200-6.
  • Franklin; et al. (2002). Feedback Control of Dynamic Systems (4 ed.). New Jersey: Prentice Hall. ISBN 978-0-13-032393-4.
  • Joseph L. Hellerstein; Dawn M. Tilbury; Sujay Parekh (2004). Feedback Control of Computing Systems. John Wiley and Sons. ISBN 978-0-471-26637-2.
  • Diederich Hinrichsen and Anthony J. Pritchard (2005). Mathematical Systems Theory I – Modelling, State Space Analysis, Stability and Robustness. Springer. ISBN 978-3-540-44125-0.
  • Andrei, Neculai (2005). "Modern Control Theory – A historical Perspective" (PDF). Retrieved October 10, 2007.
  • Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition (PDF). Springer. ISBN 978-0-387-98489-6.
  • Goodwin, Graham (2001). Control System Design. Prentice Hall. ISBN 978-0-13-958653-8.
  • Christophe Basso (2012). Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide. Artech House. ISBN 978-1608075577.
  • Boris J. Lurie; Paul J. Enright (2019). Classical Feedback Control with Nonlinear Multi-loop Systems (3 ed.). CRC Press. ISBN 978-1-1385-4114-6.
For Chemical Engineering
  • Luyben, William (1989). Process Modeling, Simulation, and Control for Chemical Engineers. McGraw Hill. ISBN 978-0-07-039159-8.

External links

  • Control Tutorials for Matlab, a set of worked-through control examples solved by several different methods.
  • Control Tuning and Best Practices
  • Advanced control structures, free on-line simulators explaining the control theory
  • The Dark Side of Loop Control Theory, a professional seminar taught at APEC in 2012 (Orlando, FL).

control, theory, this, article, about, control, theory, engineering, control, theory, linguistics, control, linguistics, control, theory, psychology, sociology, control, theory, sociology, perceptual, control, theory, field, mathematics, that, deals, with, con. This article is about control theory in engineering For control theory in linguistics see control linguistics For control theory in psychology and sociology see control theory sociology and Perceptual control theory Control theory is a field of mathematics that deals with the control of dynamical systems in engineered processes and machines The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state while minimizing any delay overshoot or steady state error and ensuring a level of control stability often with the aim to achieve a degree of optimality To do this a controller with the requisite corrective behavior is required This controller monitors the controlled process variable PV and compares it with the reference or set point SP The difference between actual and desired value of the process variable called the error signal or SP PV error is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point Other aspects which are also studied are controllability and observability Control theory is used in control system engineering to design automation that have revolutionized manufacturing aircraft communications and other industries and created new fields such as robotics Extensive use is usually made of a diagrammatic style known as the block diagram In it the transfer function also known as the system function or network function is a mathematical model of the relation between the input and output based on the differential equations describing the system Control theory dates from the 19th century when the theoretical basis for the operation of governors was first described by James Clerk Maxwell 1 Control theory was further advanced by Edward Routh in 1874 Charles Sturm and in 1895 Adolf Hurwitz who all contributed to the establishment of control stability criteria and from 1922 onwards the development of PID control theory by Nicolas Minorsky 2 Although a major application of mathematical control theory is in control systems engineering which deals with the design of process control systems for industry other applications range far beyond this As the general theory of feedback systems control theory is useful wherever feedback occurs thus control theory also has applications in life sciences computer engineering sociology and operations research 3 Contents 1 History 2 Open loop and closed loop feedback control 2 1 Other examples 3 Classical control theory 4 Closed loop transfer function 5 PID feedback control 6 Linear and nonlinear control theory 7 Analysis techniques frequency domain and time domain 8 System interfacing SISO amp MIMO 9 Topics in control theory 9 1 Stability 9 2 Controllability and observability 9 3 Control specification 9 4 Model identification and robustness 10 System classifications 10 1 Linear systems control 10 2 Nonlinear systems control 10 3 Decentralized systems control 10 4 Deterministic and stochastic systems control 11 Main control strategies 12 People in systems and control 13 See also 14 References 15 Further reading 16 External linksHistory Edit Centrifugal governor in a Boulton amp Watt engine of 1788 Although control systems of various types date back to antiquity 
a more formal analysis of the field began with a dynamics analysis of the centrifugal governor conducted by the physicist James Clerk Maxwell in 1868 entitled On Governors 4 A centrifugal governor was already used to regulate the velocity of windmills 5 Maxwell described and analyzed the phenomenon of self oscillation in which lags in the system may lead to overcompensation and unstable behavior This generated a flurry of interest in the topic during which Maxwell s classmate Edward John Routh abstracted Maxwell s results for the general class of linear systems 6 Independently Adolf Hurwitz analyzed system stability using differential equations in 1877 resulting in what is now known as the Routh Hurwitz theorem 7 8 A notable application of dynamic control was in the area of crewed flight The Wright brothers made their first successful test flights on December 17 1903 and were distinguished by their ability to control their flights for substantial periods more so than the ability to produce lift from an airfoil which was known Continuous reliable control of the airplane was necessary for flights lasting longer than a few seconds By World War II control theory was becoming an important area of research Irmgard Flugge Lotz developed the theory of discontinuous automatic control systems and applied the bang bang principle to the development of automatic flight control equipment for aircraft 9 10 Other areas of application for discontinuous controls included fire control systems guidance systems and electronics Sometimes mechanical methods are used to improve the stability of systems For example ship stabilizers are fins mounted beneath the waterline and emerging laterally In contemporary vessels they may be gyroscopically controlled active fins which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship The Space Race also depended on accurate spacecraft control and control theory has also seen an increasing use in fields such as economics and artificial intelligence Here one might say that the goal is to find an internal model that obeys the good regulator theorem So for example in economics the more accurately a stock or commodities trading model represents the actions of the market the more easily it can control that market and extract useful work profits from it In AI an example might be a chatbot modelling the discourse state of humans the more accurately it can model the human state e g on a telephone voice support hotline the better it can manipulate the human e g into performing the corrective actions to resolve the problem that caused the phone call to the help line These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion and broaden it into a vast generalization of a regulator interacting with a plant Open loop and closed loop feedback control Edit A block diagram of a negative feedback control system using a feedback loop to control the process variable by comparing it with a desired value and applying the difference as an error signal to generate a control output to reduce or eliminate the error Example of a single industrial control loop showing continuously modulated control of process flow Fundamentally there are two types of control loops open loop control and closed loop feedback control In open loop control the control action from the controller is independent of the process output or controlled process variable PV A 
good example of this is a central heating boiler controlled only by a timer so that heat is applied for a constant time regardless of the temperature of the building The control action is the timed switching on off of the boiler the process variable is the building temperature but neither is linked In closed loop control the control action from the controller is dependent on feedback from the process in the form of the value of the process variable PV In the case of the boiler analogy a closed loop would include a thermostat to compare the building temperature PV with the temperature set on the thermostat the set point SP This generates a controller output to maintain the building at the desired temperature by switching the boiler on and off A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the Reference input or set point For this reason closed loop controllers are also called feedback controllers 11 The definition of a closed loop control system according to the British Standard Institution is a control system possessing monitoring feedback the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero 12 Likewise A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control 13 Other examples Edit An example of a control system is a car s cruise control which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver The controller is the cruise control the plant is the car and the system is the car and the cruise control The system output is the car s speed and the control itself is the engine s throttle position which determines how much power the engine delivers A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control However if the cruise control is engaged on a stretch of non flat road then the car will travel slower going uphill and faster when going downhill This type of controller is called an open loop controller because there is no feedback no measurement of the system output the car s speed is used to alter the control the throttle position As a result the controller cannot compensate for changes acting on the car like a change in the slope of the road In a closed loop control system data from a sensor monitoring the car s speed the system output enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed The difference called the error determines the throttle position the control The result is to match the car s speed to the reference speed maintain the desired system output Now when the car goes uphill the difference between the input the sensed speed and the reference continuously determines the throttle position As the sensed speed drops below the reference the difference increases the throttle opens and engine power increases speeding up the vehicle In this way the controller dynamically counteracts changes to the car s speed The central idea of these control systems is the feedback loop the controller affects the system output which in turn is measured and fed back to the controller Classical control theory 
EditMain article Classical control theory To overcome the limitations of the open loop controller control theory introduces feedback A closed loop controller uses feedback to control states or outputs of a dynamical system Its name comes from the information path in the system process inputs e g voltage applied to an electric motor have an effect on the process outputs e g speed or torque of the motor which is measured with sensors and processed by the controller the result the control signal is fed back as input to the process closing the loop Closed loop controllers have the following advantages over open loop controllers disturbance rejection such as hills in the cruise control example above guaranteed performance even with model uncertainties when the model structure does not match perfectly the real process and the model parameters are not exact unstable processes can be stabilized reduced sensitivity to parameter variations improved reference tracking performanceIn some systems closed loop and open loop control are used simultaneously In such systems the open loop control is termed feedforward and serves to further improve reference tracking performance A common closed loop controller architecture is the PID controller Closed loop transfer function EditFurther information closed loop transfer function The output of the system y t is fed back through a sensor measurement F to a comparison with the reference value r t The controller C then takes the error e difference between the reference and the output to change the inputs u to the system under control P This is shown in the figure This kind of controller is a closed loop controller or feedback controller This is called a single input single output SISO control system MIMO i e Multi Input Multi Output systems with more than one input output are common In such cases variables are represented through vectors instead of simple scalar values For some distributed parameter systems the vectors may be infinite dimensional typically functions If we assume the controller C the plant P and the sensor F are linear and time invariant i e elements of their transfer function C s P s and F s do not depend on time the systems above can be analysed using the Laplace transform on the variables This gives the following relations Y s P s U s displaystyle Y s P s U s U s C s E s displaystyle U s C s E s E s R s F s Y s displaystyle E s R s F s Y s Solving for Y s in terms of R s gives Y s P s C s 1 P s C s F s R s H s R s displaystyle Y s left frac P s C s 1 P s C s F s right R s H s R s The expression H s P s C s 1 F s P s C s displaystyle H s frac P s C s 1 F s P s C s is referred to as the closed loop transfer function of the system The numerator is the forward open loop gain from r to y and the denominator is one plus the gain in going around the feedback loop the so called loop gain If P s C s 1 displaystyle P s C s gg 1 i e it has a large norm with each value of s and if F s 1 displaystyle F s approx 1 then Y s is approximately equal to R s and the output closely tracks the reference input PID feedback control EditMain article PID controller A block diagram of a PID controller in a feedback loop r t is the desired process value or set point and y t is the measured process value A proportional integral derivative controller PID controller is a control loop feedback mechanism control technique widely used in control systems A PID controller continuously calculates an error value e t as the difference between a desired setpoint and a measured process 
variable and applies a correction based on proportional integral and derivative terms PID is an initialism for Proportional Integral Derivative referring to the three terms operating on the error signal to produce a control signal The theoretical understanding and application dates from the 1920s and they are implemented in nearly all analogue control systems originally in mechanical controllers and then using discrete electronics and later in industrial process computers The PID controller is probably the most used feedback control design If u t is the control signal sent to the system y t is the measured output and r t is the desired output and e t r t y t is the tracking error a PID controller has the general form u t K P e t K I t e t d t K D d e t d t displaystyle u t K P e t K I int t e tau text d tau K D frac text d e t text d t The desired closed loop dynamics is obtained by adjusting the three parameters KP KI and KD often iteratively by tuning and without specific knowledge of a plant model Stability can often be ensured using only the proportional term The integral term permits the rejection of a step disturbance often a striking specification in process control The derivative term is used to provide damping or shaping of the response PID controllers are the most well established class of control systems however they cannot be used in several more complicated cases especially if MIMO systems are considered Applying Laplace transformation results in the transformed PID controller equation u s K P e s K I 1 s e s K D s e s displaystyle u s K P e s K I frac 1 s e s K D s e s u s K P K I 1 s K D s e s displaystyle u s left K P K I frac 1 s K D s right e s with the PID controller transfer function C s K P K I 1 s K D s displaystyle C s left K P K I frac 1 s K D s right As an example of tuning a PID controller in the closed loop system H s consider a 1st order plant given by P s A 1 s T P displaystyle P s frac A 1 sT P where A and TP are some constants The plant output is fed back through F s 1 1 s T F displaystyle F s frac 1 1 sT F where TF is also a constant Now if we set K P K 1 T D T I displaystyle K P K left 1 frac T D T I right KD KTD and K I K T I displaystyle K I frac K T I we can express the PID controller transfer function in series form as C s K 1 1 s T I 1 s T D displaystyle C s K left 1 frac 1 sT I right 1 sT D Plugging P s F s and C s into the closed loop transfer function H s we find that by setting K 1 A T I T F T D T P displaystyle K frac 1 A T I T F T D T P H s 1 With this tuning in this example the system output follows the reference input exactly However in practice a pure differentiator is neither physically realizable nor desirable 14 due to amplification of noise and resonant modes in the system Therefore a phase lead compensator type approach or a differentiator with low pass roll off are used instead Linear and nonlinear control theory EditThe field of control theory can be divided into two branches Linear control theory This applies to systems made of devices which obey the superposition principle which means roughly that the output is proportional to the input They are governed by linear differential equations A major subclass is systems which in addition have parameters which do not change with time called linear time invariant LTI systems These systems are amenable to powerful frequency domain mathematical techniques of great generality such as the Laplace transform Fourier transform Z transform Bode plot root locus and Nyquist stability criterion These 
lead to a description of the system using terms like bandwidth frequency response eigenvalues gain resonant frequencies zeros and poles which give solutions for system response and design techniques for most systems of interest Nonlinear control theory This covers a wider class of systems that do not obey the superposition principle and applies to more real world systems because all real control systems are nonlinear These systems are often governed by nonlinear differential equations The few mathematical techniques which have been developed to handle them are more difficult and much less general often applying only to narrow categories of systems These include limit cycle theory Poincare maps Lyapunov stability theorem and describing functions Nonlinear systems are often analyzed using numerical methods on computers for example by simulating their operation using a simulation language If only solutions near a stable point are of interest nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory and linear techniques can be used 15 Analysis techniques frequency domain and time domain EditMathematical techniques for analyzing and designing control systems fall into two different categories Frequency domain In this type the values of the state variables the mathematical variables representing the system s input output and feedback are represented as functions of frequency The input signal and the system s transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform Laplace transform or Z transform The advantage of this technique is that it results in a simplification of the mathematics the differential equations that represent the system are replaced by algebraic equations in the frequency domain which is much simpler to solve However frequency domain techniques can only be used with linear systems as mentioned above Time domain state space representation In this type the values of the state variables are represented as functions of time With this model the system being analyzed is represented by one or more differential equations Since frequency domain techniques are limited to linear systems time domain is widely used to analyze real world nonlinear systems Although these are more difficult to solve modern computer simulation techniques such as simulation languages have made their analysis routine In contrast to the frequency domain analysis of the classical control theory modern control theory utilizes the time domain state space representation citation needed a mathematical model of a physical system as a set of input output and state variables related by first order differential equations To abstract from the number of inputs outputs and states the variables are expressed as vectors and the differential and algebraic equations are written in matrix form the latter only being possible when the dynamical system is linear The state space representation also known as the time domain approach provides a convenient and compact way to model and analyze systems with multiple inputs and outputs With inputs and outputs we would otherwise have to write down Laplace transforms to encode all the information about a system Unlike the frequency domain approach the use of the state space representation is not limited to systems with linear components and zero initial conditions State space refers to the space whose axes are the state variables The state of the system can be represented as a 
System interfacing: SISO & MIMO

Control systems can be divided into different categories depending on the number of inputs and outputs.

Single-input single-output (SISO): This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker.

Multiple-input multiple-output (MIMO): These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments, each controlled by an actuator. The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion and contraction, stresses as it is rotated, and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems.

Topics in control theory

Stability

The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input. Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.

Mathematically, this means that for a causal linear system to be stable, all of the poles of its transfer function must have negative real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside in the open left half of the complex plane for continuous time (when the Laplace transform is used to obtain the transfer function), or inside the unit circle for discrete time (when the Z-transform is used). The difference between the two cases is simply due to the traditional method of plotting continuous-time versus discrete-time transfer functions: the continuous Laplace transform is in Cartesian coordinates where the $x$ axis is the real axis, while the discrete Z-transform is in circular coordinates where the $\rho$ axis is the real axis.

When the appropriate conditions above are satisfied, a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous-time case) or a modulus equal to one (in the discrete-time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and complex components are zero in the continuous-time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.

If a system in question has an impulse response of $x[n] = 0.5^n u[n]$, then the Z-transform is given by $X(z) = \frac{1}{1 - 0.5z^{-1}}$, which has a pole at $z = 0.5$ (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle. However, if the impulse response were $x[n] = 1.5^n u[n]$, then the Z-transform would be $X(z) = \frac{1}{1 - 1.5z^{-1}}$, which has a pole at $z = 1.5$ and is not BIBO stable since the pole has a modulus strictly greater than one.
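The pole condition is easy to check numerically. The sketch below (an editor's illustration, assuming NumPy is available) tests both impulse responses above by computing the moduli of the poles:

```python
import numpy as np

def is_bibo_stable(den_coeffs):
    """Discrete-time BIBO stability: all poles strictly inside the unit circle.

    den_coeffs are the denominator coefficients in z, highest power first.
    """
    poles = np.roots(den_coeffs)
    return bool(np.all(np.abs(poles) < 1.0))

# X(z) = 1/(1 - 0.5 z^-1) has denominator z - 0.5 after multiplying through by z
print(is_bibo_stable([1.0, -0.5]))  # True: pole at z = 0.5, inside unit circle
print(is_bibo_stable([1.0, -1.5]))  # False: pole at z = 1.5, outside
```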
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.

Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.

Controllability and observability

Main articles: Controllability and Observability

Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.

From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.

Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
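For linear systems, both properties reduce to rank tests (the Kalman criteria). The following sketch (an editor's example, assuming NumPy; the two-state system matrices are arbitrary illustrative choices) checks them:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1)B]; full rank = controllable."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

ctrb = controllability_matrix(A, B)
obsv = controllability_matrix(A.T, C.T).T  # duality: observability of (A, C)
print(np.linalg.matrix_rank(ctrb) == A.shape[0])  # True: system is controllable
print(np.linalg.matrix_rank(obsv) == A.shape[0])  # True: system is observable
```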
Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.

Other "classical" control theory specifications regard the time response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value), and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see below). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).

Model identification and robustness

A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible.

System identification

Further information: System identification

The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a mass-spring-damper system we know that $m\ddot{x}(t) = -Kx(t) - B\dot{x}(t)$.
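To make the physical model concrete, the sketch below (an editor's illustration, assuming NumPy and SciPy; the values of m, K and B are arbitrary nominal parameters) simulates that mass-spring-damper equation:

```python
from scipy.integrate import solve_ivp

m, K, B = 1.0, 4.0, 0.5   # hypothetical nominal parameters

def dynamics(t, state):
    x, v = state              # position and velocity
    # m x'' = -K x - B v, rewritten as two first-order equations
    return [v, (-K * x - B * v) / m]

# Release from x = 1 at rest and integrate for 10 seconds
sol = solve_ivp(dynamics, (0.0, 10.0), [1.0, 0.0])
print(sol.y[0, -1])           # displacement decays toward zero: damped, stable
```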
Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from the nominal ones.

Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running: in this way, if a drastic variation of the parameters ensues (for example, if the robot's arm releases a weight), the controller will adjust itself accordingly in order to ensure the correct performance.

Analysis

Analysis of the robustness of a SISO (single-input single-output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section); i.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties.

Constraints

A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later) and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
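As a sketch of the anti-windup idea (an editor's example in plain Python; the gains and limits are arbitrary, and conditional integration is only one of several anti-windup schemes):

```python
def pi_step(error, integral, kp=1.0, ki=0.5, dt=0.01, u_min=-1.0, u_max=1.0):
    """One update of a PI controller driving a saturated actuator.

    The integrator only accumulates while the actuator is unsaturated
    (conditional integration), so the commanded signal never winds up
    far beyond the [u_min, u_max] threshold the actuator can follow.
    """
    u_raw = kp * error + ki * integral
    u = min(max(u_raw, u_min), u_max)   # actuator saturation
    if u == u_raw:                      # not saturated: safe to keep integrating
        integral += error * dt
    return u, integral
```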
System classifications

Linear systems control

Main article: State space (controls)

For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are in general not measured, and so observers must be included and incorporated in the pole placement design.

Nonlinear systems control

Main article: Nonlinear control

Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.[18]

Decentralized systems control

Main article: Distributed control system

When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways; for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.

Deterministic and stochastic systems control

Main article: Stochastic control

A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.

Main control strategies

Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen. The main control techniques include the following.

Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field.

A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, that hierarchical control system is also a form of networked control system.

Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic,[19] machine learning, evolutionary computation and genetic algorithms, or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system.

Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to a desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability: model predictive control (MPC) and linear-quadratic-Gaussian control (LQG); a minimal worked example follows this list. The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes; however, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control.

Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design.[20] The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications.[21] Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors.

Stochastic control deals with control design under uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take these random deviations into account.

Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy.
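As the worked example promised above (an editor's sketch, assuming NumPy and SciPy; the double-integrator plant and the weights Q and R are arbitrary illustrative choices), here is a linear-quadratic regulator, the deterministic state-feedback core of the LQG design:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x'' = u
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state cost weight
R = np.array([[1.0]])                     # control-effort cost weight

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal gain: u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))       # closed-loop poles in left half-plane
```

The printed eigenvalues have negative real parts, so the closed loop is asymptotically stable, consistent with the stability guarantee mentioned above.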
People in systems and control

Main article: People in systems and control

Many active and historical figures made significant contributions to control theory, including:

Pierre-Simon Laplace invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform, which is named after him.
Irmgard Flügge-Lotz developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems.
Alexander Lyapunov's work in the 1890s marks the beginning of stability theory.
Harold S. Black invented the concept of negative feedback amplifiers in 1927. He managed to develop stable negative feedback amplifiers in the 1930s.
Harry Nyquist developed the Nyquist stability criterion for feedback systems in the 1930s.
Richard Bellman developed dynamic programming in the 1940s.[22]
Warren E. Dixon, control theorist and professor.
Andrey Kolmogorov co-developed the Wiener–Kolmogorov filter in 1941.
Norbert Wiener co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s.
John R. Ragazzini introduced digital control and the use of the Z-transform (invented by Laplace) in control theory in the 1950s.
Lev Pontryagin introduced the maximum principle and the bang-bang principle.
Pierre-Louis Lions developed viscosity solutions into stochastic control and optimal control methods.
Rudolf E. Kálmán pioneered the state-space approach to systems and control, introduced the notions of controllability and observability, and developed the Kalman filter for linear estimation.
Ali H. Nayfeh was one of the main contributors to nonlinear control theory and published many books on perturbation methods.
Jan C. Willems introduced the concept of dissipativity, as a generalization of the Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function, led to the study of the linear matrix inequality (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory.

See also

Examples of control systems: Automation; Deadbeat controller; Distributed parameter systems; Fractional-order control; H-infinity loop-shaping; Hierarchical control system; Model predictive control; Optimal control; Process control; Robust control; Servomechanism; State space (controls); Vector control.

Topics in control theory: Coefficient diagram method; Control reconfiguration; Feedback; H-infinity; Hankel singular value; Krener's theorem; Lead-lag compensator; Minor loop feedback; Multi-loop feedback; Positive systems; Radial basis function; Root locus; Signal-flow graphs; Stable polynomial; State space representation; Steady state; Transient response; Transient state; Underactuation; Youla–Kucera parametrization; Markov chain approximation method.

Other related topics: Adaptive system; Automation and remote control; Bond graph; Control engineering; Control–feedback–abort loop; Controller (control theory); Cybernetics; Intelligent control; Mathematical system theory; Negative feedback amplifier; People in systems and control; Perceptual control theory; Systems theory; Time scale calculus.

References

1. Maxwell, J. C. (1868). "On Governors" (PDF). Proceedings of the Royal Society. 100. Archived (PDF) from the original on December 19, 2008.
2. Minorsky, Nicolas (1922). "Directional stability of automatically steered bodies". Journal of the American Society of Naval Engineers. 34 (2): 280–309. doi:10.1111/j.1559-3584.1922.tb04958.x.
3. "Katalog der Deutschen Nationalbibliothek (Authority control)". GND, dnb.de. Retrieved April 26, 2020.
4. Maxwell, J. C. (1868). "On Governors". Proceedings of the Royal Society of London. 16: 270–283. doi:10.1098/rspl.1867.0055. JSTOR 112510.
5. Fernández-Cara, E.; Zuazua, E. "Control Theory: History, Mathematical Achievements and Perspectives". CiteSeerX 10.1.1.302.5633.
6. Routh, E. J.; Fuller, A. T. (1975). Stability of Motion. Taylor & Francis.
7. Routh, E. J. (1877). A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion. Macmillan and Co.
8. Hurwitz, A. (1964). "On The Conditions Under Which An Equation Has Only Roots With Negative Real Parts". Selected Papers on Mathematical Trends in Control Theory.
9. Flügge-Lotz, Irmgard; Titus, Harold A. (October 1962). "Optimum and Quasi-Optimum Control of Third and Fourth-Order Systems" (PDF). Stanford University Technical Report (134): 8–12. Archived from the original (PDF) on April 27, 2019.
10. Hallion, Richard P. (1980). Sicherman, Barbara; Green, Carol Hurd; Kantrov, Ilene; Walker, Harriette (eds.). Notable American Women: The Modern Period: A Biographical Dictionary. Cambridge, Mass.: Belknap Press of Harvard University Press. pp. 241–242. ISBN 9781849722704.
11. Di Steffano, J. J.; Stubberud, A. R.; Williams, I. J. (1967). Feedback and Control Systems. Schaum's Outline Series. McGraw-Hill.
12. Mayr, Otto (1970). The Origins of Feedback Control. Clinton, MA: The Colonial Press, Inc.
13. Mayr, Otto (1969). The Origins of Feedback Control. Clinton, MA: The Colonial Press, Inc.
14. Ang, K. H.; Chong, G. C. Y.; Li, Y. (2005). "PID control system analysis, design, and technology" (PDF). IEEE Transactions on Control Systems Technology. 13 (4): 559–576. doi:10.1109/TCST.2005.847331. S2CID 921620. Archived (PDF) from the original on December 13, 2013.
15. "Trim point".
16. Wiberg, Donald M. (1971). State Space & Linear Systems. Schaum's Outline Series. McGraw-Hill. ISBN 978-0-07-070096-3.
17. Terrell, William (1999). "Some fundamental control theory I: Controllability, observability, and duality" and "Some fundamental control theory II: Feedback linearization of single input nonlinear systems". American Mathematical Monthly. 106 (9): 705–719 and 812–828. doi:10.2307/2589614. JSTOR 2589614.
18. Gu, Shi; et al. (2015). "Controllability of structural brain networks". Nature Communications. 6: 8414. arXiv:1406.5197. Bibcode:2015NatCo...6.8414G. doi:10.1038/ncomms9414. PMC 4600713. PMID 26423222. "Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure."
19. Liu, Jie; Wang, Wilson; Golnaraghi, Farid; Kubica, Eric (2010). "A novel fuzzy framework for nonlinear system control". Fuzzy Sets and Systems. 161 (21): 2746–2759. doi:10.1016/j.fss.2010.04.009.
20. Melby, Paul; et al. (2002). "Robustness of Adaptation in Controlled Self-Adjusting Chaotic Systems". Fluctuation and Noise Letters. 2 (4): L285–L292. doi:10.1142/S0219477502000919.
21. Sinitsyn, N. A.; Kundu, S.; Backhaus, S. (2013). "Safe Protocols for Generating Power Pulses with Heterogeneous Populations of Thermostatically Controlled Loads". Energy Conversion and Management. 67: 297–308. arXiv:1211.0248. doi:10.1016/j.enconman.2012.11.021. S2CID 32067734.
22. Bellman, Richard (September 1964). "Control Theory". Scientific American. 211 (3): 186–200. doi:10.1038/scientificamerican0964-186.

Further reading

Levine, William S., ed. (1996). The Control Handbook. New York: CRC Press. ISBN 978-0-8493-8570-4.
Åström, Karl J.; Murray, Richard M. (2008). Feedback Systems: An Introduction for Scientists and Engineers (PDF). Princeton University Press. ISBN 978-0-691-13576-2.
Kilian, Christopher (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3.
Bush, Vannevar (1929). Operational Circuit Analysis. John Wiley and Sons, Inc.
Stengel, Robert F. (1994). Optimal Control and Estimation. Dover Publications. ISBN 978-0-486-68200-6.
Franklin; et al. (2002). Feedback Control of Dynamic Systems (4th ed.). New Jersey: Prentice Hall. ISBN 978-0-13-032393-4.
Hellerstein, Joseph L.; Tilbury, Dawn M.; Parekh, Sujay (2004). Feedback Control of Computing Systems. John Wiley and Sons. ISBN 978-0-471-26637-2.
Hinrichsen, Diederich; Pritchard, Anthony J. (2005). Mathematical Systems Theory I: Modelling, State Space Analysis, Stability and Robustness. Springer. ISBN 978-3-540-44125-0.
Andrei, Neculai (2005). "Modern Control Theory: A Historical Perspective" (PDF). Retrieved October 10, 2007.
Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.) (PDF). Springer. ISBN 978-0-387-98489-6.
Goodwin, Graham (2001). Control System Design. Prentice Hall. ISBN 978-0-13-958653-8.
Basso, Christophe (2012). Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide. Artech House. ISBN 978-1608075577.
Lurie, Boris J.; Enright, Paul J. (2019). Classical Feedback Control with Nonlinear Multi-loop Systems (3rd ed.). CRC Press. ISBN 978-1-1385-4114-6.

For chemical engineering:
Luyben, William (1989). Process Modeling, Simulation, and Control for Chemical Engineers. McGraw-Hill. ISBN 978-0-07-039159-8.

External links

Wikimedia Commons has media related to Control theory.
Wikibooks has a book on the topic of Control Systems.
Control Tutorials for Matlab – a set of worked-through control examples solved by several different methods.
Control Tuning and Best Practices.
Advanced control structures – free on-line simulators explaining the control theory.
The Dark Side of Loop Control Theory – a professional seminar taught at APEC in 2012 (Orlando, FL).

