
Pontryagin's maximum principle

Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It states that it is necessary for any optimal control, along with the optimal state trajectory, to solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian.[a] These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.[1][2]

The maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students,[3][4] and its initial application was to the maximization of the terminal speed of a rocket.[5] The result was derived using ideas from the classical calculus of variations.[6] After a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.[7]

The maximum principle is widely regarded as a milestone in optimal control theory. Its significance lies in the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem; rather than maximizing over a function space, the problem is converted to a pointwise optimization.[8] A similar logic leads to Bellman's principle of optimality, a related approach to optimal control problems which states that the optimal trajectory remains optimal at intermediate points in time.[9] The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not.[7] However, in contrast to the Hamilton–Jacobi–Bellman equation, which must hold over the entire state space to be valid, Pontryagin's maximum principle is potentially more computationally efficient: the conditions it specifies need only hold along a particular trajectory.

Notation

For a set $\mathcal{U}$ and functions $\Psi : \mathbb{R}^n \to \mathbb{R}$, $H : \mathbb{R}^n \times \mathcal{U} \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$, $L : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}$, and $f : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}^n$, we use the following notation:

$$\Psi_T(x(T)) = \left.\frac{\partial \Psi(x)}{\partial T}\right|_{x = x(T)}$$

$$\Psi_x(x(T)) = \begin{bmatrix} \left.\frac{\partial \Psi(x)}{\partial x_1}\right|_{x = x(T)} & \cdots & \left.\frac{\partial \Psi(x)}{\partial x_n}\right|_{x = x(T)} \end{bmatrix}$$

$$H_x(x^*, u^*, \lambda^*, t) = \begin{bmatrix} \left.\frac{\partial H}{\partial x_1}\right|_{x = x^*, u = u^*, \lambda = \lambda^*} & \cdots & \left.\frac{\partial H}{\partial x_n}\right|_{x = x^*, u = u^*, \lambda = \lambda^*} \end{bmatrix}$$

$$L_x(x^*, u^*) = \begin{bmatrix} \left.\frac{\partial L}{\partial x_1}\right|_{x = x^*, u = u^*} & \cdots & \left.\frac{\partial L}{\partial x_n}\right|_{x = x^*, u = u^*} \end{bmatrix}$$

$$f_x(x^*, u^*) = \begin{bmatrix} \left.\frac{\partial f_1}{\partial x_1}\right|_{x = x^*, u = u^*} & \cdots & \left.\frac{\partial f_1}{\partial x_n}\right|_{x = x^*, u = u^*} \\ \vdots & \ddots & \vdots \\ \left.\frac{\partial f_n}{\partial x_1}\right|_{x = x^*, u = u^*} & \cdots & \left.\frac{\partial f_n}{\partial x_n}\right|_{x = x^*, u = u^*} \end{bmatrix}$$
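As a brief illustration of the Jacobian notation (a hypothetical example of our own, not from the article), take $n = 2$ and the double-integrator dynamics $f(x, u) = (x_2, u)$; then, at any point $(x^*, u^*)$,

$$f_x(x^*, u^*) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$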

Formal statement of necessary conditions for the minimization problem

Here the necessary conditions are shown for minimization of a functional. Take $x$ to be the state of the dynamical system with input $u$, such that

$$\dot{x} = f(x, u), \quad x(0) = x_0, \quad u(t) \in \mathcal{U}, \quad t \in [0, T]$$

where $\mathcal{U}$ is the set of admissible controls and $T$ is the terminal (i.e., final) time of the system. The control $u \in \mathcal{U}$ must be chosen for all $t \in [0, T]$ to minimize the objective functional $J$, which is defined by the application and can be abstracted as

$$J = \Psi(x(T)) + \int_0^T L\big(x(t), u(t)\big)\, dt$$

The constraints on the system dynamics can be adjoined to the Lagrangian $L$ by introducing a time-varying Lagrange multiplier vector $\lambda$, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian $H$, defined for all $t \in [0, T]$ by

$$H\big(x(t), u(t), \lambda(t), t\big) = \lambda^{\mathrm{T}}(t)\, f\big(x(t), u(t)\big) + L\big(x(t), u(t)\big)$$

where $\lambda^{\mathrm{T}}$ is the transpose of $\lambda$.
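As a concrete illustration (a hypothetical example of our own, not part of the article's statement), take the scalar system $\dot{x} = u$ with unconstrained control, running cost $L(x, u) = \tfrac{1}{2} u^2$, and terminal cost $\Psi(x(T)) = \tfrac{1}{2} x(T)^2$. The Hamiltonian is then

$$H\big(x(t), u(t), \lambda(t), t\big) = \lambda(t)\, u(t) + \tfrac{1}{2}\, u(t)^2,$$

and minimizing it pointwise over $u$ gives $\partial H / \partial u = \lambda + u = 0$, i.e. $u^*(t) = -\lambda(t)$. This example is continued numerically after the necessary conditions below.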

Pontryagin's minimum principle states that the optimal state trajectory $x^*$, optimal control $u^*$, and corresponding Lagrange multiplier vector $\lambda^*$ must minimize the Hamiltonian $H$ so that

$$H\big(x^*(t), u^*(t), \lambda^*(t), t\big) \leq H\big(x^*(t), u, \lambda^*(t), t\big) \tag{1}$$

for all time $t \in [0, T]$ and for all permissible control inputs $u \in \mathcal{U}$. Additionally, the costate equation and its terminal conditions

$$\dot{\lambda}^{\mathrm{T}}(t) = -H_x\big(x^*(t), u^*(t), \lambda(t), t\big) = -\Big[\lambda^{\mathrm{T}}(t)\, f_x\big(x^*(t), u^*(t)\big) + L_x\big(x^*(t), u^*(t)\big)\Big] \tag{2}$$

$$\lambda^{\mathrm{T}}(T) = \Psi_x\big(x(T)\big) \tag{3}$$

must be satisfied. If the final state $x(T)$ is not fixed (i.e., its differential variation is not zero), it must also be that

$$\Psi_T\big(x(T)\big) + H(T) = 0 \tag{4}$$

These four conditions (1)–(4) are the necessary conditions for an optimal control. Note that (4) only applies when $x(T)$ is free. If it is fixed, then this condition is not necessary for an optimum.
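To make the two-point boundary value structure concrete, the following sketch continues the scalar example above ($\dot{x} = u$, $L = \tfrac{1}{2}u^2$, $\Psi = \tfrac{1}{2}x(T)^2$) using only conditions (1)–(3). The problem data, variable names, and the use of SciPy's generic solve_bvp solver are our own illustration, not part of the principle itself; this is a minimal sketch assuming an unconstrained scalar control.

    # Conditions (1)-(3) for the scalar example, assembled as a two-point BVP.
    # Condition (1): minimizing H = lambda*u + u^2/2 over u gives u* = -lambda.
    # Condition (2): dlambda/dt = -H_x = 0, since H does not depend on x here.
    # Condition (3): lambda(T) = Psi_x(x(T)) = x(T).
    import numpy as np
    from scipy.integrate import solve_bvp

    T = 1.0   # terminal time (hypothetical value for illustration)
    x0 = 1.0  # initial state x(0) (hypothetical value for illustration)

    def odes(t, y):
        x, lam = y                        # state x(t) and costate lambda(t)
        u = -lam                          # optimal control from condition (1)
        return np.vstack([u,              # dx/dt = f(x, u*) = u*
                          np.zeros_like(lam)])  # dlambda/dt = -H_x = 0

    def bc(ya, yb):
        return np.array([ya[0] - x0,      # initial condition: x(0) = x0
                         yb[1] - yb[0]])  # condition (3): lambda(T) = x(T)

    t = np.linspace(0.0, T, 50)
    sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

    # Analytic check: lambda is constant, x(t) = x0 - lambda*t, and condition (3)
    # gives lambda = x0 - lambda*T, so lambda = 0.5 and u* = -0.5 for T = x0 = 1.
    print(sol.y[1, 0], -sol.y[1, 0])      # prints approximately 0.5 and -0.5

The same pattern generalizes: substitute the pointwise minimizer of the Hamiltonian into the state and costate dynamics, then solve the resulting two-point boundary value problem.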

See also

  • Lagrange multipliers on Banach spaces – Lagrangian method in calculus of variations

Notes

  1. ^ Whether the extreme value is maximum or minimum depends on the sign convention used for defining the Hamiltonian. The historic convention leads to a maximum, hence the name maximum principle. In recent years it has more commonly been referred to simply as Pontryagin's principle, without the adjective maximum or minimum.

References

  1. ^ Mangasarian, O. L. (1966). "Sufficient Conditions for the Optimal Control of Nonlinear Systems". SIAM Journal on Control. 4 (1): 139–152. doi:10.1137/0304013.
  2. ^ Kamien, Morton I.; Schwartz, Nancy L. (1971). "Sufficient Conditions in Optimal Control Theory". Journal of Economic Theory. 3 (2): 207–214. doi:10.1016/0022-0531(71)90018-4.
  3. ^ Boltyanski, V.; Martini, H.; Soltan, V. (1998). "The Maximum Principle – How it came to be?". Geometric Methods and Optimization Problems. New York: Springer. pp. 204–227. ISBN 0-7923-5454-0.
  4. ^ Gamkrelidze, R. V. (1999). "Discovery of the Maximum Principle". Journal of Dynamical and Control Systems. 5 (4): 437–451. doi:10.1023/A:1021783020548. S2CID 122690986. Reprinted in Bolibruch, A. A.; et al., eds. (2006). Mathematical Events of the Twentieth Century. Berlin: Springer. pp. 85–99. ISBN 3-540-23235-4.
  5. ^ For first published works, see references in Fuller, A. T. (1963). "Bibliography of Pontryagin's Maximum Principle". J. Electronics & Control. 15 (5): 513–517. doi:10.1080/00207216308937602.
  6. ^ McShane, E. J. (1989). "The Calculus of Variations from the Beginning Through Optimal Control Theory". SIAM J. Control Optim. 27 (5): 916–939. doi:10.1137/0327049.
  7. ^ a b Yong, J.; Zhou, X. Y. (1999). "Maximum Principle and Stochastic Hamiltonian Systems". Stochastic Controls: Hamiltonian Systems and HJB Equations. New York: Springer. pp. 101–156. ISBN 0-387-98723-1.
  8. ^ Sastry, Shankar (March 29, 2009). "Lecture Notes 8. Optimal Control and Dynamic Games" (PDF).
  9. ^ Zhou, X. Y. (1990). "Maximum Principle, Dynamic Programming, and their Connection in Deterministic Control". Journal of Optimization Theory and Applications. 65 (2): 363–373. doi:10.1007/BF01102352. S2CID 122333807.

Further reading

  • Geering, H. P. (2007). Optimal Control with Engineering Applications. Springer. ISBN 978-3-540-69437-3.
  • Kirk, D. E. (1970). Optimal Control Theory: An Introduction. Prentice Hall. ISBN 0-486-43484-2.
  • Lee, E. B.; Markus, L. (1967). Foundations of Optimal Control Theory. New York: Wiley.
  • Seierstad, Atle; Sydsæter, Knut (1987). Optimal Control Theory with Economic Applications. Amsterdam: North-Holland. ISBN 0-444-87923-4.

External links

  • "Pontryagin maximum principle", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
