
Constrained optimization

In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which penalize certain variable values in the objective function if, and to the extent that, the conditions on the variables are not satisfied.

Relation to constraint-satisfaction problems

The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model.[1] COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part.

General form

A general constrained minimization problem may be written as follows:[2]

$$
\begin{array}{rcll}
\min & & f(\mathbf{x}) & \\
\text{subject to} & & g_i(\mathbf{x}) = c_i & \text{for } i = 1, \ldots, n \quad \text{(equality constraints)} \\
& & h_j(\mathbf{x}) \geq d_j & \text{for } j = 1, \ldots, m \quad \text{(inequality constraints)}
\end{array}
$$

where $g_i(\mathbf{x}) = c_i$ for $i = 1, \ldots, n$ and $h_j(\mathbf{x}) \geq d_j$ for $j = 1, \ldots, m$ are constraints that are required to be satisfied (these are called hard constraints), and $f(\mathbf{x})$ is the objective function that needs to be optimized subject to the constraints.
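
As a concrete illustration of this general form, the following is a minimal sketch (assuming SciPy is available; the objective, constraint values, and starting point are illustrative assumptions) that minimizes a two-variable function under one equality and one inequality constraint:

```python
# Minimal sketch of the general form: min f(x) s.t. g(x) = c, h(x) >= d,
# using scipy.optimize.minimize (SLSQP handles both constraint types).
import numpy as np
from scipy.optimize import minimize

def f(x):                      # objective to minimize
    return (x[0] - 1)**2 + (x[1] - 2)**2

# SciPy expects equality constraints as fun(x) == 0 and inequality
# constraints as fun(x) >= 0, so g(x) = c and h(x) >= d are rewritten
# as g(x) - c and h(x) - d.
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3},   # g(x) = x1 + x2 = 3
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},        # h(x) = x1 >= 0.5
]

result = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=constraints)
print(result.x)                # constrained minimizer
```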

In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.

Solution methods

Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.[3]
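
As a brief illustration of the penalty idea (the specific objective, constraint, and penalty weights below are assumptions made for the sketch), the constrained problem is replaced by a sequence of unconstrained minimizations in which violations of the constraint are penalized with an increasing weight:

```python
# Quadratic penalty method sketch: approximate min f(x) s.t. g(x) = 0 by
# minimizing f(x) + mu * g(x)^2 for increasing penalty weights mu.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2 + x[1]**2          # objective

def g(x):
    return x[0] + x[1] - 1            # equality constraint g(x) = 0

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    penalized = lambda x, mu=mu: f(x) + mu * g(x)**2
    x = minimize(penalized, x).x      # warm-start from the previous solution
print(x)                              # approaches (0.5, 0.5) as mu grows
```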

Equality constraints

Substitution method

For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution.[4] The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize $f(x, y) = x \cdot y$ subject to $x + y = 10$. The constraint implies $y = 10 - x$, which can be substituted into the objective function to create $p(x) = x(10 - x) = 10x - x^2$. The first-order necessary condition gives $\partial p / \partial x = 10 - 2x = 0$, which can be solved for $x = 5$ and, consequently, $y = 10 - 5 = 5$.
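
The same worked example can be checked symbolically; a minimal sketch using SymPy (the symbols mirror the example above):

```python
# Substitution method sketch: maximize x*y subject to x + y = 10
# by substituting y = 10 - x and applying the first-order condition.
import sympy as sp

x = sp.symbols("x")
p = x * (10 - x)                       # composite objective p(x) = 10x - x^2
critical = sp.solve(sp.diff(p, x), x)  # first-order condition p'(x) = 0
print(critical)                        # [5]  -> x = 5, hence y = 10 - 5 = 5
```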

Lagrange multiplier

If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
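
For the same illustrative example as in the substitution section, here is a minimal SymPy sketch of the Lagrange multiplier method: stationarity of the Lagrangian in x, y, and the multiplier λ recovers the constrained optimum.

```python
# Lagrange multiplier sketch for: maximize x*y subject to x + y = 10.
# Form L = x*y - lam*(x + y - 10) and solve the stationarity conditions.
import sympy as sp

x, y, lam = sp.symbols("x y lam")
L = x * y - lam * (x + y - 10)
equations = [sp.diff(L, v) for v in (x, y, lam)]  # dL/dx, dL/dy, dL/dlam
print(sp.solve(equations, [x, y, lam]))           # {x: 5, y: 5, lam: 5}
```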

Inequality constraints

With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.

Linear programming

If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time.
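
A minimal sketch with scipy.optimize.linprog (the coefficients are illustrative assumptions; linprog minimizes, so a maximization objective is negated):

```python
# Linear programming sketch: minimize c^T x subject to A_ub x <= b_ub, x >= 0.
from scipy.optimize import linprog

c = [-1, -2]                      # minimize -x1 - 2*x2 (i.e. maximize x1 + 2*x2)
A_ub = [[1, 1],                   # x1 +   x2 <= 4
        [1, 3]]                   # x1 + 3*x2 <= 6
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)             # optimal vertex (3, 1) and objective value -5
```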

Nonlinear programming

If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem.

Quadratic programming

If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard.
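
A small convex quadratic programming sketch, again via SciPy's general SLSQP solver rather than a dedicated QP solver; the matrices Q, c, A, b are illustrative assumptions:

```python
# Convex quadratic programming sketch:
# minimize 1/2 x^T Q x + c^T x  subject to  A x <= b.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])          # positive definite -> convex objective
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

objective = lambda x: 0.5 * x @ Q @ x + c @ x
constraints = [{"type": "ineq", "fun": lambda x: b - A @ x}]  # A x <= b

res = minimize(objective, np.zeros(2), method="SLSQP", constraints=constraints)
print(res.x)                        # approximately (0.25, 1.75)
```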

KKT conditions

Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity.
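
A small symbolic sketch of these conditions for a one-variable problem (the objective and constraint are illustrative assumptions); the two complementary-slackness cases, constraint inactive and constraint active, are enumerated by hand:

```python
# KKT conditions sketch for: minimize (x - 2)^2 subject to x - 1 <= 0.
# Stationarity: f'(x) + mu * g'(x) = 0; complementary slackness: mu * g(x) = 0;
# with mu >= 0 and g(x) <= 0.
import sympy as sp

x, mu = sp.symbols("x mu", real=True)
f = (x - 2)**2
g = x - 1

stationarity = sp.diff(f, x) + mu * sp.diff(g, x)

# Case 1: constraint inactive (mu = 0)
x_inactive = sp.solve(stationarity.subs(mu, 0), x)[0]
print(x_inactive, g.subs(x, x_inactive) <= 0)   # x = 2 violates g(x) <= 0

# Case 2: constraint active (g(x) = 0, i.e. x = 1)
mu_active = sp.solve(stationarity.subs(x, 1), mu)[0]
print(mu_active)                                # mu = 2 >= 0, so x = 1 is the KKT point
```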

Branch and bound

Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution.

Assuming that the cost (here, the total value of the soft constraints) is to be maximized, the efficiency of these algorithms depends on how the cost obtainable from extending a partial solution is estimated. Indeed, whenever the algorithm can backtrack from a partial solution, part of the search is skipped. The lower this estimated cost, the better the algorithm, as a lower estimate is more likely to fall at or below the best cost found so far, allowing the algorithm to backtrack.

On the other hand, this estimated cost cannot be lower than the cost that can actually be obtained by extending the partial solution, as otherwise the algorithm could backtrack while a solution better than the best found so far still exists. As a result, the algorithm requires an upper bound on the cost obtainable from extending a partial solution, and this upper bound should be as small as possible.
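
As an illustration of this scheme, here is a minimal branch-and-bound sketch for a toy problem over binary variables, in which each soft constraint assigns a value to its variables and the total value is maximized. The problem data, variable names, and the deliberately loose bound at the end are illustrative assumptions; a tighter bounding function is sketched in the next section.

```python
# Minimal branch-and-bound sketch (maximization of soft-constraint values).
# Each constraint is a dict {"vars": [...], "fun": assignment -> value}, and
# upper_bound(assignment, variables, constraints) must never underestimate
# the best total value reachable by completing the partial assignment.
def branch_and_bound(variables, constraints, upper_bound):
    best_value, best_assignment = float("-inf"), None

    def total_value(assignment):
        return sum(c["fun"](assignment) for c in constraints)

    def search(assignment):
        nonlocal best_value, best_assignment
        if len(assignment) == len(variables):            # complete assignment
            v = total_value(assignment)
            if v > best_value:
                best_value, best_assignment = v, dict(assignment)
            return
        if upper_bound(assignment, variables, constraints) <= best_value:
            return                                       # prune: cannot beat the best found
        var = variables[len(assignment)]
        for val in (0, 1):                               # branch on the next variable
            assignment[var] = val
            search(assignment)
            del assignment[var]

    search({})
    return best_value, best_assignment

# Illustrative problem: two binary variables and two soft constraints.
variables = ["x", "y"]
constraints = [
    {"vars": ["x"],      "fun": lambda a: 3 if a["x"] == 1 else 0},
    {"vars": ["x", "y"], "fun": lambda a: 2 if a["x"] != a["y"] else 1},
]
loose_bound = lambda assignment, variables, constraints: 5   # sum of per-constraint maxima
print(branch_and_bound(variables, constraints, loose_bound)) # (5, {'x': 1, 'y': 0})
```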

A variation of this approach called Hansen's method uses interval methods.[5] It inherently implements rectangular constraints.

First-choice bounding functions

One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value over any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. It is not necessarily exact, because the maximal values of different soft constraints may derive from different evaluations: one soft constraint may be maximal for $x = a$ while another constraint is maximal for $x = b$.
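
A sketch of such a bounding function, in the same representation as the branch-and-bound sketch above (binary domains are an assumption made for brevity): fully assigned soft constraints contribute their actual value, and every other soft constraint contributes its own maximum over all completions of its unassigned variables.

```python
# First-choice bounding function sketch: an upper bound obtained by taking
# each soft constraint's maximum over all completions of its unassigned
# variables, independently of the other constraints.
from itertools import product

def first_choice_bound(assignment, variables, constraints, domain=(0, 1)):
    total = 0
    for c in constraints:
        unassigned = [v for v in c["vars"] if v not in assignment]
        if not unassigned:
            total += c["fun"](assignment)          # value is already determined
        else:
            # Maximize this constraint alone over its unassigned variables.
            best = float("-inf")
            for values in product(domain, repeat=len(unassigned)):
                extended = dict(assignment, **dict(zip(unassigned, values)))
                best = max(best, c["fun"](extended))
            total += best
    return total
```

Because it never underestimates the value of the best completion, this function can be passed as the upper_bound argument of the earlier sketch; it is generally not tight, since different constraints may attain their maxima under incompatible completions.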

Russian doll search

This method[6] runs a branch-and-bound algorithm on $n$ problems, where $n$ is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables $x_1, \ldots, x_i$ from the original problem, along with the constraints containing them. After the problem on variables $x_{i+1}, \ldots, x_n$ is solved, its optimal cost can be used as an upper bound while solving the other problems.

In particular, the cost estimate of a solution having $x_{i+1}, \ldots, x_n$ as unassigned variables is added to the cost that derives from the evaluated variables. In effect, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding subproblem, which is already known at this point.

There are similarities between the Russian doll search method and dynamic programming. Like dynamic programming, Russian doll search solves sub-problems in order to solve the whole problem. But whereas dynamic programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian doll search only uses them as bounds during its search.

Bucket elimination

The bucket elimination algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a single new soft constraint. The cost of this new constraint is computed by maximizing over the values of the removed variable. Formally, if $x$ is the variable to be removed, $C_1, \ldots, C_n$ are the soft constraints containing it, and $y_1, \ldots, y_m$ are their variables except $x$, the new soft constraint is defined by:

$$C(y_1 = a_1, \ldots, y_m = a_m) = \max_a \sum_i C_i(x = a, y_1 = a_1, \ldots, y_m = a_m)$$

Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which that variable is the highest in the ordering. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of its bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
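
The elimination step can be sketched as follows, reusing the soft-constraint representation from the earlier sketches; the bucket contents and the binary domain are illustrative assumptions.

```python
# Bucket elimination step sketch: eliminate variable x by replacing the soft
# constraints C_1..C_n containing it with one new constraint over the
# remaining variables y_1..y_m, defined as
#   C(y_1=a_1,...,y_m=a_m) = max_a sum_i C_i(x=a, y_1=a_1,...,y_m=a_m).
def eliminate(x, bucket, domain=(0, 1)):
    # Variables of the new constraint: every variable of the bucket except x.
    scope = sorted({v for c in bucket for v in c["vars"] if v != x})

    def new_fun(assignment):
        best = float("-inf")
        for a in domain:                              # maximize over the value of x
            extended = dict(assignment, **{x: a})
            best = max(best, sum(c["fun"](extended) for c in bucket))
        return best

    return {"vars": scope, "fun": new_fun}

# Illustrative use: eliminate x from two soft constraints on {x, y}.
bucket = [
    {"vars": ["x"],      "fun": lambda a: 3 if a["x"] == 1 else 0},
    {"vars": ["x", "y"], "fun": lambda a: 2 if a["x"] != a["y"] else 1},
]
C = eliminate("x", bucket)
print(C["vars"], C["fun"]({"y": 0}), C["fun"]({"y": 1}))  # ['y'] 5 4
```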

See also

Constrained least squares
Distributed constraint optimization
Constraint satisfaction problem (CSP)
Constraint programming
Integer programming
Penalty method
Superiorization

References

  1. ^ Rossi, Francesca; van Beek, Peter; Walsh, Toby (2006-01-01), Rossi, Francesca; van Beek, Peter; Walsh, Toby (eds.), "Chapter 1 – Introduction", Foundations of Artificial Intelligence, Handbook of Constraint Programming, vol. 2, Elsevier, pp. 3–12, doi:10.1016/s1574-6526(06)80005-2, retrieved 2019-10-04
  2. ^ Martins, J. R. R. A.; Ning, A. (2021). Engineering Design Optimization. Cambridge University Press. ISBN 978-1108833417.
  3. ^ Wenyu Sun; Ya-Xiang Yuan (2010). Optimization Theory and Methods: Nonlinear Programming. Springer. ISBN 978-1441937650. p. 541.
  4. ^ Prosser, Mike (1993). "Constrained Optimization by Substitution". Basic Mathematics for Economists. New York: Routledge. pp. 338–346. ISBN 0-415-08424-5.
  5. ^ Leader, Jeffery J. (2004). Numerical Analysis and Scientific Computation. Addison Wesley. ISBN 0-201-73499-0.
  6. ^ Verfaillie, Gérard; Lemaître, Michel; Schiex, Thomas (1996). "Russian Doll Search for Solving Constraint Optimization Problems". AAAI/IAAI, Vol. 1.

Further reading

Bertsekas, Dimitri P. (1982). Constrained Optimization and Lagrange Multiplier Methods. New York: Academic Press. ISBN 0-12-093480-9.
Dechter, Rina (2003). Constraint Processing. Morgan Kaufmann. ISBN 1-55860-890-7.
