Convex optimization




Convex optimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The convexity makes optimization easier than the general case since a local minimum must be a global minimum, and first-order conditions are sufficient conditions for optimality.[1]


Convex minimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,[2] data analysis and modeling, finance, statistics (optimal experimental design),[3] and structural optimization.[4] With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as linear programming. Many optimization problems can be reformulated as convex minimization problems. For example, the problem of maximizing a concave function f can be reformulated equivalently as the problem of minimizing the function −f, which is convex.




Contents






  • 1 Definition


    • 1.1 Convex optimization problem


    • 1.2 Standard form




  • 2 Theory


  • 3 Examples


  • 4 Lagrange multipliers


  • 5 Methods


  • 6 Convex minimization with good complexity: Self-concordant barriers


  • 7 Quasiconvex minimization


  • 8 Convex maximization


  • 9 Extensions


  • 10 See also


  • 11 Notes


  • 12 References


  • 13 External links





Definition


Given a real vector space $X$ together with a convex, real-valued function defined on a convex subset $\mathcal{X}$ of $X$,

$$f:\mathcal{X}\to\mathbb{R};\qquad \forall x_{1},x_{2}\in\mathcal{X},\ \forall t\in[0,1]:\quad f(tx_{1}+(1-t)x_{2})\le tf(x_{1})+(1-t)f(x_{2}),$$

the problem is to find any point $x^{\ast}$ in $\mathcal{X}$ for which the number $f(x)$ is smallest, i.e., a point $x^{\ast}$ such that

$$f(x^{\ast})\le f(x)\quad\text{for all }x\in\mathcal{X}.$$

The convexity of $f$ makes the powerful tools of convex analysis applicable. In finite-dimensional normed spaces, the Hahn–Banach theorem and the existence of subgradients lead to a particularly satisfying theory of necessary and sufficient conditions for optimality, a duality theory generalizing that for linear programming, and effective computational methods.
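As a quick numerical illustration (a minimal sketch, not part of the original article, assuming only NumPy and an arbitrarily chosen test function), the defining inequality can be spot-checked for a sample convex function such as f(x) = x² + |x|:

```python
import numpy as np

# Minimal sketch: numerically spot-check the defining inequality
#   f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2)
# for a sample convex function on the convex set R.
f = lambda x: x**2 + abs(x)   # convex: a sum of two convex functions

rng = np.random.default_rng(0)
for _ in range(10_000):
    x1, x2 = rng.uniform(-10, 10, size=2)
    t = rng.uniform(0, 1)
    lhs = f(t * x1 + (1 - t) * x2)
    rhs = t * f(x1) + (1 - t) * f(x2)
    assert lhs <= rhs + 1e-9, "convexity inequality violated"
print("defining inequality holds at all sampled points")
```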



Convex optimization problem


The general form of an optimization problem (also referred to as a mathematical programming problem or minimization problem) is to find some $x^{\ast}\in\mathcal{X}$ such that

$$f(x^{\ast})=\min\{f(x):x\in\mathcal{X}\},$$

for some feasible set $\mathcal{X}\subset\mathbb{R}^{n}$ and objective function $f:\mathbb{R}^{n}\to\mathbb{R}$. The optimization problem is called a convex optimization problem if $\mathcal{X}$ is a convex set and $f$ is a convex function defined on $\mathbb{R}^{n}$.[5][6]


Alternatively, an optimization problem of the form


$$\begin{aligned}&\operatorname{minimize}&&f(x)\\&\operatorname{subject\ to}&&g_{i}(x)\le 0,\quad i=1,\dots,m\end{aligned}$$

is called convex if the functions $f,g_{1},\ldots,g_{m}:\mathbb{R}^{n}\to\mathbb{R}$ are all convex functions.[7]



Standard form


Standard form is the usual and most intuitive form of describing a convex minimization problem. It consists of the following three parts:



  • A convex function $f:\mathbb{R}^{n}\to\mathbb{R}$ to be minimized over the variable $x$


  • Inequality constraints of the form $g_{i}(x)\le 0$, where the functions $g_{i}$ are convex


  • Equality constraints of the form $h_{i}(x)=0$, where the functions $h_{i}$ are affine. In practice, the terms "linear" and "affine" are often used interchangeably. Such constraints can be expressed in the form $h_{i}(x)=a_{i}^{T}x+b_{i}$, where $a_{i}$ is a column vector ($T$ denotes transpose) and $b_{i}$ is a real number.


A convex minimization problem is thus written as


$$\begin{aligned}&\underset{x}{\operatorname{minimize}}&&f(x)\\&\operatorname{subject\ to}&&g_{i}(x)\le 0,\quad i=1,\dots,m\\&&&h_{i}(x)=0,\quad i=1,\dots,p.\end{aligned}$$

Note that every equality constraint $h(x)=0$ can be equivalently replaced by a pair of inequality constraints $h(x)\le 0$ and $-h(x)\le 0$. Therefore, for theoretical purposes, equality constraints are redundant; however, it can be beneficial to treat them specially in practice.


Following from this fact, it is easy to understand why $h_{i}(x)=0$ has to be affine as opposed to merely being convex. If $h_{i}$ is convex, the constraint $h_{i}(x)\le 0$ describes a convex set, but $-h_{i}$ is concave, so the constraint $-h_{i}(x)\le 0$ need not describe a convex set. Therefore, the only way for both constraints, and hence the equality $h_{i}(x)=0$, to describe a convex set is for $h_{i}$ to be affine.
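As a concrete sketch of the standard form (illustrative only; it assumes the CVXPY modeling library and an arbitrarily chosen objective and constraints), the following small problem has a convex objective, one convex inequality constraint, and one affine equality constraint:

```python
import cvxpy as cp
import numpy as np

# Minimal sketch of a convex problem in standard form (assumes CVXPY).
x = cp.Variable(2)

objective = cp.Minimize(cp.sum_squares(x - np.array([3.0, 2.0])))  # convex f(x)
constraints = [
    cp.sum_squares(x) <= 4.0,   # g(x) <= 0 with g convex (stated as <= 4)
    x[0] + x[1] == 1.0,         # h(x) = 0 with h affine
]

problem = cp.Problem(objective, constraints)
problem.solve()
print(problem.status, x.value)
```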



Theory


The following statements are true about the convex minimization problem:



  • if a local minimum exists, then it is a global minimum.

  • the set of all (global) minima is convex.

  • for each strictly convex function, if the function has a minimum, then the minimum is unique.


These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.



Examples


The following problems are all convex minimization problems, or can be transformed into convex minimization problems via a change of variables (a small least-squares sketch follows the list):



  • Least squares

  • Linear programming

  • Convex quadratic minimization with linear constraints

  • Quadratic minimization with convex quadratic constraints

  • Conic optimization

  • Geometric programming

  • Second order cone programming

  • Semidefinite programming


  • Entropy maximization with appropriate constraints
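As promised above, here is a minimal least-squares sketch (not from the article; it assumes NumPy and randomly generated data), treating the first item on the list as an unconstrained convex minimization of ‖Ax − b‖²:

```python
import numpy as np

# Minimal sketch: least squares, minimize ||A x - b||^2 over x
# (unconstrained, convex).  Data are random and purely illustrative.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)

# Solve the convex problem (here via lstsq / the normal equations).
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# First-order optimality of a differentiable convex function:
# the gradient 2 A^T (A x - b) vanishes at the minimizer.
grad = 2 * A.T @ (A @ x_star - b)
print(np.allclose(grad, 0, atol=1e-8))
```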



Lagrange multipliers


Consider a convex minimization problem given in standard form by a cost function $f(x)$ and inequality constraints $g_{i}(x)\le 0$ for $1\le i\le m$. Then the domain $\mathcal{X}$ is:


$$\mathcal{X}=\left\{x\in X\ \vert\ g_{1}(x),\ldots,g_{m}(x)\le 0\right\}.$$

The Lagrangian function for the problem is


$$L(x,\lambda_{0},\lambda_{1},\ldots,\lambda_{m})=\lambda_{0}f(x)+\lambda_{1}g_{1}(x)+\cdots+\lambda_{m}g_{m}(x).$$

For each point $x$ in $\mathcal{X}$ that minimizes $f$ over $\mathcal{X}$, there exist real numbers $\lambda_{0},\lambda_{1},\ldots,\lambda_{m}$, called Lagrange multipliers, that satisfy these conditions simultaneously:




  1. $x$ minimizes $L(y,\lambda_{0},\lambda_{1},\ldots,\lambda_{m})$ over all $y\in X$,


  2. $\lambda_{0},\lambda_{1},\ldots,\lambda_{m}\ge 0$, with at least one $\lambda_{k}>0$,


  3. $\lambda_{1}g_{1}(x)=\cdots=\lambda_{m}g_{m}(x)=0$ (complementary slackness).


If there exists a "strictly feasible point", that is, a point $z$ satisfying


$$g_{1}(z),\ldots,g_{m}(z)<0,$$

then the statement above can be strengthened to require that $\lambda_{0}=1$.


Conversely, if some $x$ in $\mathcal{X}$ satisfies (1)–(3) for scalars $\lambda_{0},\ldots,\lambda_{m}$ with $\lambda_{0}=1$, then $x$ is certain to minimize $f$ over $\mathcal{X}$.
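As a worked toy example (a sketch, not from the article), conditions (1)–(3) with λ₀ = 1 can be checked numerically for the problem of minimizing x² subject to 1 − x ≤ 0, whose constrained minimizer is x* = 1 with multiplier λ₁ = 2:

```python
import numpy as np

# Minimal sketch: verify the Lagrange-multiplier conditions (1)-(3)
# for  minimize x^2  subject to  1 - x <= 0  (minimizer x* = 1, lambda_1 = 2).
f = lambda x: x**2          # convex objective
g = lambda x: 1.0 - x       # convex inequality constraint, g(x) <= 0

x_star, lam0, lam1 = 1.0, 1.0, 2.0
L = lambda y: lam0 * f(y) + lam1 * g(y)   # Lagrangian with fixed multipliers

ys = np.linspace(-5.0, 5.0, 10001)
assert L(x_star) <= L(ys).min() + 1e-9                   # (1) x* minimizes L over all y
assert lam0 >= 0 and lam1 >= 0 and max(lam0, lam1) > 0   # (2) nonnegative, at least one positive
assert abs(lam1 * g(x_star)) < 1e-12                     # (3) complementary slackness
print("conditions (1)-(3) hold with lambda_0 = 1")
```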



Methods


Convex minimization problems can be solved by the following contemporary methods:[8]



  • "Bundle methods" (Wolfe, Lemaréchal, Kiwiel),


  • Subgradient projection methods (Polyak), and


  • Interior-point methods (Nemirovskii and Nesterov).


Other methods of interest:



  • Cutting-plane methods

  • Ellipsoid method

  • Subgradient method

  • Dual subgradients and the drift-plus-penalty method


Subgradient methods can be implemented simply and so are widely used.[9] Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.
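As an illustration of that simplicity (a minimal sketch, not one of the cited methods; it assumes NumPy and a toy problem), a projected subgradient iteration with a diminishing stepsize might look like this:

```python
import numpy as np

# Minimal sketch: projected subgradient method for
#   minimize |x - 3|  subject to  x in [0, 2],
# whose constrained minimizer is x* = 2.
def subgradient(x):
    return np.sign(x - 3.0)        # a subgradient of |x - 3|

def project(x):
    return np.clip(x, 0.0, 2.0)    # projection onto the feasible interval

x = 0.0
for k in range(1, 1001):
    step = 1.0 / k                 # diminishing ("divergent-series") stepsize
    x = project(x - step * subgradient(x))

print(x)   # approaches 2.0, the constrained minimizer
```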



Convex minimization with good complexity: Self-concordant barriers


The efficiency of iterative methods is poor for the class of convex problems as a whole, because this class includes "bad guys" whose minimum cannot be approximated without a large number of function and subgradient evaluations;[10] thus, to have practically appealing efficiency results, it is necessary to make additional restrictions on the class of problems. Two such classes are problems with special barrier functions: first, self-concordant barrier functions, according to the theory of Nesterov and Nemirovskii, and second, self-regular barrier functions, according to the theory of Terlaky and coauthors.
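For intuition only (a hand-rolled sketch, not the Nesterov–Nemirovskii algorithm), the logarithmic barrier −log(−g(x)) is the canonical self-concordant barrier for an inequality constraint g(x) ≤ 0; a crude path-following loop on a one-dimensional problem might look like this:

```python
# Minimal sketch: log-barrier path following for
#   minimize x  subject to  1 - x <= 0   (optimum x* = 1),
# by minimizing t*x - log(x - 1) with damped Newton steps for growing t.
x = 2.0
for t in [1.0, 10.0, 100.0, 1000.0]:
    for _ in range(50):
        grad = t - 1.0 / (x - 1.0)
        hess = 1.0 / (x - 1.0) ** 2
        step = grad / hess
        while x - step <= 1.0:     # damping: keep the iterate strictly feasible
            step *= 0.5
        x -= step
print(x)   # approaches the optimum x* = 1 as t grows
```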



Quasiconvex minimization


Problems with convex level sets can be efficiently minimized, in theory.
Yurii Nesterov proved that quasi-convex minimization problems could be solved efficiently, and his results were extended by Kiwiel.[11] However, such theoretically "efficient" methods use "divergent-series" stepsize rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods.


Solving even close-to-convex but non-convex problems can be computationally intractable. Minimizing a unimodal function is intractable, regardless of the smoothness of the function, according to results of Ivanov.[12]



Convex maximization


Conventionally, the definition of the convex optimization problem (we recall) requires that the objective function f to be minimized and the feasible set both be convex. In the special case of linear programming (LP), the objective function is both concave and convex, and so LP can also consider the problem of maximizing an objective function without confusion. However, for most convex minimization problems, the objective function is not concave; maximizing it is therefore not itself a convex problem, and such problems are instead formulated in the standard form of convex optimization problems, that is, minimizing the convex objective function.


For nonlinear convex minimization, the associated maximization problem obtained by substituting the supremum operator for the infimum operator is not a problem of convex optimization, as conventionally defined. However, it is studied in the larger field of convex optimization as a problem of convex maximization.[13]


The convex maximization problem is especially important for studying the existence of maxima. Consider the restriction of a convex function to a compact convex set: Then, on that set, the function attains its constrained maximum only on the boundary.[14] Such results, called "maximum principles", are useful in the theory of harmonic functions, potential theory, and partial differential equations.
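A small numerical illustration of this maximum principle (a sketch, not from the cited sources, assuming NumPy): maximizing the convex function f(x, y) = x² + y² over the box [−1, 2] × [−1, 1] on a grid, the largest value occurs at a corner of the box, which lies on the boundary.

```python
import numpy as np

# Minimal sketch: a convex function on a compact convex set attains its
# maximum on the boundary.  Here f(x, y) = x^2 + y^2 over [-1, 2] x [-1, 1].
xs = np.linspace(-1.0, 2.0, 301)
ys = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, ys)
F = X**2 + Y**2

i, j = np.unravel_index(np.argmax(F), F.shape)
print(X[i, j], Y[i, j], F[i, j])   # a corner of the box, value 5.0
```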


The problem of minimizing a quadratic multivariate polynomial on a cube is NP-hard.[15] In fact, the quadratic minimization problem is NP-hard even when the matrix has only one negative eigenvalue.[16]



Extensions


Advanced treatments consider convex functions that can also attain positive infinity; for example, the indicator function of convex analysis is zero for every $x\in\mathcal{X}$ and positive infinity otherwise.
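Written out (this is the standard convex-analysis definition, stated here for completeness), the indicator function of the convex set $\mathcal{X}$ is

$$\iota_{\mathcal{X}}(x)=\begin{cases}0,&x\in\mathcal{X},\\+\infty,&x\notin\mathcal{X},\end{cases}$$

so that minimizing $f+\iota_{\mathcal{X}}$ over all of $X$ is equivalent to minimizing $f$ over $\mathcal{X}$.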


Extensions of convex functions include biconvex, pseudo-convex, and quasi-convex functions. Partial extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity ("abstract convex analysis").



See also



  • Duality

  • Karush–Kuhn–Tucker conditions

  • Optimization problem

  • Proximal gradient method



Notes




  1. ^ Rockafellar, R. Tyrrell (1993). "Lagrange multipliers and optimality" (PDF). SIAM Review. 35 (2): 183–238. doi:10.1137/1035044.


  2. ^ Boyd/Vandenberghe, p. 17.


  3. ^ Christensen/Klarbring, chapter 4.


  4. ^ Boyd/Vandenberghe, chapter 7.


  5. ^ Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1996). Convex analysis and minimization algorithms: Fundamentals. p. 291.


  6. ^ Ben-Tal, Aharon; Nemirovskiĭ, Arkadiĭ Semenovich (2001). Lectures on modern convex optimization: analysis, algorithms, and engineering applications. pp. 335–336.


  7. ^ Boyd/Vandenberghe, p. 7


  8. ^ For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by Ruszczyński, Bertsekas, and Boyd and Vandenberghe (interior point).



  9. ^ Bertsekas


  10. ^ Hiriart-Urruty & Lemaréchal (1993, Example XV.1.1.2, p. 277) discuss a "bad guy" constructed by Arkadi Nemirovskii.


  11. ^ In theory, quasiconvex programming and convex programming problems can be solved in a reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated):


    Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming (Series A). 90 (1). Berlin, Heidelberg: Springer. pp. 1–25. doi:10.1007/PL00011414. ISSN 0025-5610. MR 1819784. Kiwiel acknowledges that Yurii Nesterov first established that quasiconvex minimization problems can be solved efficiently.




  12. ^ Nemirovskii and Judin


  13. ^ Convex maximization is mentioned in the subsection on convex optimization in this textbook:
    Ulrich Faigle, Walter Kern, and George Still. Algorithmic principles of mathematical programming. Springer-Verlag. Texts in Mathematics. Chapter 10.2, Subsection "Convex optimization", pages 205-206.



  14. ^ Theorem 32.1 in Rockafellar's Convex Analysis states this maximum principle for extended real-valued functions.


  15. ^ Sahni, S. "Computationally related problems," in SIAM Journal on Computing, 3, 262--279, 1974.


  16. ^ Quadratic programming with one negative eigenvalue is NP-hard, Panos M. Pardalos and Stephen A. Vavasis in Journal of Global Optimization, Volume 1, Number 1, 1991, pg.15-22.



References




  • Bertsekas, Dimitri P.; Nedic, Angelia; Ozdaglar, Asuman (2003). Convex Analysis and Optimization. Belmont, MA.: Athena Scientific. ISBN 1-886529-45-0.


  • Bertsekas, Dimitri P. (2009). Convex Optimization Theory. Belmont, MA.: Athena Scientific. ISBN 978-1-886529-31-1.


  • Bertsekas, Dimitri P. (2015). Convex Optimization Algorithms. Belmont, MA.: Athena Scientific. ISBN 978-1-886529-28-1.



  • Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (pdf). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.



  • Borwein, Jonathan, and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer.

  • Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex analysis. Berlin: Springer.


  • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex analysis and minimization algorithms, Volume I: Fundamentals. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 305. Berlin: Springer-Verlag. pp. xviii+417. ISBN 3-540-56850-6. MR 1261420.


  • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 306. Berlin: Springer-Verlag. pp. xviii+346. ISBN 3-540-56852-2. MR 1295240.


  • Kiwiel, Krzysztof C. (1985). Methods of Descent for Nondifferentiable Optimization. Lecture Notes in Mathematics. New York: Springer-Verlag. ISBN 978-3-540-15642-0.


  • Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger and Denis Naddef. Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN 3-540-42877-1. MR 1900016.

  • Nesterov, Y.; Nemirovsky, A. (1994). Interior Point Polynomial Methods in Convex Programming. SIAM.

  • Nesterov, Yurii (2004). Introductory Lectures on Convex Optimization. Kluwer Academic Publishers.


  • Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.



  • Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton University Press.


  • Christensen, Peter W.; Anders Klarbring (2008). An introduction to structural optimization. 153. Springer Science & Business Media.


External links







  • Stephen Boyd and Lieven Vandenberghe, Convex optimization (book in pdf)


  • EE364a: Convex Optimization I and EE364b: Convex Optimization II, Stanford course homepages


  • 6.253: Convex Analysis and Optimization, an MIT OCW course homepage

  • Brian Borchers, An overview of software for convex optimization








