
    Convergence of the Lasserre Hierarchy of SDP Relaxations for Convex Polynomial Programs without Compactness

    The Lasserre hierarchy of semidefinite programming (SDP) relaxations is an effective scheme for finding computationally feasible SDP approximations of polynomial optimization problems over compact semi-algebraic sets. In this paper, we show that, for convex polynomial optimization, the Lasserre hierarchy with a slightly extended quadratic module always converges asymptotically, even for non-compact semi-algebraic feasible sets. We do this by exploiting a coercivity property of convex polynomials that are bounded below. We further establish that positive definiteness of the Hessian of the associated Lagrangian at a saddle point (rather than of the objective function at each minimizer) guarantees finite convergence of the hierarchy. We obtain finite convergence by first establishing a new sum-of-squares polynomial representation of convex polynomials over convex semi-algebraic sets under a saddle-point condition. We finally prove that the existence of a saddle point of the Lagrangian for a convex polynomial program is also necessary for the hierarchy to have finite convergence.
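    For context, here is a schematic statement of the degree-2k level of the Lasserre hierarchy for minimizing a polynomial f over K = {x : g_1(x) >= 0, ..., g_m(x) >= 0}. This is the standard Putinar/SOS form, not quoted from the paper, which works with a slightly extended quadratic module:

```latex
% Degree-2k SOS relaxation of  min_{x in K} f(x),  K = {x : g_j(x) >= 0, j = 1,...,m}.
% Schematic/standard form; the paper uses a slightly extended quadratic module.
\begin{equation*}
  \rho_k \;=\; \sup_{\lambda,\,\sigma_j}\Big\{\, \lambda \;:\;
      f - \lambda \;=\; \sigma_0 + \sum_{j=1}^{m} \sigma_j\, g_j,\quad
      \sigma_j \ \text{SOS},\ \ \deg(\sigma_0),\ \deg(\sigma_j g_j) \le 2k \,\Big\}.
\end{equation*}
```

    Each level is a single SDP, and the monotone convergence of the values $\rho_k$ to the optimal value is the asymptotic convergence referred to above.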

    SOS-convex Semi-algebraic Programs and its Applications to Robust Optimization: A Tractable Class of Nonsmooth Convex Optimization

    In this paper, we introduce a new class of nonsmooth convex functions, called SOS-convex semi-algebraic functions, extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications, such as the Euclidean norm, the maximum eigenvalue function, and least squares functions with $\ell_1$-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of an SOS-convex semi-algebraic program can be found by solving a single semidefinite programming (SDP) problem. We achieve these results by using tools from semi-algebraic geometry, a convex-concave minimax theorem, and a recently established Jensen-type inequality for SOS-convex polynomials. As an application, we outline how the derived results can be applied to show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers the open questions left in [Optimization Letters 9, 1-18 (2015)] on how to recover a robust solution from the SDP relaxation in this broader setting.
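    For background, the standard notion of SOS-convexity that this class extends can be stated as follows (textbook definition, not taken verbatim from the paper):

```latex
% Background definition: a polynomial f is SOS-convex when y^T \nabla^2 f(x) y is a
% sum-of-squares polynomial in the joint variables (x, y); this is an SDP-checkable
% (tractable) sufficient condition for convexity of f.
\begin{equation*}
  f \ \text{is SOS-convex} \quad\Longleftrightarrow\quad
  (x,y) \mapsto y^{\top} \nabla^{2} f(x)\, y \ \in\ \Sigma[x,y],
\end{equation*}
% where \Sigma[x,y] denotes the cone of sum-of-squares polynomials in (x, y).
```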

    Radius of robust feasibility formulas for classes of convex programs with uncertain polynomial constraints

    The radius of robust feasibility of a convex program with uncertain constraints gives the maximal ‘size’ of an uncertainty set under which robust feasibility can be guaranteed. This paper provides an upper bound on the radius for convex programs with uncertain convex polynomial constraints, and exact formulas for convex programs with SOS-convex polynomial constraints (or convex quadratic constraints) under affine data uncertainty. These exact formulas allow the radius to be computed with commonly available software. The first author would like to thank the University of New South Wales for its support during his stay in November/December 2014. This research was partially supported by the Australian Research Council, Discovery Project DP120100467, the MINECO of Spain and FEDER of EU, Grant MTM2014-59179-C2-1-P.
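    As a rough schematic of the quantity involved (with assumed notation, not drawn from the paper): writing U_alpha for the uncertainty set scaled to ‘size’ alpha and F(U_alpha) for the corresponding robust feasible set, the radius of robust feasibility is the largest scaling for which robust feasibility survives:

```latex
% Schematic definition with assumed notation: U_alpha is the uncertainty set of size alpha,
% and F(U_alpha) is the robust feasible set, i.e. the points feasible for every data realization.
\begin{equation*}
  \rho \;=\; \sup\big\{\, \alpha \ge 0 \;:\; F(U_\alpha) \neq \emptyset \,\big\},
  \qquad
  F(U_\alpha) \;=\; \{\, x \;:\; g_j(x,u_j) \le 0 \ \ \forall\, u_j \in U_\alpha,\ j=1,\dots,m \,\}.
\end{equation*}
```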

    On the solution existence and stability of polynomial optimization problems

    This paper introduces and investigates a regularity condition, in the asymptotic sense, for optimization problems whose objective functions are polynomials. We prove two sufficient conditions for the existence of solutions to polynomial optimization problems. Further, when the constraint sets are semi-algebraic, we establish results on the stability of the solution map of polynomial optimization problems. At the end of the paper, we discuss the genericity of the regularity condition.
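    A classical two-variable example (included here only for illustration; it is not taken from the paper) shows why such regularity conditions are needed: a polynomial that is bounded below need not attain its infimum.

```latex
% f is bounded below with infimum 0, yet the infimum is not attained:
% f(x,y) > 0 everywhere (x = 0 forces (xy-1)^2 = 1), while f(1/t, t) = 1/t^2 -> 0 as t -> infinity.
\begin{equation*}
  f(x,y) \;=\; x^{2} + (xy - 1)^{2}, \qquad
  \inf_{(x,y)\in\mathbb{R}^{2}} f \;=\; 0, \qquad
  f(x,y) > 0 \ \ \text{for all } (x,y).
\end{equation*}
```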

    A bounded degree SOS hierarchy for polynomial optimization

    We consider a new hierarchy of semidefinite relaxations for the general polynomial optimization problem $(P):\ f^{\ast}=\min\{f(x) : x\in K\}$ on a compact basic semi-algebraic set $K\subset\mathbb{R}^n$. This hierarchy combines some advantages of the standard LP-relaxations associated with Krivine's positivity certificate and some advantages of the standard SOS-hierarchy. In particular, it has the following attractive features: (a) in contrast to the standard SOS-hierarchy, for each relaxation in the hierarchy, the size of the matrix associated with the semidefinite constraint is the same and fixed in advance by the user; (b) in contrast to the LP-hierarchy, finite convergence occurs at the first step of the hierarchy for an important class of convex problems; and (c) some important techniques, related to the use of point evaluations for declaring a polynomial to be zero and to the use of rank-one matrices, make an efficient implementation possible. Preliminary results on a sample of nonconvex problems are encouraging.
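    A schematic form of such a bounded-degree relaxation (with the normalization 0 <= g_j <= 1 on K; notation assumed, not quoted from the paper) keeps the SOS multiplier at a fixed degree 2k while only the LP-type products grow with d:

```latex
% Bounded-degree SOS relaxation (schematic): the SOS part sigma has fixed degree <= 2k
% (so the semidefinite block size is fixed in advance), while nonnegative weights
% lambda_{alpha,beta} multiply Krivine/Handelman-type products of the g_j and 1 - g_j.
\begin{equation*}
  q_{d}^{k} \;=\; \sup\Big\{\, \lambda \;:\;
    f - \lambda \;-\; \sum_{|\alpha|+|\beta|\le d} \lambda_{\alpha\beta}
      \prod_{j=1}^{m} g_{j}^{\alpha_{j}} (1-g_{j})^{\beta_{j}} \;=\; \sigma,\ \
    \sigma \in \Sigma[x],\ \deg\sigma \le 2k,\ \lambda_{\alpha\beta} \ge 0 \,\Big\}.
\end{equation*}
```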

    From error bounds to the complexity of first-order descent methods for convex functions

    This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (as is the case for functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form $O(q^{k})$, where the constituents of the bound only depend on error bound constants obtained for an arbitrary least squares objective with $\ell^1$ regularization.
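    For concreteness, here is a minimal ISTA sketch for the $\ell^1$-regularized least squares problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$. It is the generic textbook scheme with constant step $1/L$, $L = \|A\|_2^2$, not code from the paper.

```python
import numpy as np

def ista(A, b, lam, num_iters=500):
    """Minimal ISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Generic textbook version (not taken from the paper): constant step 1/L with
    L = ||A||_2^2 (Lipschitz constant of the smooth part's gradient), and
    soft-thresholding as the proximal operator of lam*||.||_1.
    """
    L = np.linalg.norm(A, 2) ** 2              # spectral norm squared = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)               # gradient of the least squares term
        z = x - grad / L                       # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step: soft-thresholding
    return x

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[:3] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ista(A, b, lam=0.1)
```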