
    Convergence of the Lasserre Hierarchy of SDP Relaxations for Convex Polynomial Programs without Compactness

    The Lasserre hierarchy of semidefinite programming (SDP) relaxations is an effective scheme for finding computationally feasible SDP approximations of polynomial optimization over compact semi-algebraic sets. In this paper, we show that, for convex polynomial optimization, the Lasserre hierarchy with a slightly extended quadratic module always converges asymptotically, even in the face of non-compact semi-algebraic feasible sets. We do this by exploiting a coercivity property of convex polynomials that are bounded below. We further establish that positive definiteness of the Hessian of the associated Lagrangian at a saddle point (rather than of the objective function at each minimizer) guarantees finite convergence of the hierarchy. We obtain finite convergence by first establishing a new sum-of-squares polynomial representation of convex polynomials over convex semi-algebraic sets under a saddle-point condition. We finally prove that the existence of a saddle point of the Lagrangian for a convex polynomial program is also necessary for the hierarchy to have finite convergence. Comment: 17 pages.
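    The saddle-point condition tied to finite convergence can be checked numerically on a toy instance. The sketch below is a hypothetical illustration of that condition only (it does not build the Lasserre SDP hierarchy itself): for a small convex polynomial program it verifies KKT stationarity of the Lagrangian and positive definiteness of its Hessian in x; the problem, the saddle point, and the finite-difference scheme are all my own choices.

    ```python
    # Toy convex polynomial program (hypothetical example, not from the paper):
    #   minimize  f(x) = x1^2 + x2^2   subject to  g(x) = 1 - x1 - x2 <= 0
    # The saddle point of the Lagrangian is x* = (0.5, 0.5), lambda* = 1.

    def lagrangian(x1, x2, lam):
        return x1 ** 2 + x2 ** 2 + lam * (1.0 - x1 - x2)

    def grad_x(x1, x2, lam, h=1e-6):
        """Central-difference gradient of the Lagrangian in x."""
        gx = (lagrangian(x1 + h, x2, lam) - lagrangian(x1 - h, x2, lam)) / (2 * h)
        gy = (lagrangian(x1, x2 + h, lam) - lagrangian(x1, x2 - h, lam)) / (2 * h)
        return gx, gy

    def hessian_x(x1, x2, lam, h=1e-3):
        """Finite-difference Hessian of the Lagrangian in x (2x2, symmetric)."""
        f0 = lagrangian(x1, x2, lam)
        hxx = (lagrangian(x1 + h, x2, lam) - 2 * f0 + lagrangian(x1 - h, x2, lam)) / h ** 2
        hyy = (lagrangian(x1, x2 + h, lam) - 2 * f0 + lagrangian(x1, x2 - h, lam)) / h ** 2
        hxy = (lagrangian(x1 + h, x2 + h, lam) - lagrangian(x1 + h, x2 - h, lam)
               - lagrangian(x1 - h, x2 + h, lam) + lagrangian(x1 - h, x2 - h, lam)) / (4 * h ** 2)
        return hxx, hxy, hyy

    x_star, lam_star = (0.5, 0.5), 1.0
    gx, gy = grad_x(*x_star, lam_star)
    hxx, hxy, hyy = hessian_x(*x_star, lam_star)
    # Eigenvalues of a symmetric 2x2 matrix from its trace and determinant.
    tr, det = hxx + hyy, hxx * hyy - hxy ** 2
    eig_min = tr / 2 - max(tr * tr / 4.0 - det, 0.0) ** 0.5

    stationary = abs(gx) < 1e-6 and abs(gy) < 1e-6
    pos_def = eig_min > 0
    print(stationary, pos_def)
    ```

    For this instance the Hessian of the Lagrangian in x is 2I, so both checks pass and the sufficient condition for finite convergence holds.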

    Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization

    Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel subgradient calculations and a global subgradient update performed by a master node. In this paper, we propose a consensus-based dual decomposition that removes the need for such a master node while still enabling the computing nodes to generate an approximate dual solution for the underlying convex optimization problem. In addition, we provide a primal recovery mechanism that gives the nodes access to approximate, near-optimal primal solutions. Our scheme is based on a constant stepsize choice, and convergence of the dual and primal objectives is achieved up to a bounded error floor that depends on the stepsize and on the number of consensus steps among the nodes.
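    The master-free scheme can be sketched on a small separable problem. The details below (problem data, ring topology, stepsize, iteration count) are my own choices, not the paper's exact method: each node keeps a local dual estimate, one round consists of consensus averaging with its neighbours followed by a constant-stepsize dual subgradient step, and primal recovery is the running average of the local primal iterates.

    ```python
    # Hypothetical instance:  minimize sum_i (x_i - c_i)^2  s.t.  sum_i x_i = b.
    c, b = [1.0, 2.0, 3.0], 3.0
    n = len(c)
    alpha, rounds = 0.1, 400          # constant stepsize -> bounded error floor

    lam = [0.0] * n                   # local dual variables, one per node
    x_avg = [0.0] * n                 # running (ergodic) primal averages

    for k in range(1, rounds + 1):
        # One consensus step on a 3-node ring (Metropolis weights are all 1/3 here).
        lam = [(lam[i - 1] + lam[i] + lam[(i + 1) % n]) / 3.0 for i in range(n)]
        # Local primal minimization: argmin_x (x - c_i)^2 + lam_i * x = c_i - lam_i/2.
        x = [c[i] - lam[i] / 2.0 for i in range(n)]
        # Dual subgradient step on the node's share of the coupling residual.
        lam = [lam[i] + alpha * (x[i] - b / n) for i in range(n)]
        # Primal recovery: running average of the local primal iterates.
        x_avg = [x_avg[i] + (x[i] - x_avg[i]) / k for i in range(n)]

    # For this instance the exact solution is lam* = 2 and x* = (0, 1, 2);
    # the recovered primal averages land within an error floor set by alpha.
    print([round(v, 2) for v in lam], [round(v, 2) for v in x_avg])
    ```

    With a constant stepsize the local dual variables do not agree exactly: they hover around the optimum within a band proportional to alpha, matching the bounded error floor described in the abstract.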

    A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

    We present a primal-dual algorithmic framework for obtaining approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect its numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers (ADMM) as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all of them. Comment: 54 pages, 7 tables, and 12 figures.
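    One of the special cases the framework subsumes, the classical augmented Lagrangian (method of multipliers), can be sketched on a toy equality-constrained problem. This is a hedged illustration of that classical special case only, not the paper's framework; the penalty parameter, step sizes, and iteration counts are ad hoc choices for this instance.

    ```python
    # Toy problem:  minimize x1^2 + x2^2  subject to  x1 + x2 = 1,
    # whose solution is x* = (0.5, 0.5) with multiplier y* = -1.

    rho = 5.0                      # penalty parameter
    y = 0.0                        # dual multiplier
    x1 = x2 = 0.0

    def grad_aug_lagrangian(x1, x2, y, rho):
        r = x1 + x2 - 1.0          # equality-constraint residual
        g1 = 2 * x1 + y + rho * r
        g2 = 2 * x2 + y + rho * r
        return g1, g2

    for _ in range(50):            # outer multiplier updates
        for _ in range(200):       # inner (approximate) primal minimization
            g1, g2 = grad_aug_lagrangian(x1, x2, y, rho)
            x1 -= 0.05 * g1
            x2 -= 0.05 * g2
        y += rho * (x1 + x2 - 1.0) # multiplier (dual ascent) step

    print(round(x1, 4), round(x2, 4), round(y, 4))   # → 0.5 0.5 -1.0
    ```

    The two quantities the abstract tracks, the primal objective residual and the primal feasibility gap (here |x1 + x2 - 1|), both shrink as the outer multiplier updates proceed.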

    Global Solutions to Nonconvex Optimization of 4th-Order Polynomial and Log-Sum-Exp Functions

    This paper presents a canonical dual approach for solving a nonconvex global optimization problem governed by the sum of a fourth-order polynomial and a log-sum-exp function. Such problems arise extensively in engineering and the sciences. Based on the canonical duality-triality theory, this nonconvex problem is transformed into an equivalent dual problem, which can be solved easily under certain conditions. We prove that both the global minimizer and the largest local extrema of the primal problem can be obtained analytically from the canonical dual solutions. As two special cases, a quartic polynomial minimization and a minimax problem are discussed. Existence conditions are derived, which can be used to classify easy and relatively hard instances. Applications are illustrated by several nonconvex and nonsmooth examples.
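    The problem class itself is easy to exhibit. The sketch below is illustrative only and does not reproduce the canonical duality approach: on a hypothetical one-dimensional instance of a quartic plus a log-sum-exp term, it brackets sign changes of f' on a grid, refines each by bisection, and classifies the stationary points with f'', separating the global minimizer from the other local extrema.

    ```python
    # Toy instance (my own choice):  f(x) = (x^2 - 1)^2 + log(exp(2x) + 1),
    # a fourth-order polynomial plus a log-sum-exp of {2x, 0}; nonconvex, with
    # two local minima and one local maximum on [-2, 2].
    import math

    def f(x):
        return (x * x - 1.0) ** 2 + math.log(math.exp(2.0 * x) + 1.0)

    def fprime(x):
        s = 1.0 / (1.0 + math.exp(-2.0 * x))      # sigmoid(2x)
        return 4.0 * x * (x * x - 1.0) + 2.0 * s

    def fsecond(x):
        s = 1.0 / (1.0 + math.exp(-2.0 * x))
        return 12.0 * x * x - 4.0 + 4.0 * s * (1.0 - s)

    # Bracket sign changes of f' on a grid over [-2, 2], then bisect each bracket.
    stationary = []
    grid = [-2.0 + 4.0 * i / 400 for i in range(401)]
    for a, b in zip(grid, grid[1:]):
        if fprime(a) * fprime(b) < 0:
            lo, hi = a, b
            for _ in range(60):                   # bisection to machine precision
                mid = (lo + hi) / 2
                if fprime(lo) * fprime(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            stationary.append((lo + hi) / 2)

    minima = [x for x in stationary if fsecond(x) > 0]
    maxima = [x for x in stationary if fsecond(x) < 0]
    x_global = min(minima, key=f)                 # global minimizer among the minima
    print([round(x, 3) for x in stationary], round(x_global, 3))
    ```

    The asymmetric log-sum-exp term makes the left well strictly lower than the right one, so a purely local method started in the wrong basin would miss the global minimizer; this is the kind of instance the existence conditions aim to classify.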

    A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization

    We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications. Our optimization algorithms feature optimal convergence guarantees under a variety of common structural assumptions on the problem template. Our analysis relies on a novel combination of three classic ideas applied to the primal-dual gap function: smoothing, acceleration, and homotopy. The algorithms due to the new approach achieve the best known convergence rates, in particular when the template consists of only non-smooth functions. We also outline a restart strategy for the acceleration step that significantly enhances practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence on two examples and illustrate that the new methods can outperform the state of the art, including the Chambolle-Pock and alternating direction method of multipliers (ADMM) algorithms. Comment: 35 pages; accepted for publication in SIAM J. Optimization. Tech. report, Oct. 2015 (last update Sept. 2016).
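    The three ingredients named above can be sketched on a one-dimensional composite problem. This is my own hedged construction, not the paper's algorithm: smoothing replaces the absolute value with its Huber approximation, acceleration is Nesterov's momentum on the smoothed (hence gradient-Lipschitz) objective, and homotopy shrinks the smoothing parameter between stages.

    ```python
    # Toy composite problem:  minimize F(x) = 0.5 * x^2 + |x - 2|,
    # whose (nonsmooth) minimizer is x* = 1.

    def grad_smoothed(x, mu):
        r = x - 2.0
        # Gradient of the Huber smoothing of |r| (Lipschitz constant 1/mu).
        g_abs = r / mu if abs(r) <= mu else (1.0 if r > 0 else -1.0)
        return x + g_abs

    x = x_prev = 0.0
    for mu in (1.0, 0.1, 0.01):          # homotopy: tighter smoothing each stage
        L = 1.0 + 1.0 / mu               # Lipschitz constant of the smoothed gradient
        t_prev = 1.0
        for _ in range(500):
            t = 0.5 * (1.0 + (1.0 + 4.0 * t_prev * t_prev) ** 0.5)
            y_ = x + ((t_prev - 1.0) / t) * (x - x_prev)   # momentum extrapolation
            x_prev, x = x, y_ - grad_smoothed(y_, mu) / L  # gradient step at y_
            t_prev = t

    print(round(x, 3))   # close to the true minimizer x* = 1
    ```

    Smoothing alone would only reach the minimizer of the mu-smoothed surrogate; shrinking mu along the homotopy is what drives the iterates toward the minimizer of the original nonsmooth objective while acceleration keeps each stage cheap.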