
    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring the primal and the dual problems into play is, however, a more recent idea, and it has generated many important contributions in the last few years. These developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization, with an emphasis on sparsity issues. In this paper, we present the principles of primal-dual approaches and give an overview of numerical methods that have been proposed in different contexts. We show the benefits that primal-dual algorithms offer for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
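    As one concrete representative of the algorithms surveyed here, the following is a minimal sketch of a well-known primal-dual scheme, the primal-dual hybrid gradient (Chambolle-Pock) method, applied to an l1-regularized least-squares problem. The problem instance, function names, and step-size choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pdhg_l1_least_squares(K, b, lam, iters=500):
    """Primal-dual hybrid gradient (Chambolle-Pock) sketch for
    min_x 0.5*||K x - b||^2 + lam*||x||_1,
    viewed as min_x f(K x) + g(x) with f = 0.5*||. - b||^2, g = lam*||.||_1."""
    m, n = K.shape
    L = np.linalg.norm(K, 2)          # spectral norm of K
    tau = sigma = 0.99 / L            # step sizes with sigma*tau*L**2 < 1
    x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # dual step: prox of sigma*f^*, with f(z) = 0.5*||z - b||^2
        y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
        # primal step: prox of tau*g, i.e. soft-thresholding at tau*lam
        x_new = x - tau * (K.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        x_bar = 2.0 * x_new - x       # extrapolation on the primal iterate
        x = x_new
    return x

# Illustrative use on random data.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
x_hat = pdhg_l1_least_squares(K, b, lam=0.1)
```

    The appeal of the primal-dual structure is visible in the two prox steps: neither requires inverting K, which is what makes such schemes attractive at large scale.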

    On the Global Linear Convergence of the ADMM with Multi-Block Variables

    The alternating direction method of multipliers (ADMM) has been widely used for solving structured convex optimization problems. In particular, the ADMM can solve convex programs that minimize the sum of N convex functions with N-block variables linked by some linear constraints. While the convergence of the ADMM for N = 2 was well established in the literature, it remained an open problem for a long time whether or not the ADMM for N ≥ 3 is still convergent. Recently, it was shown in [3] that without further conditions the ADMM for N ≥ 3 may actually fail to converge. In this paper, we show that under some easily verifiable and reasonable conditions the global linear convergence of the ADMM when N ≥ 3 can still be assured, which is important since the ADMM is a popular method for solving large-scale multi-block optimization models and is known to perform very well in practice even when N ≥ 3. Our study aims to offer an explanation for this phenomenon.
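    For reference, the direct N-block extension of ADMM discussed in the abstract performs one Gauss-Seidel sweep over the blocks of the augmented Lagrangian followed by a multiplier update; a standard statement of the iteration (notation is ours, not quoted from the paper) is:

```latex
% Multi-block ADMM (direct extension) for
%   minimize  f_1(x_1) + ... + f_N(x_N)
%   subject to  A_1 x_1 + ... + A_N x_N = b,
% with augmented Lagrangian
\[
L_\rho(x_1,\dots,x_N;\lambda) = \sum_{i=1}^{N} f_i(x_i)
  + \lambda^\top \Big( \sum_{i=1}^{N} A_i x_i - b \Big)
  + \frac{\rho}{2} \Big\| \sum_{i=1}^{N} A_i x_i - b \Big\|^2 .
\]
% One iteration: sweep over the blocks, then update the multiplier.
\[
x_i^{k+1} = \arg\min_{x_i}\,
  L_\rho\big(x_1^{k+1},\dots,x_{i-1}^{k+1}, x_i, x_{i+1}^{k},\dots,x_N^{k};\lambda^k\big),
  \qquad i = 1,\dots,N,
\]
\[
\lambda^{k+1} = \lambda^k + \rho \Big( \sum_{i=1}^{N} A_i x_i^{k+1} - b \Big).
\]
```

    For N = 2 this reduces to the classical ADMM; the counterexample in [3] shows that the same sweep can diverge once N ≥ 3 unless additional conditions, such as those established in this paper, hold.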

    A Parametric Non-Convex Decomposition Algorithm for Real-Time and Distributed NMPC

    A novel decomposition scheme for solving parametric non-convex programs as they arise in Nonlinear Model Predictive Control (NMPC) is presented. It consists of a fixed number of alternating proximal gradient steps and a dual update per time step. Hence, the proposed approach is attractive in a real-time distributed context. Assuming that the Nonlinear Program (NLP) is semi-algebraic and that its critical points are strongly regular, contraction of the sequence of primal-dual iterates is proven under some mild assumptions, implying stability of the sub-optimality error. Moreover, it is shown that the performance of the optimality-tracking scheme can be enhanced via a continuation technique. The efficacy of the proposed decomposition method is demonstrated by solving a centralised NMPC problem to control a DC motor and a distributed NMPC program for collaborative tracking of unicycles, both within a real-time framework. Furthermore, an analysis of the sub-optimality error as a function of the sampling period is proposed for a fixed computational power.

    Comment: 16 pages, 9 figures
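    The sketch below illustrates the kind of per-sample update the abstract describes: a fixed number of alternating proximal gradient steps on an augmented Lagrangian, followed by a single dual update. The splitting, parameter names (rho, alpha, inner), and step rule are assumptions for illustration; the paper's actual scheme and its assumptions differ in detail.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1; stands in for any prox-friendly non-smooth term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def real_time_step(x, z, lam, A, B, c, prox_f, prox_g,
                   rho=1.0, alpha=0.05, inner=3):
    """One sampling instant for  min f(x) + g(z)  s.t.  A x + B z = c:
    'inner' alternating proximal gradient steps on the augmented
    Lagrangian, then one dual update."""
    for _ in range(inner):
        r = A @ x + B @ z - c                         # coupling residual
        x = prox_f(x - alpha * A.T @ (lam + rho * r), alpha)
        r = A @ x + B @ z - c
        z = prox_g(z - alpha * B.T @ (lam + rho * r), alpha)
    lam = lam + rho * (A @ x + B @ z - c)             # single dual update
    return x, z, lam
```

    In an optimality-tracking loop, the parameter (e.g. the measured state entering c) changes at every sampling instant and the scheme is warm-started from the previous primal-dual iterate, which is why a fixed, small number of inner steps per time step can keep the sub-optimality error stable.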

    Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

    It is well known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure, and that therein lies the computational bottleneck for SOS programs: the SDPs generated by this procedure are large and costly to solve when the polynomials involved have many variables and high degree. In this paper, we review SOS optimization techniques and present two new methods for improving their computational efficiency. The first method leverages the sparsity of the underlying SDP to obtain computational speed-ups. Further improvements can be obtained if the coefficients of the polynomials that describe the problem have a particular sparsity pattern, called chordal sparsity. The second method bypasses semidefinite programming altogether and relies instead on solving a sequence of more tractable convex programs, namely linear and second-order cone programs. This opens up the question of how well one can approximate the cone of SOS polynomials by second-order-cone-representable cones. In the last part of the paper, we present some recent negative results related to this question.

    Comment: Tutorial for CDC 2017
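    To make the SOS-to-SDP reduction in the opening sentence concrete, here is a minimal sketch using CVXPY (a tool choice assumed for illustration, not the authors'): it certifies that p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2 is a sum of squares by finding a positive semidefinite Gram matrix Q with p(x) = m(x)^T Q m(x) for the monomial basis m(x) = [1, x, x^2].

```python
import cvxpy as cp

# Gram-matrix formulation: p is SOS iff there is a PSD Q with
# p(x) = [1, x, x^2] Q [1, x, x^2]^T; matching coefficients of p
# gives the linear constraints below.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                        # the semidefinite constraint
    Q[0, 0] == 1,                  # coefficient of 1
    2 * Q[0, 1] == 0,              # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -2,   # coefficient of x^2
    2 * Q[1, 2] == 0,              # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal" => an SOS certificate exists
```

    One well-known route to the LP- and SOCP-based alternatives mentioned as the second method is to replace the semidefinite constraint on Q with (scaled) diagonal dominance, which is one way the question of approximating the SOS cone by second-order-cone-representable cones arises.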