
    A Parametric Non-Convex Decomposition Algorithm for Real-Time and Distributed NMPC

    A novel decomposition scheme for solving parametric non-convex programs as they arise in Nonlinear Model Predictive Control (NMPC) is presented. It consists of a fixed number of alternating proximal gradient steps and a dual update per time step, which makes the proposed approach attractive in a real-time, distributed context. Assuming that the Nonlinear Program (NLP) is semi-algebraic and that its critical points are strongly regular, contraction of the sequence of primal-dual iterates is proven under some mild assumptions, implying stability of the sub-optimality error. Moreover, it is shown that the performance of the optimality-tracking scheme can be enhanced via a continuation technique. The efficacy of the proposed decomposition method is demonstrated by solving a centralised NMPC problem to control a DC motor and a distributed NMPC program for collaborative tracking of unicycles, both within a real-time framework. Furthermore, an analysis of the sub-optimality error as a function of the sampling period is proposed, given a fixed computational power. (Comment: 16 pages, 9 figures)
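To make the per-sample structure concrete, here is a minimal sketch of the pattern the abstract describes: a fixed number of gradient steps on an augmented Lagrangian followed by a single dual update per sampling instant. This is not the paper's exact scheme; the function names, the plain (non-proximal) gradient step, and the step sizes are all illustrative assumptions.

```python
import numpy as np

def rt_primal_dual_step(x, lam, grad_f, c, jac_c, rho=10.0, alpha=1e-2, n_inner=3):
    """One real-time iteration for  min f(x)  s.t.  c(x) = 0:
    a fixed number of gradient steps on the augmented Lagrangian,
    followed by a single dual update (hypothetical interface).

    grad_f(x) -> gradient of f; c(x) -> constraint residual;
    jac_c(x) -> constraint Jacobian."""
    for _ in range(n_inner):
        # gradient of L_rho(x, lam) = f(x) + lam^T c(x) + (rho/2)||c(x)||^2
        g = grad_f(x) + jac_c(x).T @ (lam + rho * c(x))
        x = x - alpha * g   # a prox step for any simple nonsmooth term would go here
    lam = lam + rho * c(x)  # single dual ascent step per sample
    return x, lam
```

Because the inner iteration count is fixed, the per-sample compute time is bounded, which is what makes such schemes viable in a real-time loop.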

    Fast Proximal Linearized Alternating Direction Method of Multiplier with Parallel Splitting

    The Augmented Lagrangian Method (ALM) and the Alternating Direction Method of Multipliers (ADMM) are powerful optimization methods for general convex programming subject to linear constraints. We consider the convex problem whose objective consists of a smooth part and a nonsmooth but simple part. We propose the Fast Proximal Augmented Lagrangian Method (Fast PALM), which achieves the convergence rate $O(1/K^2)$, compared with $O(1/K)$ for the traditional PALM. In order to further reduce the per-iteration complexity and handle multi-block problems, we propose the Fast Proximal ADMM with Parallel Splitting (Fast PL-ADMM-PS) method. It also partially improves the rate related to the smooth part of the objective function. Experimental results on both synthetic and real-world data demonstrate that our fast methods significantly improve on the previous PALM and ADMM. (Comment: AAAI 201)
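As a rough illustration of the accelerated flavour, the sketch below applies a FISTA-style extrapolation to proximal steps on the augmented Lagrangian of min f(x) + ||x||_1 s.t. Ax = b. The actual Fast PALM weighting and prox terms differ in detail; every name and parameter here is an assumption, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'simple' nonsmooth part)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fast_palm_sketch(grad_f, A, b, x0, rho=1.0, step=1e-2, n_iter=200):
    """Hedged FISTA-style sketch on the augmented Lagrangian of
    min f(x) + ||x||_1  s.t.  Ax = b  (illustrative only)."""
    x, y = x0.copy(), x0.copy()
    lam = np.zeros(A.shape[0])
    t = 1.0
    for _ in range(n_iter):
        # gradient of the smooth part of L_rho at the extrapolated point y
        g = grad_f(y) + A.T @ (lam + rho * (A @ y - b))
        x_new = soft_threshold(y - step * g, step)       # prox of ||.||_1
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov extrapolation
        lam = lam + rho * (A @ x_new - b)                # dual update
        x, t = x_new, t_new
    return x, lam
```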

    Asynchronous Distributed Optimization over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence

    In this work we focus on the problem of minimizing the sum of convex cost functions in a distributed fashion over a peer-to-peer network. In particular, we are interested in the case in which communications between nodes are prone to failures and the agents are not synchronized among themselves. We address the problem by proposing a modified version of the relaxed ADMM, which corresponds to the Peaceman-Rachford splitting method applied to the dual. By exploiting results from operator theory, we are able to prove the almost sure convergence of the proposed algorithm under general assumptions on the distribution of communication loss and node activation events. By further assuming the cost functions to be strongly convex, we prove the linear convergence in mean of the algorithm to a neighborhood of the optimal solution, and provide an upper bound on the convergence rate. Finally, we present numerical results testing the proposed method in different scenarios. (Comment: To appear in IEEE Transactions on Automatic Control)
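The operator-theoretic core is a relaxed (averaged) Peaceman-Rachford iteration in which only a random subset of coordinates is refreshed at each step. The toy, self-contained sketch below (a scalar-separable problem, not the networked algorithm of the paper; all names are illustrative) shows that randomized update pattern:

```python
import numpy as np

def prox_quad(v, a, gamma):
    """prox of gamma * 0.5||x - a||^2."""
    return (v + gamma * a) / (1.0 + gamma)

def prox_l1(v, t):
    """prox of t * ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lossy_relaxed_prs(a, lam=0.1, gamma=1.0, alpha=0.5, p_update=0.7,
                      n_iter=500, seed=0):
    """Randomized Krasnosel'skii-Mann iteration of the relaxed
    Peaceman-Rachford operator for  min 0.5||x - a||^2 + lam*||x||_1.
    Each coordinate of z is refreshed only with probability p_update,
    mimicking lost messages / inactive nodes (illustrative only)."""
    rng = np.random.default_rng(seed)
    z = np.zeros_like(a)
    for _ in range(n_iter):
        x = prox_quad(z, a, gamma)               # resolvent of f
        y = prox_l1(2.0 * x - z, gamma * lam)    # resolvent of g at reflected point
        z_full = z + 2.0 * alpha * (y - x)       # alpha-relaxed PRS step
        mask = rng.random(z.shape) < p_update    # random successful updates
        z = np.where(mask, z_full, z)
    return prox_quad(z, a, gamma)                # recover the primal iterate
```

With alpha = 0.5 this reduces to Douglas-Rachford; the almost-sure convergence results the abstract invokes are exactly of this randomized fixed-point type.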

    A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization

    We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications. Our optimization algorithms feature optimal convergence guarantees under a variety of common structural assumptions on the problem template. Our analysis relies on a novel combination of three classic ideas applied to the primal-dual gap function: smoothing, acceleration, and homotopy. The algorithms derived from the new approach achieve the best known convergence rates, in particular when the template consists only of non-smooth functions. We also outline a restart strategy for the acceleration that significantly enhances practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence with two examples and illustrate that the new methods can outperform the state-of-the-art, including the Chambolle-Pock and alternating direction method of multipliers algorithms. (Comment: 35 pages; accepted for publication in SIAM J. Optimization. Tech. report, Oct. 2015, last updated Sept. 2016)
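For reference, the smoothing-plus-homotopy ingredient can be summarised as follows; the notation (g, beta_k) is ours and not necessarily the paper's:

```latex
% Nesterov-type smoothing of a nonsmooth convex g via its conjugate g*:
% g_beta is differentiable with a (1/beta)-Lipschitz gradient, and a
% homotopy scheme drives beta_k -> 0 as the accelerated method proceeds.
\[
  g_{\beta}(x) \;=\; \max_{u}\,\Bigl\{ \langle x, u \rangle - g^{*}(u)
      - \tfrac{\beta}{2}\,\lVert u \rVert^{2} \Bigr\},
  \qquad
  \beta_{k} \;=\; \frac{\beta_{0}}{k+1}.
\]
```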

    A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

    We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all of these cases. (Comment: This paper consists of 54 pages with 7 tables and 12 figures)
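A hedged sketch of the constrained template and of dual smoothing with a center point, in our own notation rather than the paper's:

```latex
% Prototypical constrained template and smoothed dual function:
\[
  \min_{x}\; f(x) \quad \text{s.t.} \quad Ax = b,
  \qquad
  d_{\gamma}(y) \;=\; \min_{x}\,\Bigl\{ f(x) + \langle y,\, Ax - b \rangle
      + \tfrac{\gamma}{2}\,\lVert x - x^{c} \rVert^{2} \Bigr\}.
\]
% With x*(y) the minimizer above, \nabla d_gamma(y) = A x*(y) - b is
% (\lVert A \rVert^2 / \gamma)-Lipschitz; different smoothing strategies
% and prox-centers x^c recover augmented-Lagrangian- and ADMM-type
% updates as special cases, as the abstract states.
```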

    First order algorithms in variational image processing

    Variational methods in imaging have developed into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha\,\mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account their specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splitting and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. (Comment: 60 pages, 33 figures)
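As a minimal example of a splitting method for this template, the forward-backward (proximal gradient) sketch below solves the model with a quadratic data term and an $\ell_1$ regularizer; real imaging applications would typically use total variation and a primal-dual solver such as Chambolle-Pock, so this is illustrative only.

```python
import numpy as np

def ista_variational(K, f, alpha, n_iter=300):
    """Forward-backward splitting for  D(Ku) + alpha*R(u)  with
    D(v) = 0.5||v - f||^2 and R = ||.||_1 (a minimal sketch)."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2  # 1/L with L = ||K||^2 (spectral norm)
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)            # gradient of the smooth data term
        v = u - step * grad                 # forward (gradient) step
        u = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0)  # backward (prox) step
    return u
```

The split exploits exactly the structure the abstract highlights: the smooth data term is handled by a gradient step, the nonsmooth regularizer by its proximal map.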