Semi-Global Exponential Stability of Augmented Primal-Dual Gradient Dynamics for Constrained Convex Optimization
Primal-dual gradient dynamics that find saddle points of a Lagrangian have
been widely employed for handling constrained optimization problems. Building
on existing methods, we extend the augmented primal-dual gradient dynamics
(Aug-PDGD) to incorporate general convex and nonlinear inequality constraints,
and we establish its semi-global exponential stability when the objective
function is strongly convex. We also provide an example of a strongly convex
quadratic program for which the Aug-PDGD fails to achieve global exponential
stability. Numerical simulation also suggests that the exponential convergence
rate could depend on the initial distance to the KKT point.
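As a concrete illustration of the (un-augmented) primal-dual gradient dynamics underlying this line of work, the sketch below applies a forward-Euler discretization to a small strongly convex QP with inequality constraints. The problem data are made up for illustration; the projected dual update is the standard device for keeping the multipliers nonnegative, and the paper's Aug-PDGD adds an augmentation term on top of this basic scheme.

```python
import numpy as np

# Illustrative (made-up) strongly convex QP with inequality constraints:
#   minimize (1/2) x'Qx + c'x   subject to   Ax <= b
rng = np.random.default_rng(0)
n, m = 5, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)              # strongly convex by construction
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1.0

x = np.zeros(n)
lam = np.zeros(m)                    # multipliers for Ax <= b
eta = 1e-3                           # forward-Euler step for the ODE

for _ in range(100_000):
    # Primal descent on the Lagrangian L(x, lam) = f(x) + lam'(Ax - b)
    x = x - eta * (Q @ x + c + A.T @ lam)
    # Projected dual ascent: keep the multipliers nonnegative
    lam = np.maximum(0.0, lam + eta * (A @ x - b))

print("max constraint residual:", np.max(A @ x - b))
```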
Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization
In this work, we revisit a classical incremental implementation of the
primal-descent dual-ascent gradient method used for the solution of equality
constrained optimization problems. We provide a short proof that establishes
the linear (exponential) convergence of the algorithm for smooth
strongly convex cost functions and study its relation to the non-incremental
implementation. We also study the effect of the augmented Lagrangian penalty
term on the performance of distributed optimization algorithms for the
minimization of aggregate cost functions over multi-agent networks.
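A minimal sketch of the incremental primal-descent dual-ascent iteration with an augmented Lagrangian penalty, on an assumed toy equality-constrained problem (the data, step size, and penalty weight below are illustrative, not from the paper). "Incremental" here means the dual step uses the freshly updated primal iterate rather than the previous one.

```python
import numpy as np

# Assumed toy equality-constrained problem (not from the paper):
#   minimize (1/2)||x - p||^2   subject to   Bx = d
rng = np.random.default_rng(1)
n, m = 6, 2
p = rng.standard_normal(n)
B = rng.standard_normal((m, n))
d = rng.standard_normal(m)

x, y = np.zeros(n), np.zeros(m)
alpha, rho = 0.02, 1.0               # step size and augmented penalty weight

for _ in range(20_000):
    # Primal descent on the augmented Lagrangian
    #   L_rho(x, y) = f(x) + y'(Bx - d) + (rho/2)||Bx - d||^2
    x = x - alpha * ((x - p) + B.T @ y + rho * B.T @ (B @ x - d))
    # Incremental implementation: the dual ascent uses the updated x
    y = y + alpha * (B @ x - d)

print("constraint violation ||Bx - d||:", np.linalg.norm(B @ x - d))
```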
A Primal-Dual Algorithmic Framework for Constrained Convex Minimization
We present a primal-dual algorithmic framework to obtain approximate
solutions to a prototypical constrained convex optimization problem, and
rigorously characterize how common structural assumptions affect the numerical
efficiency. Our main analysis technique provides a fresh perspective on
Nesterov's excessive gap technique in a structured fashion and unifies it with
smoothing and primal-dual methods. For instance, through the choices of a dual
smoothing strategy and a center point, our framework subsumes decomposition
algorithms, the augmented Lagrangian method, and the alternating direction
method of multipliers (ADMM) as special cases, and provides optimal
convergence rates on the primal objective residual as well as the primal
feasibility gap of the iterates for all of them.
Comment: This paper consists of 54 pages with 7 tables and 12 figures.
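Since the abstract names ADMM as one special case of the framework, here is a minimal self-contained sketch of scaled-form ADMM on an assumed box-constrained least-squares instance (all data and the penalty parameter rho are illustrative, not taken from the paper).

```python
import numpy as np

# Assumed box-constrained least-squares instance:
#   minimize (1/2)||Ax - b||^2   subject to   0 <= x <= 1,
# split as f(x) + g(z) with x = z and g the box indicator (scaled-form ADMM).
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)
rho = 1.0                            # illustrative penalty parameter

x, z, u = np.zeros(8), np.zeros(8), np.zeros(8)
x_solve = np.linalg.inv(A.T @ A + rho * np.eye(8))   # factor once
Atb = A.T @ b

for _ in range(300):
    x = x_solve @ (Atb + rho * (z - u))  # f-step: regularized least squares
    z = np.clip(x + u, 0.0, 1.0)         # g-step: projection onto the box
    u = u + x - z                        # scaled dual update

print("max |x - z|:", np.max(np.abs(x - z)))
```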
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
We propose a new first-order primal-dual optimization framework for a convex
optimization template with broad applications. Our optimization algorithms
feature optimal convergence guarantees under a variety of common structure
assumptions on the problem template. Our analysis relies on a novel combination
of three classic ideas applied to the primal-dual gap function: smoothing,
acceleration, and homotopy. The algorithms due to the new approach achieve the
best known convergence rate results, in particular when the template consists
of only non-smooth functions. We also outline a restart strategy for the
acceleration to significantly enhance the practical performance. We demonstrate
relations with the augmented Lagrangian method and show how to exploit
strongly convex objectives with rigorous convergence rate guarantees. We
provide numerical evidence on two examples and illustrate that the new
methods can outperform the state of the art, including the Chambolle-Pock
and alternating direction method of multipliers algorithms.
Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech.
Report, Oct. 2015 (last update Sept. 2016).
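For reference, a minimal sketch of the Chambolle-Pock primal-dual method that the abstract compares against, on an assumed toy instance minimize ||Kx - b||_1 + (lam/2)||x||^2 (the data, step sizes, and regularization weight are illustrative).

```python
import numpy as np

# Assumed toy instance: minimize ||Kx - b||_1 + (lam/2)||x||^2.
# f(w) = ||w - b||_1 has f*(y) = <b, y> + indicator{||y||_inf <= 1},
# so prox_{sigma f*}(v) = clip(v - sigma*b, -1, 1);
# g(x) = (lam/2)||x||^2 has prox_{tau g}(v) = v / (1 + tau*lam).
rng = np.random.default_rng(3)
K = rng.standard_normal((15, 10))
b = rng.standard_normal(15)
lam = 0.1

L = np.linalg.norm(K, 2)             # operator norm of K
tau = sigma = 0.9 / L                # step sizes with tau*sigma*L^2 < 1
theta = 1.0

x = np.zeros(10)
x_bar = x.copy()
y = np.zeros(15)

for _ in range(2000):
    y = np.clip(y + sigma * (K @ x_bar - b), -1.0, 1.0)   # dual prox step
    x_new = (x - tau * (K.T @ y)) / (1.0 + tau * lam)     # primal prox step
    x_bar = x_new + theta * (x_new - x)                   # extrapolation
    x = x_new

print("objective:", np.abs(K @ x - b).sum() + 0.5 * lam * x @ x)
```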