1,132 research outputs found
Semi-Global Exponential Stability of Augmented Primal-Dual Gradient Dynamics for Constrained Convex Optimization
Primal-dual gradient dynamics that find saddle points of a Lagrangian have
been widely employed for handling constrained optimization problems. Building
on existing methods, we extend the augmented primal-dual gradient dynamics
(Aug-PDGD) to incorporate general convex and nonlinear inequality constraints,
and we establish its semi-global exponential stability when the objective
function is strongly convex. We also provide an example of a strongly convex
quadratic program for which the Aug-PDGD fails to achieve global exponential
stability. Numerical simulations further suggest that the exponential
convergence rate may depend on the initial distance to the KKT point.
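As a rough illustration of the kind of dynamics studied here (a hedged sketch, not the paper's exact Aug-PDGD formulation), the snippet below discretizes a Rockafellar-style augmented primal-dual flow for min f(x) subject to g(x) <= 0 with forward Euler. The penalty rho, step size dt, and the quadratic test problem are illustrative assumptions.

    import numpy as np

    # Forward-Euler sketch of augmented primal-dual gradient dynamics for
    # min f(x) s.t. g(x) <= 0, using a classical Rockafellar-style augmented
    # Lagrangian for inequality constraints. Illustrative only; the paper's
    # Aug-PDGD may differ in detail.
    def aug_pdgd(grad_f, g, jac_g, x0, lam0, rho=1.0, dt=1e-2, steps=20000):
        x, lam = x0.astype(float), lam0.astype(float)
        for _ in range(steps):
            mult = np.maximum(0.0, lam + rho * g(x))   # augmented multiplier term
            x_dot = -(grad_f(x) + jac_g(x).T @ mult)   # primal gradient descent
            lam_dot = (mult - lam) / rho               # dual ascent; keeps lam >= 0 for dt <= rho
            x, lam = x + dt * x_dot, lam + dt * lam_dot
        return x, lam

    # Strongly convex quadratic f(x) = 0.5*||x - (2, 0)||^2 with x1 + x2 <= 1.
    grad_f = lambda x: x - np.array([2.0, 0.0])
    g = lambda x: np.array([x[0] + x[1] - 1.0])
    jac_g = lambda x: np.array([[1.0, 1.0]])
    x, lam = aug_pdgd(grad_f, g, jac_g, np.zeros(2), np.zeros(1))
    print(x, lam)   # approaches the KKT point x = (1.5, -0.5), lam = 0.5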
Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization
In this work, we revisit a classical incremental implementation of the
primal-descent dual-ascent gradient method used for the solution of
equality-constrained optimization problems. We provide a short proof that establishes
the linear (exponential) convergence of the algorithm for smooth
strongly-convex cost functions and study its relation to the non-incremental
implementation. We also study the effect of the augmented Lagrangian penalty
term on the performance of distributed optimization algorithms for the
minimization of aggregate cost functions over multi-agent networks.
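A minimal sketch of the incremental primal-descent dual-ascent iteration for min f(x) subject to Ax = b may help fix ideas; the step size and the quadratic test problem are assumptions for illustration. The incremental feature is that the dual step uses the freshly updated primal iterate, whereas the non-incremental variant would reuse the previous one.

    import numpy as np

    # Incremental (Gauss-Seidel-type) primal-descent dual-ascent iteration for
    # min f(x) s.t. Ax = b. For smooth strongly convex f this converges
    # linearly (i.e., exponentially) for a small enough step size alpha.
    def incremental_pdda(grad_f, A, b, x0, lam0, alpha=0.1, iters=5000):
        x, lam = x0.astype(float), lam0.astype(float)
        for _ in range(iters):
            x = x - alpha * (grad_f(x) + A.T @ lam)   # primal descent on the Lagrangian
            lam = lam + alpha * (A @ x - b)           # dual ascent with the updated x
        return x, lam

    # f(x) = 0.5*||x||^2 subject to x1 + x2 = 1; solution x = (0.5, 0.5), lam = -0.5.
    A, b = np.array([[1.0, 1.0]]), np.array([1.0])
    x, lam = incremental_pdda(lambda x: x, A, b, np.zeros(2), np.zeros(1))
    print(x, lam)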
Transformed Primal-Dual Methods For Nonlinear Saddle Point Systems
A transformed primal-dual (TPD) flow is developed for a class of nonlinear
smooth saddle point systems. The flow for the dual variable contains a Schur
complement which is strongly convex. Exponential stability of the saddle point
is obtained by establishing a strong Lyapunov property. Several TPD iterations
are derived by applying implicit Euler, explicit Euler, and implicit-explicit
methods to the TPD flow. When generalized to symmetric TPD iterations, the
linear convergence rate is preserved for convex-concave saddle point systems
under the assumption that the regularized functions are strongly convex. The
effectiveness of augmented Lagrangian methods can be explained as regularizing
the lack of strong convexity and preconditioning the Schur complement. The
algorithm and convergence analysis depend crucially on appropriate inner
products of the spaces for the primal and dual variables. A clear convergence
analysis with nonlinear inexact inner solvers is also developed.
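As a hedged sketch of the core idea (omitting the paper's inner products, preconditioning, and implicit/IMEX variants), the explicit-Euler iteration below applies a transformed primal-dual flow to an affinely constrained saddle point L(u, p) = f(u) + p^T (B u - b): the dual velocity is evaluated at the transformed point u - grad_u L(u, p), which brings the Schur complement B B^T into the dual dynamics. The step size and quadratic test problem are illustrative assumptions.

    import numpy as np

    # Explicit-Euler sketch of a transformed primal-dual (TPD) flow for the
    # saddle point of L(u, p) = f(u) + p^T (B u - b). The dual gradient is
    # taken at the transformed point u - grad_u L(u, p); for quadratic f this
    # makes the strongly convex Schur complement B B^T drive the dual flow.
    def tpd_flow(grad_f, B, b, u0, p0, dt=0.05, steps=4000):
        u, p = u0.astype(float), p0.astype(float)
        for _ in range(steps):
            gu = grad_f(u) + B.T @ p          # grad_u L(u, p)
            gp = B @ (u - gu) - b             # grad_p L at the transformed point
            u, p = u - dt * gu, p + dt * gp   # explicit Euler step
        return u, p

    # f(u) = 0.5*||u||^2 with constraint u1 - u2 = 1; saddle at u = (0.5, -0.5).
    B, b = np.array([[1.0, -1.0]]), np.array([1.0])
    u, p = tpd_flow(lambda u: u, B, b, np.zeros(2), np.zeros(1))
    print(u, p)   # approaches u = (0.5, -0.5), p = -0.5

For this quadratic example the dual dynamics reduce to p' = -B B^T p - b, which illustrates the abstract's remark that the flow for the dual variable contains a strongly convex Schur complement.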