We give a damped proximal augmented Lagrangian method (DPALM) for solving
problems with a weakly convex objective and convex linear/nonlinear
constraints. Instead of taking a full dual stepsize, DPALM adopts a damped dual
stepsize to ensure the boundedness of dual iterates. We show that DPALM can
produce a (near) $\varepsilon$-KKT point within $O(\varepsilon^{-2})$ outer iterations
if each DPALM subproblem is solved to a proper accuracy. In addition, we
establish the overall iteration complexity of DPALM when the objective is either a
regularized smooth function or in a regularized compositional form. For the
former case, DPALM achieves the complexity of
$O(\varepsilon^{-2.5})$ to produce an $\varepsilon$-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem. For the latter case, the complexity of DPALM is $O(\varepsilon^{-3})$ to produce a near $\varepsilon$-KKT point by applying APG to a Moreau-envelope smoothed
version of each subproblem. Our outer-iteration complexity and overall complexity either generalize the best existing results from unconstrained or linearly constrained problems to convex-constrained ones, or improve over the best-known results for problems with the same structure. Furthermore,
numerical experiments on linearly/quadratically constrained non-convex quadratic programs and linearly constrained robust nonlinear least squares are conducted to demonstrate the empirical efficiency of the proposed DPALM over several state-of-the-art methods.
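As a concrete illustration of the damped dual step, a minimal sketch of one outer iteration of a proximal augmented Lagrangian scheme with damping can be written as (the penalty $\beta$, proximal weight $\rho$, and damping factor $\theta_k$ below are generic placeholders, not the exact DPALM parameters)
\[
x^{k+1} \approx \operatorname*{arg\,min}_{x}\; \mathcal{L}_\beta(x, z^k) + \tfrac{\rho}{2}\|x - x^k\|^2,
\qquad
z^{k+1} = z^k + \theta_k\,\beta\, c(x^{k+1}), \quad \theta_k\in(0,1),
\]
where $\mathcal{L}_\beta(x,z) = f(x) + \langle z, c(x)\rangle + \tfrac{\beta}{2}\|c(x)\|^2$ is the augmented Lagrangian for equality constraints $c(x)=0$ (inequality constraints would additionally project the dual update), and the damping factor $\theta_k<1$, rather than a full step $\theta_k=1$, is what keeps the dual iterates $\{z^k\}$ bounded. The precise DPALM updates and subproblem accuracy requirements are specified in the paper.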