3 research outputs found
A Partially Inexact Alternating Direction Method of Multipliers and its Iteration-Complexity Analysis
This paper proposes a partially inexact alternating direction method of
multipliers for computing an approximate solution of a linearly constrained
convex optimization problem. This method allows its first subproblem to be solved
inexactly using a relative approximate criterion, whereas a proximal term is
added to its second subproblem in order to simplify it. A stepsize parameter is
included in the updating rule of the Lagrangian multiplier to improve its
computational performance.
Pointwise and ergodic iteration-complexity bounds for the proposed method
are established. To the best of our knowledge, this is the first time that
complexity results for an inexact ADMM with a relative error criterion have
been analyzed.
Some preliminary numerical experiments are reported to illustrate the
advantages of the new method.
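The three ingredients the abstract names (an inexactly solved first subproblem with a relative acceptance test, a proximal term in the second subproblem, and a stepsize in the multiplier update) can be sketched on a toy problem. This is an illustrative instance under assumed parameters (rho, tau, theta, sigma) and a hypothetical relative stopping test, not the paper's exact method or error criterion:

```python
import numpy as np

def partially_inexact_admm(a, b, rho=1.0, tau=0.5, theta=1.5,
                           sigma=0.1, iters=200):
    """Toy sketch of a partially inexact ADMM on
        min 0.5||x - a||^2 + 0.5||z - b||^2   s.t.  x - z = 0,
    whose solution is x = z = (a + b)/2.
    - The x-subproblem is solved inexactly by gradient steps, stopped by
      a (hypothetical) relative test with tolerance sigma.
    - The z-subproblem carries a proximal term (tau/2)||z - z_k||^2 and
      is then available in closed form.
    - The multiplier update uses a stepsize theta."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    x = np.zeros_like(a); z = np.zeros_like(a); y = np.zeros_like(a)
    for _ in range(iters):
        # inexact x-step: gradient descent on the augmented Lagrangian in x,
        # accepted once the gradient is small relative to the residual x - z
        for _ in range(50):
            grad = (x - a) + y + rho * (x - z)
            if np.linalg.norm(grad) <= sigma * (np.linalg.norm(x - z) + 1e-12):
                break
            x = x - grad / (1.0 + rho)   # step = inverse Lipschitz constant
        # z-step with proximal term (closed form for this quadratic)
        z = (b + y + rho * x + tau * z) / (1.0 + rho + tau)
        # Lagrange multiplier update with stepsize theta
        y = y + theta * rho * (x - z)
    return x, z
```

The stepsize theta > 1 in the dual update is what the abstract refers to as improving computational performance over the classical choice theta = 1.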
An inexact version of the symmetric proximal ADMM for solving separable convex optimization
In this paper, we propose and analyze an inexact version of the symmetric
proximal alternating direction method of multipliers (ADMM) for solving
linearly constrained optimization problems. Basically, the method allows its
first subproblem to be solved inexactly in such a way that a relative
approximation criterion is satisfied. In terms of the iteration number, we
establish global pointwise and ergodic
convergence rates of the method for a domain of the acceleration parameters,
which is consistent with the largest known one in the exact case. Since the
symmetric proximal ADMM can be seen as a class of ADMM variants, the new
algorithm as well as its convergence rates generalize, in particular, many
others in the literature. Numerical experiments illustrating the practical
advantages of the method are reported. To the best of our knowledge, this work
is the first one to study an inexact version of the symmetric proximal ADMM.
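The "symmetric" structure, with one dual update after each primal step governed by acceleration parameters, can be sketched on the same kind of toy problem. The parameter values and the proximal weight below are assumptions chosen so the iteration converges on this instance; they are not the paper's domain of acceleration parameters:

```python
import numpy as np

def symmetric_proximal_admm(a, b, rho=1.0, tau=0.5,
                            alpha=0.5, beta=1.0, iters=200):
    """Toy sketch of a symmetric proximal ADMM on
        min 0.5||x - a||^2 + 0.5||z - b||^2   s.t.  x - z = 0,
    whose solution is x = z = (a + b)/2.  'Symmetric' refers to the two
    dual updates, one after each primal step, with acceleration
    parameters alpha and beta; the z-step carries a proximal term
    (tau/2)||z - z_k||^2."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    x = np.zeros_like(a); z = np.zeros_like(a); y = np.zeros_like(a)
    for _ in range(iters):
        # x-step: minimize the augmented Lagrangian in x (closed form here)
        x = (a - y + rho * z) / (1.0 + rho)
        # first (intermediate) dual update, parameter alpha
        y = y + alpha * rho * (x - z)
        # proximal z-step (closed form for this quadratic)
        z = (b + y + rho * x + tau * z) / (1.0 + rho + tau)
        # second dual update, parameter beta
        y = y + beta * rho * (x - z)
    return x, z
```

Setting alpha = 0 recovers a proximal variant of the classical ADMM, which is the sense in which the symmetric scheme generalizes many ADMM variants.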
Fast and Stable Nonconvex Constrained Distributed Optimization: The ELLADA Algorithm
Distributed optimization, where the computations are performed in a localized
and coordinated manner using multiple agents, is a promising approach for
solving large-scale optimization problems, e.g., those arising in model
predictive control (MPC) of large-scale plants. However, a distributed
optimization algorithm that is computationally efficient, globally convergent,
and amenable to nonconvex constraints and general inter-subsystem interactions
remains an open problem. In this paper, we combine three important
modifications to the classical alternating direction method of multipliers
(ADMM) for distributed optimization. Specifically, (i) an extra-layer
architecture is adopted to accommodate nonconvexity and handle inequality
constraints, (ii) equality-constrained nonlinear programming (NLP) problems are
allowed to be solved approximately, and (iii) a modified Anderson acceleration
is employed to reduce the number of iterations. Theoretical convergence
towards stationary solutions and the computational complexity of the proposed
algorithm, named ELLADA, are established. Its application to distributed
nonlinear MPC is also described and illustrated through a benchmark process
system.

Comment: 18 pages, 5 figures
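Of the three modifications, Anderson acceleration is the most self-contained to illustrate. ELLADA uses a modified, safeguarded variant; the sketch below shows only the basic mechanism of plain (type-II) Anderson acceleration, extrapolating a fixed-point iteration x = g(x) from the last m iterates, with the memory size m as an assumed parameter:

```python
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-10):
    """Plain Anderson acceleration for a fixed point x = g(x).
    Keeps histories of residuals f = g(x) - x and of g-values, solves a
    small least-squares problem over residual differences, and combines
    the recent g-values accordingly.  Illustrative sketch only."""
    x = np.asarray(x0, float)
    F, G = [], []                       # residual and g-value histories
    for _ in range(iters):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            break
        F.append(f); G.append(gx)
        if len(F) > m + 1:              # keep at most m differences
            F.pop(0); G.pop(0)
        if len(F) == 1:
            x = gx                      # plain fixed-point step to start
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            # least-squares coefficients for the extrapolation
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
    return x
```

In an ADMM setting, g would be one outer ADMM sweep regarded as a map on the iterate; the acceleration reduces the number of such sweeps without changing what each sweep computes.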