On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
In decentralized consensus optimization, a connected network of agents
collaboratively minimizes the sum of their local objective functions over a
common decision variable, with information exchange restricted to
neighboring agents. To this end, one can first reformulate the problem and
then apply the alternating direction method of multipliers (ADMM). The
method alternates iterative computation at the individual agents with
information exchange between neighbors. This approach has been observed to
converge quickly and is deemed powerful. This paper establishes its linear
convergence rate for the decentralized consensus optimization problem with
strongly convex local objective functions. The theoretical convergence rate
is given explicitly in terms of the network topology, the properties of the
local objective functions, and the algorithm parameter. This result is not
only a performance guarantee but also a guideline toward accelerating the
convergence of the ADMM. Comment: 11 figures, IEEE Transactions on Signal Processing, 2014
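To make the iteration structure concrete, here is a minimal Python sketch of consensus ADMM with quadratic local objectives. It uses the global-consensus form (with an averaging step, as in Boyd et al.) rather than the paper's purely neighbor-based decentralized variant, and the names (consensus_admm, A_list, rho) are illustrative assumptions, not from the paper.

```python
# Minimal global-consensus ADMM sketch with quadratic local objectives
# f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumed setup, for illustration only).
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=100):
    n, N = A_list[0].shape[1], len(A_list)
    x = [np.zeros(n) for _ in range(N)]   # local primal variables
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # consensus variable
    # Each x-update solves (A_i^T A_i + rho I) x = A_i^T b_i + rho (z - u_i).
    lhs = [a.T @ a + rho * np.eye(n) for a in A_list]
    rhs0 = [a.T @ b for a, b in zip(A_list, b_list)]
    for _ in range(iters):
        for i in range(N):
            x[i] = np.linalg.solve(lhs[i], rhs0[i] + rho * (z - u[i]))
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)  # consensus step
        for i in range(N):
            u[i] += x[i] - z                                  # dual update
    return z

# Toy usage: two agents jointly solving a least-squares problem.
rng = np.random.default_rng(0)
A = [rng.standard_normal((20, 5)) for _ in range(2)]
b = [Ai @ np.ones(5) for Ai in A]
print(consensus_admm(A, b))  # should approach the all-ones vector
```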
Quantized Consensus ADMM for Multi-Agent Distributed Optimization
Multi-agent distributed optimization over a network minimizes a global
objective formed by a sum of local convex functions using only local
computation and communication. We develop and analyze a quantized distributed
algorithm based on the alternating direction method of multipliers (ADMM) when
inter-agent communications are subject to finite capacity and other practical
constraints. While existing quantized ADMM approaches only work for quadratic
local objectives, the proposed algorithm can deal with more general objective
functions (possibly non-smooth) including the LASSO. Under certain convexity
assumptions, our algorithm converges to a consensus within
$\log_{1+\eta}\Omega$ iterations, where $\eta > 0$ depends on the local
objectives and the network topology, and $\Omega$ is a polynomial determined by
the quantization resolution, the distance between initial and optimal variable
values, the local objective functions, and the network topology. A tight upper
bound on the consensus error is also obtained, which does not depend on the
size of the network. Comment: 30 pages, 4 figures; to be submitted to IEEE Trans. Signal
Processing. arXiv admin note: text overlap with arXiv:1307.5561 by other
authors
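As a rough illustration of where quantization can enter such a scheme, the sketch below runs consensus ADMM on a LASSO-type objective and passes each transmitted message through a uniform quantizer. This is a simplified stand-in under assumed names (quantized_consensus_lasso, delta), not the paper's exact algorithm or its convergence-guaranteed quantizer placement.

```python
# Sketch: consensus ADMM for min_x sum_i 0.5||A_i x - b_i||^2 + lam*||x||_1,
# with each transmitted message quantized to a uniform grid of step delta
# to model finite-capacity links (illustrative assumption, not the paper's design).
import numpy as np

def quantize(v, delta):
    # Uniform mid-tread quantizer: rounds each entry to the nearest grid point.
    return delta * np.round(v / delta)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def quantized_consensus_lasso(A_list, b_list, lam=0.1, rho=1.0,
                              delta=0.05, iters=200):
    n, N = A_list[0].shape[1], len(A_list)
    x = [np.zeros(n) for _ in range(N)]
    u = [np.zeros(n) for _ in range(N)]
    z = np.zeros(n)
    lhs = [a.T @ a + rho * np.eye(n) for a in A_list]
    rhs0 = [a.T @ b for a, b in zip(A_list, b_list)]
    for _ in range(iters):
        for i in range(N):
            x[i] = np.linalg.solve(lhs[i], rhs0[i] + rho * (z - u[i]))
        msgs = [quantize(x[i] + u[i], delta) for i in range(N)]  # finite-rate links
        z = soft_threshold(np.mean(msgs, axis=0), lam / (N * rho))  # l1 prox
        for i in range(N):
            u[i] += x[i] - z
    return z

# Example: two agents recovering a sparse vector.
rng = np.random.default_rng(1)
A = [rng.standard_normal((30, 10)) for _ in range(2)]
x_true = np.zeros(10); x_true[:3] = 1.0
b = [Ai @ x_true for Ai in A]
print(quantized_consensus_lasso(A, b))
```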
On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
Nonconvex and structured optimization problems arise in many engineering
applications that demand scalable and distributed solution methods. The study
of the convergence properties of these methods is in general difficult due to
the nonconvexity of the problem. In this paper, two distributed solution
methods that combine the fast convergence properties of augmented
Lagrangian-based methods with the separability properties of alternating
optimization are investigated. The first method is adapted from the classic
quadratic penalty function method and is called the Alternating Direction
Penalty Method (ADPM). Unlike the original quadratic penalty function method,
which adopts single-step optimizations, ADPM uses alternating optimization,
which in turn makes it scalable. The second method is the well-known
Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for
nonconvex problems asymptotically converges to a primal feasible point under
mild conditions, and an additional condition ensuring that it asymptotically
reaches the standard first-order necessary conditions for local optimality is
introduced. In the case of the ADMM, novel sufficient conditions under which
the algorithm asymptotically reaches the standard first-order necessary
conditions are established. Based on this, the complete convergence of ADMM
for a class of low-dimensional problems is characterized. Finally, the results
are illustrated by applying ADPM and ADMM to a nonconvex localization problem
in wireless sensor networks. Comment: 13 pages, 6 figures
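The ADPM skeleton, alternating penalized block minimizations while the penalty parameter grows, can be sketched on a toy one-dimensional nonconvex problem. The objective, penalty schedule, and names below are invented for illustration and are not the paper's localization formulation.

```python
# Toy ADPM sketch for min f(x) + g(z) s.t. x = z, with a nonconvex
# double-well f(x) = (x^2 - 1)^2 and quadratic g(z) = (z - 0.8)^2
# (made-up problem). The penalty rho/2*(x - z)^2 is minimized
# alternately in x and z, with rho increased between sweeps, instead
# of a joint single-step minimization.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: (x**2 - 1.0)**2
g = lambda z: (z - 0.8)**2

def adpm(rho0=1.0, growth=1.5, outer=40):
    x, z, rho = 0.5, 0.0, rho0
    for _ in range(outer):
        # x-block: minimize f(x) + rho/2*(x - z)^2 (nonconvex, solved numerically)
        x = minimize_scalar(lambda v: f(v) + 0.5 * rho * (v - z)**2,
                            bounds=(-3.0, 3.0), method='bounded').x
        # z-block: minimize g(z) + rho/2*(x - z)^2 (closed-form quadratic)
        z = (2.0 * 0.8 + rho * x) / (2.0 + rho)
        rho *= growth  # drive the penalty up so that x - z -> 0
    return x, z

print(adpm())  # x and z approach a common (primal feasible) point
```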