On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
Nonconvex and structured optimization problems arise in many engineering
applications that demand scalable and distributed solution methods. The study
of the convergence properties of these methods is in general difficult due to
the nonconvexity of the problem. In this paper, two distributed solution
methods that combine the fast convergence properties of augmented
Lagrangian-based methods with the separability properties of alternating
optimization are investigated. The first method is adapted from the classic
quadratic penalty function method and is called the Alternating Direction
Penalty Method (ADPM). Unlike the original quadratic penalty function method,
in which single-step optimizations are adopted, ADPM uses an alternating
optimization, which in turn makes it scalable. The second method is the
well-known Alternating Direction Method of Multipliers (ADMM). It is shown that
ADPM for nonconvex problems asymptotically converges to a primal feasible point
under mild conditions, and an additional condition ensuring that it
asymptotically reaches the standard first-order necessary conditions for local
optimality is introduced. In the case of ADMM, novel sufficient conditions
under which the algorithm asymptotically reaches the standard first-order
necessary conditions are established. Based on this, the complete convergence
of ADMM for a class of low-dimensional problems is characterized. Finally, the
results are illustrated by applying ADPM and ADMM to a nonconvex localization
problem in wireless sensor networks.
Comment: 13 pages, 6 figures
Distributed Basis Pursuit
We propose a distributed algorithm for solving the optimization problem Basis
Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear
system Ax = b and is used, for example, in compressed sensing for
reconstruction. Our algorithm solves BP on a distributed platform such as a
sensor network, and is designed to minimize the communication between nodes.
The algorithm only requires the network to be connected, has no notion of a
central processing node, and no node has access to the entire matrix A at any
time. We consider two scenarios in which either the columns or the rows of A
are distributed among the compute nodes. Our algorithm, named D-ADMM, is a
decentralized implementation of the alternating direction method of
multipliers. We show through numerical simulations that our algorithm requires
considerably less communication between the nodes than state-of-the-art
algorithms.
Comment: Preprint of the journal version of the paper; IEEE Transactions on Signal Processing, Vol. 60, Issue 4, April, 201
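The ADMM splitting that D-ADMM decentralizes can be sketched in centralized form: one variable carries the equality constraint Ax = b, the other carries the L1 norm, and a scaled dual variable couples them. This is a minimal illustration of the splitting, not the distributed algorithm itself; the problem data below are synthetic.

```python
import numpy as np

def basis_pursuit_admm(A, b, rho=1.0, iters=3000):
    """Centralized ADMM sketch for min ||x||_1 s.t. Ax = b.

    Splitting: x carries the equality constraint, z carries the L1 norm,
    and u is the scaled dual variable.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Cache the projector onto the affine set {x : Ax = b}.
    pinv = A.T @ np.linalg.inv(A @ A.T)
    for _ in range(iters):
        v = z - u
        x = v - pinv @ (A @ v - b)                               # project onto Ax = b
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / rho, 0.0)  # soft-threshold
        u += x - z                                               # scaled dual update
    return x, z

# Synthetic underdetermined system with a sparse generating vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 10))
x0 = np.zeros(10)
x0[[2, 7]] = [1.5, -2.0]
b = A @ x0
x, z = basis_pursuit_admm(A, b)
# x is feasible (Ax = b) by construction of the projection,
# and x and z agree at convergence.
```

In D-ADMM this projection and soft-thresholding are replaced by local subproblems at each node, with neighbors exchanging only their current iterates over the network.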