Multi-agent distributed consensus optimization problems arise in many signal
processing applications. Recently, the alternating direction method of
multipliers (ADMM) has been used to solve this family of problems. ADMM-based
distributed optimization methods have been shown to converge faster than
classic methods based on the consensus subgradient, but they can be
computationally expensive, especially for problems with complicated structures
or large dimensions. In this paper, we propose low-complexity algorithms that
can reduce the overall computational cost of consensus ADMM by an order of
magnitude for certain large-scale problems. Central to the proposed algorithms
is the use of an inexact step for each ADMM update, which enables the agents to
perform cheap computation at each iteration. Our convergence analyses show that
the proposed methods converge under standard convexity assumptions. Numerical
results show that the proposed algorithms offer considerably lower
computational complexity than the standard ADMM-based distributed optimization
methods.
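
As a concrete illustration of the inexact-step idea, the sketch below runs a decentralized consensus ADMM on a toy least-squares problem, replacing each agent's exact local subproblem solve with a single linearized (proximal-gradient) step that admits a closed form. The problem data, the ring topology, the penalty parameter c, and the proximal weights beta are all illustrative assumptions, not the paper's experimental setup or notation; consult the paper for the exact algorithms and their step-size and convergence conditions.

```python
import numpy as np

# Illustrative setup: N agents, each holding a private least-squares term
# f_i(x) = 0.5 * ||A_i x - b_i||^2, jointly solving min_x sum_i f_i(x)
# over an undirected connected graph (a ring, chosen here for simplicity).
rng = np.random.default_rng(0)
N, m, n = 5, 20, 10
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]
nbrs = [[(i - 1) % N, (i + 1) % N] for i in range(N)]  # neighbors of agent i

c = 1.0  # ADMM penalty parameter (illustrative value)
# Proximal weights: one simple choice is the local Lipschitz constant of grad f_i.
beta = [np.linalg.norm(Ai.T @ Ai, 2) for Ai in A]
x = [np.zeros(n) for _ in range(N)]  # local primal variables
p = [np.zeros(n) for _ in range(N)]  # local dual variables

def grad_f(i, xi):
    return A[i].T @ (A[i] @ xi - b[i])

for k in range(500):
    x_old = [xi.copy() for xi in x]
    for i in range(N):
        # Inexact primal update: instead of minimizing the local ADMM
        # subproblem exactly, take one linearized (proximal-gradient) step,
        # which reduces to a cheap closed-form expression.
        d_i = len(nbrs[i])
        s = sum(x_old[i] + x_old[j] for j in nbrs[i])
        x[i] = (beta[i] * x_old[i] - grad_f(i, x_old[i]) - p[i] + c * s) \
               / (beta[i] + 2.0 * c * d_i)
    for i in range(N):
        # Dual ascent on the consensus constraints with neighbors.
        p[i] = p[i] + c * sum(x[i] - x[j] for j in nbrs[i])

# Sanity check against the centralized least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(max(np.linalg.norm(xi - x_star) for xi in x))
```

The closed-form primal update is what makes each iteration cheap: every agent needs only one local gradient evaluation and an average over its neighbors, with no inner solver, which is the source of the per-iteration savings the abstract describes.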