ADMM-Tracking Gradient for Distributed Optimization over Asynchronous and Unreliable Networks
In this paper, we propose (i) a novel distributed algorithm for consensus
optimization over networks and (ii) a robust extension tailored to deal with
asynchronous agents and packet losses. The key idea is to achieve dynamic
consensus on (i) the agents' average and (ii) the global descent direction by
iteratively solving an online auxiliary optimization problem through a
distributed implementation of the Alternating Direction Method of Multipliers
(ADMM). Such a mechanism is suitably interlaced with a local proportional
action steering each agent estimate to the solution of the original consensus
optimization problem. First, in the case of ideal networks, by using tools from
system theory, we prove the linear convergence of the scheme with strongly
convex costs. Then, by exploiting averaging theory, we extend this first
result to prove that the robust extension of our method preserves linear
convergence in the case of asynchronous agents and packet losses. Further, by
using the notion of Input-to-State Stability, we also guarantee the robustness
of the schemes with respect to additional, generic errors affecting the agents'
updates. Finally, numerical simulations confirm our theoretical findings
and show that the proposed methods outperform existing state-of-the-art
distributed methods for consensus optimization.
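The abstract above combines dynamic consensus on the agents' average with dynamic consensus on the global descent direction. The paper's mechanism is ADMM-based; the sketch below instead uses a generic gradient-tracking iteration to illustrate the same dynamic-consensus idea under simplifying assumptions (fixed complete graph, uniform doubly stochastic weights, scalar decision variables). All names here are hypothetical, not from the paper:

```python
import numpy as np

# Generic gradient tracking over a fixed network: each agent i keeps an
# estimate x[i] and a tracker y[i] of the average descent direction.
# This is NOT the ADMM-based scheme of the paper, only an illustration
# of dynamic consensus on gradients.

def gradient_tracking(grads, W, x0, alpha=0.1, iters=200):
    """grads: per-agent gradient functions; W: doubly stochastic weights."""
    n = len(grads)
    x = np.array(x0, dtype=float)                      # agent estimates
    y = np.array([grads[i](x[i]) for i in range(n)])   # tracked directions
    for _ in range(iters):
        x_new = W @ x - alpha * y                      # consensus + descent
        g_old = np.array([grads[i](x[i]) for i in range(n)])
        g_new = np.array([grads[i](x_new[i]) for i in range(n)])
        y = W @ y + g_new - g_old                      # dynamic consensus on gradients
        x = x_new
    return x

# Example: agents minimize sum_i (x - b_i)^2; the optimum is the mean of b_i.
b = [1.0, 2.0, 3.0]
grads = [lambda z, bi=bi: 2.0 * (z - bi) for bi in b]
W = np.full((3, 3), 1.0 / 3.0)                         # complete graph, uniform weights
x_star = gradient_tracking(grads, W, x0=[0.0, 0.0, 0.0])
```

The tracker update preserves the running average of the local gradients, which is what lets every agent descend along an estimate of the global direction rather than only its own.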
Asynchronous Distributed Averaging on Communication Networks
Distributed algorithms for averaging have attracted interest in the control and sensing literature. However, previous works have not addressed some practical concerns that arise in actual implementations on packet-switched communication networks such as the Internet. In this paper, we present several implementable algorithms that are robust to asynchronism and dynamic topology changes. The algorithms are completely distributed and do not require any global coordination. In addition, they can be proven to converge under very general asynchronous timing assumptions. Our results are verified by both simulation and experiments on PlanetLab, a real-world TCP/IP network. We also present some extensions that are likely to be useful in applications.
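The pairwise averaging primitive at the heart of such algorithms can be sketched as a randomized gossip loop. This is a hypothetical minimal sketch with edge activations simulated locally, not over TCP/IP, and it omits the packet-loss and dynamic-topology handling that the paper addresses:

```python
import random

# Pairwise gossip averaging: at each tick one random edge (i, j) activates
# and its two endpoints replace their values with the pairwise mean.
# Each update preserves the sum, so the values contract toward the average.

def gossip_average(values, edges, steps=5000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        m = (x[i] + x[j]) / 2.0   # purely local, pairwise update
        x[i] = x[j] = m
    return x

# Four nodes on a cycle; the average of the initial values is 4.0.
x = gossip_average([10.0, 0.0, 4.0, 2.0],
                   edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
```

Because no step needs global coordination, the same loop tolerates asynchronous activations, which is the property the paper's implementable algorithms build on.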
A Partition-Based Implementation of the Relaxed ADMM for Distributed Convex Optimization over Lossy Networks
In this paper we propose a distributed implementation of the relaxed
Alternating Direction Method of Multipliers algorithm (R-ADMM) for optimization
of a separable convex cost function whose terms are stored by a set of
interacting agents, one term per agent. Specifically, the local cost stored by
each node is in general a function of both the state of the node and the states
of its neighbors, a framework that we refer to as 'partition-based'
optimization. This framework offers great flexibility and can be adapted to
a large number of different applications. We show that the partition-based
R-ADMM algorithm we introduce is linked to the relaxed Peaceman-Rachford
Splitting (R-PRS) operator, which was historically introduced in the
literature to find the zeros of the sum of functions. Interestingly, by using
nonexpansive operator theory, the proposed algorithm is shown to be provably
robust against random packet losses that might occur in the communication
between neighboring nodes. Finally, the effectiveness of the proposed algorithm
is confirmed by a set of compelling numerical simulations run over random
geometric graphs subject to i.i.d. random packet losses.
Comment: Full version of the paper to be presented at the Conference on Decision
and Control (CDC) 201
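The relaxed Peaceman-Rachford splitting mentioned in the abstract can be illustrated, in the simplest centralized scalar case, as a fixed-point iteration on proximal operators. This is a hypothetical sketch with quadratic terms; it does not reproduce the paper's partition-based, loss-robust distributed version:

```python
# Relaxed Peaceman-Rachford splitting for min f(x) + g(x) via proximal
# operators: z+ = z + 2*theta*(prox_g(2*prox_f(z) - z) - prox_f(z)).
# theta = 1/2 recovers Douglas-Rachford; theta = 1 the unrelaxed PRS.

def prox_quadratic(v, center, gamma):
    """Prox of gamma * 0.5 * (x - center)^2 evaluated at v."""
    return (v + gamma * center) / (1.0 + gamma)

def relaxed_prs(a, b, gamma=1.0, theta=0.5, iters=100, z=0.0):
    for _ in range(iters):
        x = prox_quadratic(z, a, gamma)            # prox of f
        y = prox_quadratic(2.0 * x - z, b, gamma)  # prox of g at reflected point
        z = z + 2.0 * theta * (y - x)              # relaxed fixed-point update
    return prox_quadratic(z, a, gamma)             # recover the minimizer

# f(x) = 0.5*(x - 0)^2, g(x) = 0.5*(x - 4)^2; minimizer of f + g is 2.
x_star = relaxed_prs(a=0.0, b=4.0)
```

Convergence of this iteration for convex terms rests on the nonexpansiveness of the reflected proximal operators, the same operator-theoretic property the paper exploits to prove robustness against random packet losses.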