Asynchronous Distributed Optimization over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence
In this work we focus on the problem of minimizing the sum of convex cost
functions in a distributed fashion over a peer-to-peer network. In particular,
we are interested in the case in which communications between nodes are prone
to failures and the agents are not synchronized among themselves. We address
the problem by proposing a modified version of the relaxed ADMM, which
corresponds to the Peaceman-Rachford splitting method applied to the dual.
results from operator theory, we are able to prove the almost sure convergence
of the proposed algorithm under general assumptions on the distribution of
communication loss and node activation events. By further assuming the cost
functions to be strongly convex, we prove the linear convergence of the
algorithm in mean to a neighborhood of the optimal solution, and provide an
upper bound on the convergence rate. Finally, we present numerical results
testing the proposed method in different scenarios.
Comment: To appear in IEEE Transactions on Automatic Control
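As a loose illustration of this scheme (a sketch of the general mechanism, not the paper's exact algorithm or analysis), the Python snippet below simulates a relaxed-ADMM consensus update on scalar quadratic costs over a complete graph, with Bernoulli node activations and packet losses; the costs, topology, and the parameters rho, alpha, p_wake, and p_loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n agents, each with a scalar quadratic cost
#   f_i(x) = 0.5 * (x - a_i)^2, so the global minimizer is mean(a).
n = 5
a = rng.normal(size=n)
neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}  # complete graph

rho = 1.0      # ADMM penalty parameter (illustrative)
alpha = 0.6    # relaxation: 0.5 recovers classic ADMM, values near 1 are Peaceman-Rachford
p_wake = 0.7   # probability a node activates in a given round (asynchrony)
p_loss = 0.2   # probability a transmitted packet is lost (lossy network)

# z[i][j]: the auxiliary (dual-side) variable node i keeps for neighbor j.
z = {i: {j: 0.0 for j in neighbors[i]} for i in range(n)}
x = np.zeros(n)

for _ in range(500):
    msgs = []
    for i in range(n):
        if rng.random() > p_wake:          # node i stays idle this round
            continue
        d = len(neighbors[i])
        # Local step: argmin_x f_i(x) - x * sum_j z[i][j] + 0.5 * rho * d * x^2
        x[i] = (a[i] + sum(z[i].values())) / (1.0 + rho * d)
        for j in neighbors[i]:
            msgs.append((i, j, -z[i][j] + 2.0 * rho * x[i]))
    for i, j, q in msgs:
        if rng.random() < p_loss:          # lost packet: z[j][i] is simply not updated
            continue
        z[j][i] = (1.0 - alpha) * z[j][i] + alpha * q

print("agent estimates:", np.round(x, 3))
print("true minimizer: ", round(a.mean(), 3))
```

In this toy run the dropped updates only delay progress: the untouched z variables are reused at the next activation, which is the qualitative behavior the almost-sure convergence result formalizes.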
Nested Distributed Gradient Methods with Adaptive Quantized Communication
In this paper, we consider minimizing a sum of local convex objective
functions in a distributed setting, where communication can be costly. We
propose and analyze a class of nested distributed gradient methods with
adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing
multiple quantized communication steps on the rate of convergence and on the
size of the neighborhood of convergence, and prove R-linear convergence to the
exact solution with an increasing number of consensus steps and adaptive
quantization. We test the performance of the method, as well as some practical
variants, on quadratic functions, and show the effects of multiple quantized
communication steps in terms of iterations/gradient evaluations, communication,
and cost.
Comment: 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:1709.0299
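The nested structure is easy to sketch: each outer iteration takes a local gradient step and then performs several rounds of averaging over quantized messages, with the number of rounds growing and the quantization grid shrinking across iterations. The toy Python simulation below follows that pattern on a ring of scalar quadratics; the mixing matrix, the schedules t_k and delta_k, and the step size are illustrative assumptions, not the tuned choices of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: n agents on a ring, local quadratics f_i(x) = 0.5 * (x - a_i)^2.
n = 6
a = rng.normal(size=n)
W = np.zeros((n, n))                  # doubly stochastic mixing matrix for the ring
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def quantize(v, delta):
    # Uniform quantizer with step delta: this is what the agents actually transmit.
    return delta * np.round(v / delta)

step = 0.3                            # local gradient step size (illustrative)
x = np.zeros(n)
for k in range(1, 50):
    y = x - step * (x - a)            # local gradient step on each f_i
    t_k = k                           # "+": number of consensus rounds grows with k
    delta_k = max(0.5 ** k, 1e-12)    # "Q": quantization grid refines with k
    for _ in range(t_k):
        y = W @ quantize(y, delta_k)  # each round exchanges only quantized values
    x = y

print("agent estimates:", np.round(x, 4))
print("true minimizer: ", round(a.mean(), 4))
```

With a fixed quantization step the per-round error of at most delta/2 per entry would leave a persistent neighborhood; letting delta_k shrink while t_k grows drives the accumulated error t_k * delta_k to zero, which is the intuition behind the exact-convergence claim.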
Newton-Raphson Consensus for Distributed Convex Optimization
We address the problem of distributed unconstrained convex optimization
under separability assumptions, i.e., the framework where each agent of a
network is endowed with a local private multidimensional convex cost, is
subject to communication constraints, and wants to collaborate to compute the
minimizer of the sum of the local costs. We propose a design methodology that
combines average consensus algorithms and separation of time-scales ideas. This
strategy is proved, under suitable hypotheses, to be globally convergent to the
true minimizer. Intuitively, the procedure lets the agents distributedly
compute and sequentially update an approximated Newton-Raphson direction by
means of suitable average consensus ratios. We show with numerical simulations
that the speed of convergence of this strategy is comparable with alternative
optimization strategies such as the Alternating Direction Method of
Multipliers. Finally, we propose some alternative strategies that trade off
communication and computational requirements against convergence speed.
Comment: 18 pages, preprint with proofs
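A minimal sketch of the two-time-scale idea, under the assumption of scalar quadratic local costs on a ring (not the paper's general setting): each agent runs fast average consensus on the quantities f_i''(x_i) * x_i - f_i'(x_i) and f_i''(x_i), and slowly steers its estimate toward the resulting Newton-Raphson ratio. The step size eps, the mixing matrix, and the cost parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: n agents on a ring with local costs f_i(x) = 0.5 * c_i * (x - a_i)^2,
# so f_i'(x) = c_i * (x - a_i) and f_i''(x) = c_i. The global minimizer is the
# weighted mean sum(c * a) / sum(c), which plain average consensus would miss.
n = 5
a = rng.normal(size=n)
c = rng.uniform(1.0, 3.0, size=n)
W = np.zeros((n, n))                  # doubly stochastic mixing matrix for the ring
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

eps = 0.05                            # slow time scale of the x-update (illustrative)
x = np.zeros(n)
# Consensus states tracking the averages of f''(x)*x - f'(x) and of f''(x).
num_old = c * x - c * (x - a)         # reduces to c * a for these quadratics
den_old = c.copy()
y, z = num_old.copy(), den_old.copy()
for _ in range(2000):
    # Fast time scale: dynamic average consensus on the two running quantities.
    num_new = c * x - c * (x - a)
    den_new = c.copy()
    y = W @ (y + num_new - num_old)
    z = W @ (z + den_new - den_old)
    num_old, den_old = num_new, den_new
    # Slow time scale: each agent moves toward its local Newton-Raphson ratio.
    x = (1.0 - eps) * x + eps * (y / z)

print("agent estimates:", np.round(x, 4))
print("true minimizer: ", round((c * a).sum() / c.sum(), 4))
```

The ratio y / z plays the role of the average-consensus ratios in the abstract: y tracks the network average of the Hessian-weighted targets and z the average curvature, so their quotient approximates a Newton-Raphson step that the slow update then follows.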