Nested Distributed Gradient Methods with Adaptive Quantized Communication
In this paper, we consider minimizing a sum of local convex objective
functions in a distributed setting, where communication can be costly. We
propose and analyze a class of nested distributed gradient methods with
adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing
multiple quantized communication steps on the rate of convergence and on the
size of the neighborhood of convergence, and prove R-linear convergence to the
exact solution with an increasing number of consensus steps and adaptive
quantization. We test the performance of the method, as well as some practical
variants, on quadratic functions, and show the effects of multiple quantized
communication steps in terms of iterations/gradient evaluations, communication,
and cost.
Comment: 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:1709.0299
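To make the mechanism concrete, below is a minimal numerical sketch of a nested gradient/quantized-consensus iteration on local quadratics. The ring topology, the uniform quantizer, the shrinking quantization step, the rule of k consensus rounds at iteration k, and all constants are illustrative assumptions for this example, not the NEAR-DGD+Q parameters or implementation from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, problem dimension
A = [np.diag(rng.uniform(1.0, 4.0, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

# Ring-graph mixing matrix (symmetric, doubly stochastic) -- an assumed topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i + 1) % n], W[i, (i - 1) % n] = 0.5, 0.25, 0.25

def grad(i, x):                               # gradient of the local quadratic f_i(x) = 0.5 x'A_i x - b_i'x
    return A[i] @ x - b[i]

def quantize(v, delta):                       # uniform quantizer with step size delta (illustrative choice)
    return delta * np.round(v / delta)

x = np.zeros((n, d))
alpha = 0.05                                  # gradient step size (arbitrary)
for k in range(1, 201):
    y = np.array([x[i] - alpha * grad(i, x[i]) for i in range(n)])
    delta_k = 0.5 ** k                        # adaptive (shrinking) quantization step
    for _ in range(k):                        # nested consensus: k quantized communication rounds at iteration k
        y = W @ quantize(y, delta_k)
    x = y

x_star = np.linalg.solve(sum(A), sum(b))      # centralized optimum for comparison
print("max distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())

Increasing the number of quantized consensus rounds while shrinking the quantization step is what drives the iterates toward the exact solution in this toy run; with a single, fixed-precision round they would only reach a neighborhood of it.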
Multi-consensus Decentralized Accelerated Gradient Descent
This paper considers the decentralized optimization problem, which has
applications in large scale machine learning, sensor networks, and control
theory. We propose a novel algorithm that can achieve near optimal
communication complexity, matching the known lower bound up to a logarithmic
factor of the condition number of the problem. Our theoretical results give an
affirmative answer to the open problem of whether there exists an algorithm
that can achieve a communication complexity (nearly) matching the lower bound
that depends on the global condition number instead of the local one. Moreover,
the proposed algorithm achieves the optimal computation complexity matching the
lower bound up to universal constants. Furthermore, to achieve a linear
convergence rate, our algorithm \emph{doesn't} require the individual functions
to be (strongly) convex. Our method relies on a novel combination of known
techniques including Nesterov's accelerated gradient descent, multi-consensus
and gradient-tracking. The analysis is new, and may be applied to other related
problems. Empirical studies demonstrate the effectiveness of our method for
machine learning applications.
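As an illustration of the three ingredients mentioned above (Nesterov acceleration, multi-consensus, and gradient tracking), here is a rough Python sketch on local quadratics. The ring mixing matrix, the step size, the momentum coefficient, and the number of mixing rounds per iteration are assumed values chosen for the example, not the constants analyzed in the paper.

import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4                                   # number of agents, problem dimension
A = [np.diag(rng.uniform(1.0, 10.0, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

# Ring-graph mixing matrix (symmetric, doubly stochastic) -- an assumed topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i + 1) % n], W[i, (i - 1) % n] = 0.5, 0.25, 0.25

def grads(X):                                 # stack of local gradients, one row per agent
    return np.array([A[i] @ X[i] - b[i] for i in range(n)])

def mix(X, rounds):                           # multi-consensus: apply the mixing matrix several times
    for _ in range(rounds):
        X = W @ X
    return X

L, mu = 10.0, 1.0                             # smoothness / strong-convexity bounds (hold by construction)
eta = 1.0 / L                                 # step size (illustrative)
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))   # Nesterov momentum
T = 10                                        # consensus rounds per iteration (assumed)

x = np.zeros((n, d))
y = x.copy()
s = grads(y)                                  # gradient tracker, initialized with local gradients
g_prev = s.copy()
for _ in range(400):
    x_new = mix(y - eta * s, T)               # accelerated gradient step followed by multi-consensus
    y = mix(x_new + beta * (x_new - x), T)
    g = grads(y)
    s = mix(s + g - g_prev, T)                # gradient-tracking update of the average-gradient estimate
    x, g_prev = x_new, g

x_star = np.linalg.solve(sum(A), sum(b))      # centralized optimum for comparison
print("max distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())

The tracking variable s keeps each agent's search direction close to the average gradient, which is what lets the sketch depend on the global rather than the local conditioning; the repeated mixing rounds keep the consensus error small enough for the accelerated step to remain stable.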
Asynchronous Distributed Optimization over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence
In this work we focus on the problem of minimizing the sum of convex cost
functions in a distributed fashion over a peer-to-peer network. In particular,
we are interested in the case in which communications between nodes are prone
to failures and the agents are not synchronized among themselves. We address
the problem by proposing a modified version of the relaxed ADMM, which corresponds
to the Peaceman-Rachford splitting method applied to the dual. By exploiting
results from operator theory, we are able to prove the almost sure convergence
of the proposed algorithm under general assumptions on the distribution of
communication loss and node activation events. By further assuming the cost
functions to be strongly convex, we prove the linear convergence of the
algorithm in mean to a neighborhood of the optimal solution, and provide an
upper bound on the convergence rate. Finally, we present numerical results
testing the proposed method in different scenarios.
Comment: To appear in IEEE Transactions on Automatic Control
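A simplified simulation of the setting described above, with random node activations and independently lost messages, might look as follows. The Peaceman-Rachford-style reflection message, the relaxed update at the receiver, the ring topology, and all parameter values (penalty, relaxation, loss probability) are illustrative assumptions rather than the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 2                                   # number of nodes, problem dimension
A = [np.diag(rng.uniform(1.0, 5.0, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # ring topology (assumed)
neighbors = {i: [j for e in edges for j in e if i in e and j != i] for i in range(n)}

rho, alpha, p_loss = 1.0, 0.5, 0.2            # penalty, relaxation, packet-loss probability (all arbitrary)
z = {(i, j): np.zeros(d) for i in range(n) for j in neighbors[i]}   # one auxiliary variable per directed edge
x = np.zeros((n, d))

for _ in range(4000):
    i = rng.integers(n)                       # asynchronous operation: one random node wakes up
    deg = len(neighbors[i])
    # Local proximal step: minimize f_i(x) - x' sum_j z_ij + (rho*deg/2)||x||^2 (closed form for quadratics).
    x[i] = np.linalg.solve(A[i] + rho * deg * np.eye(d),
                           b[i] + sum(z[(i, j)] for j in neighbors[i]))
    for j in neighbors[i]:
        if rng.random() < p_loss:             # the message to neighbor j is lost; z[(j, i)] stays unchanged
            continue
        msg = 2.0 * rho * x[i] - z[(i, j)]    # reflection-style message
        z[(j, i)] = (1 - alpha) * z[(j, i)] + alpha * msg   # relaxed update at the receiver

x_star = np.linalg.solve(sum(A), sum(b))      # centralized optimum for comparison
print("max distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())

Because a lost message simply leaves the receiver's auxiliary variable untouched, the iteration keeps contracting on average, which is the intuition behind the almost sure and mean linear convergence results stated in the abstract.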