An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums
Modern large-scale finite-sum optimization relies on two key aspects:
distribution and stochastic updates. For smooth and strongly convex problems,
existing decentralized algorithms are slower than modern accelerated
variance-reduced stochastic algorithms when run on a single machine, and are
therefore not efficient. Centralized algorithms are fast, but their scaling is
limited by global aggregation steps that result in communication bottlenecks.
In this work, we propose an efficient \textbf{A}ccelerated
\textbf{D}ecentralized stochastic algorithm for \textbf{F}inite \textbf{S}ums
named ADFS, which uses local stochastic proximal updates and randomized
pairwise communications between nodes. On $n$ machines, ADFS learns from $nm$
samples in the same time it takes optimal algorithms to learn from $m$ samples
on one machine. This scaling holds until a critical network size is reached,
which depends on communication delays, on the number of samples $m$, and on the
network topology. We provide a theoretical analysis based on a novel augmented
graph approach combined with a precise evaluation of synchronization times and
an extension of the accelerated proximal coordinate gradient algorithm to
arbitrary sampling. We illustrate the improvement of ADFS over state-of-the-art
decentralized approaches with experiments.

Comment: Code available in source files. arXiv admin note: substantial text
overlap with arXiv:1901.0986
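The two building blocks named in the abstract, local stochastic proximal updates and pairwise communications between nodes, can be made concrete on a toy consensus problem. The sketch below is not the paper's ADFS algorithm: it replaces randomized pairwise communications with a deterministic round-robin over ring edges, uses scalar quadratic losses with one sample per node, and picks an illustrative decaying step size, all assumptions of this sketch.

```python
def prox_quad(v, a, eta):
    # Proximal operator of eta * 0.5 * (x - a)^2 evaluated at v.
    return (v + eta * a) / (1.0 + eta)

# Hypothetical setup: 4 nodes on a ring, one scalar sample per node.
targets = [1.0, 2.0, 3.0, 4.0]            # local data; global optimum is the mean, 2.5
x = [0.0] * len(targets)                  # each node's current estimate
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # ring topology

for t in range(20000):
    i, j = edges[t % len(edges)]          # pairwise communication: two neighbors average
    x[i] = x[j] = 0.5 * (x[i] + x[j])
    eta = 1.0 / (1.0 + 0.01 * t)          # decaying proximal step size (illustrative)
    for k in range(len(x)):               # local proximal update at every node
        x[k] = prox_quad(x[k], targets[k], eta)

print(x)  # all estimates end up close to the global optimum, 2.5
```

Pairwise averaging preserves the global sum of the estimates, so the gossip steps drive consensus while the proximal steps pull the consensus value toward the minimizer of the averaged objective; avoiding any global aggregation step is what removes the communication bottleneck the abstract mentions.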
Regularized Jacobi iteration for decentralized convex optimization with separable constraints
We consider multi-agent, convex optimization programs subject to separable
constraints, where the constraint function of each agent involves only its
local decision vector, while the decision vectors of all agents are coupled via
a common objective function. We focus on a regularized variant of the so-called
Jacobi algorithm for decentralized computation in such problems. We first
consider the case where the objective function is quadratic, and provide a
fixed-point theoretic analysis showing that the algorithm converges to a
minimizer of the centralized problem. Moreover, we quantify the potential
benefits of such an iterative scheme by comparing it against a scaled projected
gradient algorithm. We then consider the general case and show that all limit
points of the proposed iteration are optimal solutions of the centralized
problem. The efficacy of the proposed algorithm is illustrated by applying it
to the problem of optimal charging of electric vehicles, where, as opposed to
earlier approaches, we show convergence to an optimal charging scheme for a
finite, possibly large, number of vehicles.
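For the quadratic case discussed in the abstract, the regularized Jacobi update has a closed form: each agent minimizes the coupled quadratic over its own variable, holding the others at their previous iterates and adding a proximal penalty toward its own previous iterate. The sketch below assumes one scalar decision variable per agent and omits the separable local constraints treated in the paper; the matrix and regularization weight are illustrative, not from the paper.

```python
def regularized_jacobi(Q, b, rho, iters=300):
    """Regularized Jacobi iteration for min_x 0.5*x^T Q x - b^T x.

    Agent i, given the other agents' previous iterates x[j], minimizes
    its own coordinate's objective plus 0.5*rho*(x_i - x_i_prev)^2,
    which for a quadratic has the closed form used below.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Simultaneous (Jacobi-style) update: every agent uses the old x.
        x = [
            (b[i]
             - sum(Q[i][j] * x[j] for j in range(n) if j != i)
             + rho * x[i]) / (Q[i][i] + rho)
            for i in range(n)
        ]
    return x

# Illustrative positive definite coupling matrix (an assumption of this sketch).
Q = [[4.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = regularized_jacobi(Q, b, rho=1.0)
residual = max(abs(sum(Q[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
print(x, residual)  # residual is tiny: x solves Q x = b, the centralized optimum
```

The regularization rho slows each agent's step, which is what restores convergence of the plain Jacobi scheme; here the fixed point satisfies Q x = b, i.e., the iterates reach a minimizer of the centralized problem, matching the fixed-point analysis described in the abstract.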