Asynchronous Distributed Semi-Stochastic Gradient Optimization
With the recent proliferation of large-scale learning problems, there has been
considerable interest in distributed machine learning algorithms, particularly
those based on stochastic gradient descent (SGD) and its variants.
However, existing algorithms either suffer from slow convergence due to the
inherent variance of stochastic gradients, or have a fast linear convergence
rate but at the expense of poorer solution quality. In this paper, we combine
their merits by proposing a fast distributed asynchronous SGD-based algorithm
with variance reduction. The algorithm admits a constant learning rate and is
guaranteed to converge linearly to the optimal solution. Experiments on the
Google Cloud Computing Platform demonstrate that the proposed algorithm
outperforms state-of-the-art distributed asynchronous algorithms in terms of
both wall-clock time and solution quality.
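The variance-reduction device behind this kind of algorithm is, in its serial form, the SVRG-style correction: each stochastic gradient is recentered around a periodically refreshed full gradient, so its variance vanishes near the optimum and a constant learning rate becomes viable. Below is a minimal serial sketch on a toy ridge-regression instance; the distributed asynchronous machinery (parameter server, concurrent workers) is omitted, and the problem data, step size, and epoch counts are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

# Toy ridge-regression instance (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d, lam = 200, 10, 0.1
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def grad_i(w, i):
    """Stochastic gradient: i-th loss component plus regularizer."""
    return (X[i] @ w - y[i]) * X[i] + lam * w

def full_grad(w):
    """Full gradient, recomputed only at snapshots."""
    return X.T @ (X @ w - y) / n + lam * w

def svrg(w0, lr=0.01, epochs=20, m=2 * n):
    """Serial SVRG-style sketch; the paper runs the inner loop
    asynchronously on distributed workers via a parameter server."""
    w = w0.copy()
    for _ in range(epochs):
        w_snap, mu = w.copy(), full_grad(w)   # periodic full-gradient snapshot
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced gradient: unbiased, and its variance shrinks
            # to zero as w and w_snap both approach the optimum.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g                        # constant learning rate
    return w

w = svrg(np.zeros(d))
print("gradient norm at the final iterate:", np.linalg.norm(full_grad(w)))
```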
A stochastic approximation algorithm for stochastic semidefinite programming
Motivated by applications to multi-antenna wireless networks, we propose a
distributed and asynchronous algorithm for stochastic semidefinite programming.
This algorithm is a stochastic approximation of a continuous-time matrix
exponential scheme regularized by the addition of an entropy-like term to the
problem's objective function. We show that the resulting algorithm converges
almost surely to an ε-approximation of the optimal solution, requiring
only an unbiased estimate of the gradient of the problem's stochastic
objective. When applied to throughput maximization in wireless multiple-input
and multiple-output (MIMO) systems, the proposed algorithm retains its
convergence properties under a wide array of mobility impediments such as user
update asynchronicities, random delays and/or ergodically changing channels.
Our theoretical analysis is complemented by extensive numerical simulations
which illustrate the robustness and scalability of the proposed method in
realistic network conditions.
Comment: 25 pages, 4 figures
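For intuition, the entropy-regularized matrix exponential scheme can be sketched as follows: noisy gradient matrices are accumulated into a score matrix Y, which a matrix-exponential map sends back to the feasible set of positive-semidefinite matrices with bounded trace, the entropic analogue of a mirror step. The sketch below instantiates this for MIMO throughput maximization with a fresh channel draw per step; the antenna counts, the 1/√t step-size schedule, and the trace normalization are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n_tx, n_rx, P = 4, 4, 1.0                    # antennas, total transmit power

def sample_channel():
    """Ergodically changing channel: i.i.d. Rayleigh draw each step."""
    return (rng.standard_normal((n_rx, n_tx))
            + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

def stoch_grad(Q, H):
    """Unbiased estimate of the gradient of E[log det(I + H Q H*)] at Q."""
    M = np.eye(n_rx) + H @ Q @ H.conj().T
    return H.conj().T @ np.linalg.inv(M) @ H

def matrix_exp_learning(steps=500):
    Y = np.zeros((n_tx, n_tx), dtype=complex)    # cumulative score matrix
    for t in range(1, steps + 1):
        shift = np.linalg.eigvalsh(Y).max()      # stabilizing shift; cancels
        E = expm(Y - shift * np.eye(n_tx))       # in the normalization below
        Q = P * E / np.trace(E).real             # mirror step onto the PSD
        V = stoch_grad(Q, sample_channel())      # matrices with tr(Q) = P
        Y += V / np.sqrt(t)                      # vanishing step size
    return Q

Q = matrix_exp_learning()
print("eigenvalues of Q:", np.round(np.linalg.eigvalsh(Q), 3))
```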
A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
Based on the idea of randomized coordinate descent of α-averaged
operators, a randomized primal-dual optimization algorithm is introduced, where
a random subset of coordinates is updated at each iteration. The algorithm
builds upon a variant of a recent (deterministic) algorithm proposed by Vũ
and Condat that includes the well-known ADMM as a particular case. The
resulting algorithm is used to solve a distributed optimization problem
asynchronously. The agents of a network, each having a separate cost function
containing a differentiable term, seek to find a consensus on the minimum of the aggregate
objective. The method yields an algorithm where at each iteration, a random
subset of agents wake up, update their local estimates, exchange some data with
their neighbors, and go idle. Numerical results demonstrate the attractive
performance of the method. The general approach can be naturally adapted to
other situations where coordinate descent convex optimization algorithms are
used with a random choice of the coordinates.
Comment: 10 pages
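To make the randomized-coordinate idea concrete, the sketch below applies a Vũ-Condat-type primal-dual step to a small total-variation denoising problem and, at each iteration, keeps the update only on a random subset of primal and dual coordinates, i.e., coordinate descent on the averaged fixed-point operator that the abstract describes. The problem instance, step sizes, and selection probability are illustrative assumptions, and the distributed agent/neighbor protocol is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 50, 0.5
c = np.cumsum(rng.standard_normal(n))           # noisy 1-D signal to denoise
D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)      # finite differences, L = D

# Vu-Condat step sizes: 1/tau - sigma*||L||^2 >= L_f/2, with L_f = 1 here.
tau = 0.25
sigma = 0.9 * (1 / tau - 0.5) / np.linalg.norm(D, 2) ** 2

def vu_condat_step(x, y):
    """Full primal-dual step for min_x 0.5*||x - c||^2 + lam*||D x||_1."""
    x_new = x - tau * ((x - c) + D.T @ y)       # gradient of f + dual coupling
    y_new = np.clip(y + sigma * D @ (2 * x_new - x), -lam, lam)  # prox of h*
    return x_new, y_new

x, y = np.zeros(n), np.zeros(n - 1)
for _ in range(3000):
    x_full, y_full = vu_condat_step(x, y)       # apply the averaged operator,
    keep_x = rng.random(n) < 0.3                # then keep the update only on
    keep_y = rng.random(n - 1) < 0.3            # a random coordinate subset
    x = np.where(keep_x, x_full, x)
    y = np.where(keep_y, y_full, y)
print("objective:", 0.5 * np.sum((x - c) ** 2) + lam * np.sum(np.abs(D @ x)))
```

Computing the full step and then masking it keeps the sketch short; a practical implementation would evaluate only the selected coordinates, which is what lets idle agents skip work entirely.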