A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
Based on the idea of randomized coordinate descent of α-averaged
operators, a randomized primal-dual optimization algorithm is introduced, in which
a random subset of coordinates is updated at each iteration. The algorithm
builds upon a variant of a recent (deterministic) algorithm proposed by Vũ
and Condat that includes the well-known ADMM as a particular case. The obtained
algorithm is used to solve a distributed optimization problem asynchronously. A
network of agents, each having a separate cost function containing a
differentiable term, seeks a consensus on the minimizer of the aggregate
objective. The method yields an algorithm in which, at each iteration, a random
subset of agents wake up, update their local estimates, exchange data with
their neighbors, and go idle. Numerical results demonstrate the attractive
performance of the method. The general approach can be naturally adapted to
other situations where coordinate descent convex optimization algorithms are
used with a random choice of the coordinates.
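As a concrete illustration of the randomized coordinate update described above, the following Python sketch applies a Vũ–Condat-style primal-dual step to min_x f(x) + ||x||_1 + ||Lx||_1, updating only a random subset of primal and dual coordinates at each iteration. This is a minimal sketch, not the paper's exact scheme: the step sizes tau and sigma, the independent Bernoulli coordinate-selection rule, and the decoupling of primal and dual activations are assumptions made for illustration.

    import numpy as np

    def prox_l1(v, t):
        # Soft-thresholding: proximal operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def randomized_primal_dual(grad_f, L, tau, sigma, n, m,
                               iters=1000, p=0.3, seed=0):
        # Hypothetical sketch of a Vu-Condat-style iteration in which each
        # primal/dual coordinate is updated with probability p per iteration.
        rng = np.random.default_rng(seed)
        x, y = np.zeros(n), np.zeros(m)
        for _ in range(iters):
            # Primal forward-backward step on a random coordinate subset.
            S = rng.random(n) < p
            x_new = x.copy()
            step = x - tau * (grad_f(x) + L.T @ y)
            x_new[S] = prox_l1(step, tau)[S]
            # Dual step via the Moreau identity:
            # prox_{sigma h*}(v) = v - sigma * prox_{h/sigma}(v / sigma).
            T = rng.random(m) < p
            v = y + sigma * (L @ (2.0 * x_new - x))
            y_new = y.copy()
            y_new[T] = (v - sigma * prox_l1(v / sigma, 1.0 / sigma))[T]
            x, y = x_new, y_new
        return x

In the distributed setting of the abstract, the active coordinate subsets would correspond to the agents that wake up at a given iteration, with L encoding the network's consensus constraints.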
A Class of Randomized Primal-Dual Algorithms for Distributed Optimization
Based on a preconditioned version of the randomized block-coordinate
forward-backward algorithm recently proposed in [Combettes, Pesquet, 2014],
several variants of block-coordinate primal-dual algorithms are designed to
solve a wide array of monotone inclusion problems. These methods rely on
sweeps over blocks of variables that are activated at each iteration
according to a random rule, and they allow stochastic errors in the evaluation
of the involved operators. This framework is then employed to derive
block-coordinate primal-dual proximal algorithms for solving composite convex
variational problems. The resulting implementations may be useful for
reducing computational complexity and memory requirements. Furthermore, we show
that the proposed approach can be used to develop novel asynchronous
distributed primal-dual algorithms in a multi-agent context.
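The random activation rule can be illustrated with a stripped-down forward-backward skeleton; the preconditioning, dual variables, and stochastic errors of the full primal-dual scheme are omitted. A minimal sketch, assuming each block j is given by an index array blocks[j] with proximity operator proxes[j]; these names and the activation probability p are hypothetical.

    import numpy as np

    def block_coordinate_forward_backward(grad_f, proxes, blocks, x0,
                                          gamma=0.1, iters=500, p=0.5, seed=0):
        # Hypothetical sketch: minimize f(x) + sum_j g_j(x_j) by activating
        # each block independently with probability p at every iteration and
        # applying a proximal-gradient (forward-backward) step only to the
        # activated blocks; inactive blocks keep their current value.
        rng = np.random.default_rng(seed)
        x = x0.copy()
        for _ in range(iters):
            g = grad_f(x)  # block j only reads the entries g[blocks[j]]
            for j, idx in enumerate(blocks):
                if rng.random() < p:  # random activation rule
                    x[idx] = proxes[j](x[idx] - gamma * g[idx], gamma)
        return x

In the primal-dual variants described in the abstract, dual blocks attached to the monotone operators would be activated by the same random rule, and errors in evaluating grad_f or proxes would be tolerated under suitable summability conditions.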
SCOPE: Scalable Composite Optimization for Learning on Spark
Many machine learning models, such as logistic regression (LR) and support
vector machine (SVM), can be formulated as composite optimization problems.
Recently, many distributed stochastic optimization (DSO) methods have been
proposed to solve large-scale composite optimization problems, and they have
shown better performance than traditional batch methods. However, most of these
DSO methods are not scalable enough. In this paper, we propose a novel DSO
method, called scalable composite optimization for learning (SCOPE), and
implement it on the fault-tolerant distributed platform Spark. SCOPE is both
computation-efficient and communication-efficient. Theoretical analysis shows
that SCOPE converges at a linear rate when the objective
function is convex. Furthermore, empirical results on real datasets show that
SCOPE can outperform other state-of-the-art distributed learning methods on
Spark, including both batch learning methods and DSO methods.
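The abstract does not spell out SCOPE's update rule, but the general pattern of a Spark-based DSO method can be sketched: the driver broadcasts the current iterate and a full gradient, each partition runs a variance-reduced local loop, and the driver averages the results. Everything below (the SVRG-style correction, the l1 prox step, the synthetic data, and names such as local_update, eta, T) is an assumption made for illustration, not SCOPE's actual algorithm.

    import numpy as np
    from pyspark import SparkContext

    def grad_i(w, x, y):
        # Gradient of the logistic loss on one example (y in {-1, +1}).
        return -y * x / (1.0 + np.exp(y * np.dot(w, x)))

    def prox_l1(v, t):
        # Soft-thresholding: proximal operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def local_update(partition, w_bc, z_bc, eta, T, lam):
        # Each worker runs T variance-reduced stochastic steps on its own
        # partition (an SVRG-style rule, assumed here for illustration).
        data = list(partition)
        rng = np.random.default_rng()
        w0, z, u = w_bc.value, z_bc.value, w_bc.value.copy()
        for _ in range(T):
            x, y = data[rng.integers(len(data))]
            g = grad_i(u, x, y) - grad_i(w0, x, y) + z
            u = prox_l1(u - eta * g, eta * lam)
        yield (u, 1)

    # Synthetic toy data so the sketch is self-contained.
    dim = 10
    rng0 = np.random.default_rng(0)
    dataset = [(rng0.normal(size=dim), 1 if rng0.random() < 0.5 else -1)
               for _ in range(1000)]

    sc = SparkContext(appName="dso-sketch")
    points = sc.parallelize(dataset).cache()
    n, w = points.count(), np.zeros(dim)
    for epoch in range(20):
        w_bc = sc.broadcast(w)
        z = points.map(lambda xy: grad_i(w_bc.value, xy[0], xy[1])) \
                  .reduce(np.add) / n          # full gradient at w
        z_bc = sc.broadcast(z)
        u_sum, k = points.mapPartitions(
            lambda part: local_update(part, w_bc, z_bc, 0.1, 100, 1e-4)) \
            .reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]))
        w = u_sum / k                          # average the workers' iterates

Under this pattern, each epoch costs two broadcasts and two reductions regardless of how many local steps the workers take, which is the kind of communication efficiency the abstract emphasizes.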