Accelerated Stochastic ADMM with Variance Reduction
Alternating Direction Method of Multipliers (ADMM) is a popular method for
solving machine learning problems. Stochastic ADMM was first proposed to
reduce the per-iteration computational complexity, making it better suited
to big data problems. Recently, variance reduction techniques such as
SAG-ADMM and SVRG-ADMM have been integrated with stochastic ADMM to obtain a
fast convergence rate, but their convergence is still suboptimal w.r.t. the
smoothness constant. In this paper, we propose a new accelerated stochastic
ADMM algorithm with variance reduction, which enjoys faster convergence than
all other stochastic ADMM algorithms. We theoretically analyze its
convergence rate and show that its dependence on the smoothness constant is
optimal. We also empirically validate its effectiveness and show its
superiority over other stochastic ADMM algorithms.
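To make the variance-reduction idea concrete, the following is a minimal Python sketch of the SVRG-ADMM baseline the abstract builds on (not the accelerated algorithm proposed in the paper), applied to an assumed lasso-style problem min_x (1/2n)||Dx - y||^2 + lam*||z||_1 subject to x - z = 0. The problem data, step size, and epoch length are illustrative assumptions.

```python
import numpy as np

# Minimal SVRG-ADMM sketch (assumed lasso-style problem; data, step
# size, and epoch length are illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d = 200, 20
D = rng.standard_normal((n, d))
y = D @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam, rho, eta = 0.1, 1.0, 1e-3
epochs, m = 30, n  # m = inner-loop length per epoch

def grad_i(x, i):   # stochastic gradient of the i-th loss term
    return D[i] * (D[i] @ x - y[i])

def full_grad(x):   # full gradient, recomputed once per epoch
    return D.T @ (D @ x - y) / n

x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)  # u: scaled dual
for _ in range(epochs):
    x_tilde = x.copy()
    mu = full_grad(x_tilde)  # snapshot gradient (SVRG anchor point)
    for _ in range(m):
        i = rng.integers(n)
        # variance-reduced gradient estimate of the smooth part
        v = grad_i(x, i) - grad_i(x_tilde, i) + mu
        # linearized x-update: gradient step on f plus the
        # augmented-Lagrangian term rho/2 * ||x - z + u||^2
        x = x - eta * (v + rho * (x - z + u))
    # exact z-update: soft-thresholding (prox of (lam/rho)*||.||_1)
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
    u = u + x - z  # dual ascent step on the scaled multiplier
```

The snapshot gradient `mu` keeps the stochastic estimate `v` unbiased while shrinking its variance as `x` approaches `x_tilde`, which is what allows a constant step size rather than a decaying one.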
Alternating Direction Method of Multipliers for Decomposable Saddle-Point Problems
Saddle-point problems appear in various settings including machine learning,
zero-sum stochastic games, and regression problems. We consider decomposable
saddle-point problems and study an extension of the alternating direction
method of multipliers to such saddle-point problems. Instead of solving the
original saddle-point problem directly, this algorithm solves smaller
saddle-point problems by exploiting the decomposable structure. We show the
convergence of this algorithm for convex-concave saddle-point problems under a
mild assumption. We also provide a sufficient condition for which the
assumption holds. We demonstrate the convergence properties of the saddle-point
alternating direction method of multipliers with numerical examples on a power
allocation problem in communication channels and a network routing problem with
adversarial costs.
Comment: Accepted to 58th Annual Allerton Conference on Communication, Control, and Computing
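As a concrete illustration of the decomposition idea, the following is a minimal Python sketch of standard two-block ADMM applied to the Lagrangian saddle-point of an assumed linearly constrained problem: the convex-concave Lagrangian L(x1, x2, y) = f1(x1) + f2(x2) + y^T(A1 x1 + A2 x2 - b) splits into smaller per-block minimizations plus a dual ascent step. The paper's algorithm handles more general decomposable saddle-point problems; all problem data below are illustrative assumptions.

```python
import numpy as np

# Two-block ADMM on the saddle-point (Lagrangian) form of
#   min f1(x1) + f2(x2)  s.t.  A1 x1 + A2 x2 = b,
# with assumed quadratics f_k(x) = 0.5*||x||^2 so each block
# subproblem has a closed-form solution.
rng = np.random.default_rng(1)
d, p = 10, 5
A1, A2 = rng.standard_normal((p, d)), rng.standard_normal((p, d))
b = rng.standard_normal(p)
rho = 1.0
x1 = np.zeros(d); x2 = np.zeros(d); y = np.zeros(p)
I = np.eye(d)
for _ in range(200):
    # block 1: argmin_x 0.5||x||^2 + y^T A1 x
    #                   + rho/2 * ||A1 x + A2 x2 - b||^2
    x1 = np.linalg.solve(I + rho * A1.T @ A1,
                         -A1.T @ (y + rho * (A2 @ x2 - b)))
    # block 2: the same small subproblem with block 1 held fixed
    x2 = np.linalg.solve(I + rho * A2.T @ A2,
                         -A2.T @ (y + rho * (A1 @ x1 - b)))
    y = y + rho * (A1 @ x1 + A2 @ x2 - b)  # dual (max-player) ascent
print("constraint residual:",
      np.linalg.norm(A1 @ x1 + A2 @ x2 - b))
```

Each iteration touches only one block's variables at a time, so the min-player's problem never has to be solved jointly; the max-player (the multiplier y) coordinates the blocks through the ascent step.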