Convergence Analysis of Accelerated Stochastic Gradient Descent under the Growth Condition
We study the convergence of accelerated stochastic gradient descent for
strongly convex objectives under the growth condition, which states that the
variance of the stochastic gradient is bounded by the sum of a multiplicative
part that grows with the full gradient and a constant additive part. Through
the lens of the growth condition, we investigate four widely used accelerated
methods: Nesterov's accelerated method (NAM), the robust momentum method
(RMM), the accelerated dual averaging method (ADAM), and implicit ADAM
(iADAM). While these methods are known to improve the convergence rate of SGD
under the condition that the stochastic gradient has bounded variance, it is
not well understood how their convergence rates are affected by multiplicative
noise. In this paper, we show that all of these methods converge to a
neighborhood of the optimum with accelerated convergence rates (compared to
SGD) even under the growth condition. In particular, NAM, RMM, and iADAM enjoy
acceleration only under mild multiplicative noise, while ADAM enjoys
acceleration even under large multiplicative noise. Furthermore, we propose a
generic tail-averaging scheme that allows the accelerated rates of ADAM and
iADAM to nearly attain the theoretical lower bound (up to a logarithmic factor
in the variance term).
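For concreteness, one standard way to write the growth condition on the stochastic gradient, together with a tail average over the last half of the iterates, is sketched below; the symbols \rho and \sigma^2 and the choice of averaging window are illustrative assumptions, not notation taken from the paper.

```latex
% Hedged sketch: a common form of the growth condition on the stochastic
% gradient g(x), and a tail average over the last half of the iterates.
% The constants \rho, \sigma^2 and the window T/2 are assumptions.
\mathbb{E}\!\left[\|g(x) - \nabla f(x)\|^2\right]
    \le \rho\,\|\nabla f(x)\|^2 + \sigma^2,
\qquad
\bar{x}_T = \frac{1}{\lceil T/2 \rceil}\sum_{t = T - \lceil T/2 \rceil + 1}^{T} x_t.
```

Setting \rho = 0 recovers the familiar bounded-variance assumption, so the growth condition strictly generalizes it.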
Second-Order Stochastic Optimization for Machine Learning in Linear Time
First-order stochastic methods are the state of the art in large-scale
machine learning optimization owing to their efficient per-iteration
complexity. Second-order methods, while able to provide faster convergence,
have been much less explored due to the high cost of computing the
second-order information. In this paper, we develop second-order stochastic
methods for optimization problems in machine learning that match the
per-iteration cost of gradient-based methods and, in certain settings, improve
upon the overall running time of popular first-order methods. Furthermore, our
algorithm has the desirable property of being implementable in time linear in
the sparsity of the input data.
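As one illustration of how a second-order step can be made cheap, the sketch below estimates a Hessian-inverse-gradient product with a truncated Neumann series built from single-sample Hessian-vector products, each costing time linear in the sparsity of one data point. The ridge-regression setup, the names `sampled_hvp` and `lissa_step`, and all constants are assumptions chosen for illustration, not the paper's algorithm or API.

```python
import numpy as np

# Hedged sketch: Neumann-series estimation of H^{-1} g using stochastic
# Hessian-vector products, one common route to second-order steps whose
# per-iteration cost is linear in the sparsity of a single data point.

rng = np.random.default_rng(0)

n, d = 1000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam = 1e-2  # ridge parameter (assumed); keeps per-sample Hessians well conditioned

def grad(x):
    # Full gradient of the ridge objective (1/2n)||Ax - b||^2 + (lam/2)||x||^2.
    return A.T @ (A @ x - b) / n + lam * x

def sampled_hvp(x, v):
    # Stochastic Hessian-vector product from one sampled row a_i:
    # (a_i a_i^T + lam I) v, computable in time linear in nnz(a_i).
    i = rng.integers(n)
    a = A[i]
    return a * (a @ v) + lam * v

def lissa_step(x, depth=300, scale=0.01):
    # Estimate H^{-1} grad(x) via the truncated Neumann series
    # H^{-1} = scale * sum_j (I - scale * H)^j, with H sampled afresh
    # for each term; `scale` must satisfy scale * ||H|| < 1.
    g = grad(x)
    u = g.copy()
    for _ in range(depth):
        u = g + u - scale * sampled_hvp(x, u)
    # u approximates (1/scale) * H^{-1} g, so this is roughly a Newton step.
    return x - scale * u

x = np.zeros(d)
for t in range(50):
    x = lissa_step(x)
print("gradient norm after 50 Newton-like steps:", np.linalg.norm(grad(x)))
```

Because each inner update touches only one sampled row, the cost per Newton-like step is O(depth * nnz(row)) rather than the O(n d^2) of forming the full Hessian, which is the kind of trade-off the abstract refers to.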