On the linear convergence of the stochastic gradient method with constant step-size
The strong growth condition (SGC) is known to be a sufficient condition for
linear convergence of the stochastic gradient method using a constant step-size
(SGM-CS). In this paper, we provide a necessary condition for the linear
convergence of SGM-CS that is weaker than SGC. Moreover, when this necessary
condition is violated up to an additive perturbation σ, we show that both
the projected stochastic gradient method using a constant step-size (PSGM-CS)
and the proximal stochastic gradient method exhibit linear convergence to a
noise-dominated region, whose distance to the optimal solution is proportional
to σ.
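
As a point of reference, one common form of the SGC requires
E||∇f_i(x)||^2 <= ρ ||∇f(x)||^2 for all x. The following minimal sketch of the
SGM-CS iteration runs on a consistent least-squares problem, where interpolation
holds and linear convergence to the solution can be observed; the problem data,
step-size, and iteration count are illustrative assumptions, not taken from the
paper.

    import numpy as np

    # f(x) = (1/n) sum_i f_i(x) with f_i(x) = 0.5 * (a_i @ x - b_i)**2.
    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.standard_normal((n, d))
    x_star = rng.standard_normal(d)
    b = A @ x_star              # consistent system: every f_i is minimized at
                                # x_star, so an SGC-type condition holds

    gamma = 0.01                # constant step-size
    x = np.zeros(d)
    for k in range(5000):
        i = rng.integers(n)                  # sample one component f_i
        grad_i = (A[i] @ x - b[i]) * A[i]    # stochastic gradient of f_i
        x = x - gamma * grad_i               # SGM-CS: x_{k+1} = x_k - gamma * grad f_i(x_k)

    print(np.linalg.norm(x - x_star))        # small: linear convergence to x_star

If the system were inconsistent (b perturbed by noise), interpolation would
fail and the iterates would instead stall in a noise-dominated region around
the solution, matching the behavior the abstract describes.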
Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration
Stochastic approximation techniques have been used in various contexts in
data science. We propose a stochastic version of the forward-backward algorithm
for minimizing the sum of two convex functions, one of which is not necessarily
smooth. Our framework can handle stochastic approximations of the gradient of
the smooth function and allows for stochastic errors in the evaluation of the
proximity operator of the nonsmooth function. The almost sure convergence of
the iterates generated by the algorithm to a minimizer is established under
relatively mild assumptions. We also propose a stochastic version of a popular
primal-dual proximal splitting algorithm, establish its convergence, and apply
it to an online image restoration problem.
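
The following is a minimal sketch of a stochastic forward-backward iteration of
this kind, assuming an l1-regularized least-squares instance: f is the smooth
quadratic data-fit term, g = lam * ||.||_1 is the nonsmooth term, and
soft-thresholding is its proximity operator. A mini-batch gradient stands in for
the stochastic approximation of the gradient; the data, step-size schedule, and
batch size are illustrative assumptions, not the paper's exact conditions.

    import numpy as np

    def soft_threshold(v, t):
        # prox of t * ||.||_1: componentwise shrinkage toward zero
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(1)
    n, d = 500, 50
    A = rng.standard_normal((n, d))
    x_true = np.zeros(d); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(n)
    lam = 0.1

    x = np.zeros(d)
    for k in range(1, 3001):
        gamma = 1.0 / (10 + k)           # decaying step-size; a.s. convergence
                                         # results impose conditions of this type
        idx = rng.integers(n, size=32)   # mini-batch stochastic gradient of f
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
        # forward (stochastic gradient) step, then backward (prox) step
        x = soft_threshold(x - gamma * grad, gamma * lam)

    print(np.linalg.norm(x - x_true))    # x approaches a minimizer of f + g
                                         # (biased away from x_true by the penalty)

The stochastic primal-dual variant mentioned in the abstract extends the same
forward-backward pattern with a dual update, which is what makes it suitable for
composite terms such as those arising in image restoration.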