A unified variance-reduced accelerated gradient method for convex optimization
We propose a novel randomized incremental gradient algorithm, namely,
VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization.
Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag attains the optimal rates of convergence for smooth convex finite-sum problems directly, whether or not they are strongly convex. Moreover, Varag is the first accelerated randomized incremental gradient method that exploits the strong convexity of the data-fidelity term to achieve the optimal linear rate of convergence. It also establishes an optimal linear rate of convergence for a wide class of problems that satisfy only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).
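To make the finite-sum setting concrete: the objective has the form f(x) = (1/m) * sum_{i=1..m} f_i(x), and variance-reduced methods replace the full gradient with a cheap estimator built from a single randomly sampled component plus a periodically refreshed snapshot. The Python sketch below shows that generic SVRG-style estimator on a toy least-squares instance; the component functions, step size, and plain inner loop are illustrative assumptions, and it is not the Varag update itself, which additionally applies acceleration and the unified step-size policy described above.

    import numpy as np

    # Toy finite-sum objective f(x) = (1/m) * sum_i f_i(x) with least-squares
    # components f_i(x) = 0.5 * (a_i^T x - b_i)^2 (an illustrative choice,
    # not taken from the paper).
    rng = np.random.default_rng(0)
    m, d = 200, 10
    A = rng.standard_normal((m, d))
    b = rng.standard_normal(m)

    def grad_i(x, i):
        """Gradient of the i-th component f_i at x."""
        return A[i] * (A[i] @ x - b[i])

    def full_grad(x):
        """Exact gradient of the full finite-sum objective at x."""
        return A.T @ (A @ x - b) / m

    def vr_gradient(x, x_snap, g_snap, i):
        """SVRG-style variance-reduced estimator: unbiased for the full
        gradient, with variance that shrinks as x and x_snap approach the
        optimum. This is the shared building block, not the Varag step."""
        return grad_i(x, i) - grad_i(x_snap, i) + g_snap

    # One inner epoch of a plain (non-accelerated) variance-reduced loop,
    # shown only to illustrate how the estimator is used; the step size is a
    # hand-picked toy value, not the paper's unified step-size policy.
    x = np.zeros(d)
    x_snap = x.copy()
    g_snap = full_grad(x_snap)
    eta = 0.01
    for _ in range(m):
        i = rng.integers(m)
        x = x - eta * vr_gradient(x, x_snap, g_snap, i)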