PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
In this paper, we propose a novel stochastic gradient estimator --
ProbAbilistic Gradient Estimator (PAGE) -- for nonconvex optimization. PAGE is
easy to implement as it is designed via a small adjustment to vanilla SGD: in
each iteration, PAGE uses the vanilla minibatch SGD update with probability
$p_t$ or reuses the previous gradient with a small adjustment, at a much lower
computational cost, with probability $1-p_t$. We give a simple formula for the
optimal choice of $p_t$. Moreover, we prove the first tight lower bound
$\Omega(n + \frac{\sqrt{n}}{\epsilon^2})$ for nonconvex finite-sum problems,
which also leads to a tight lower bound $\Omega(b + \frac{\sqrt{b}}{\epsilon^2})$
for nonconvex online problems, where $b := \min\{\frac{\sigma^2}{\epsilon^2}, n\}$. Then, we show that PAGE obtains the optimal convergence results
$O(n + \frac{\sqrt{n}}{\epsilon^2})$ (finite-sum) and
$O(b + \frac{\sqrt{b}}{\epsilon^2})$ (online) matching our lower bounds for both
nonconvex finite-sum and online problems. Besides, we also show that for
nonconvex functions satisfying the Polyak-\L{}ojasiewicz (PL) condition, PAGE
can automatically switch to a faster linear convergence rate
$O(\cdot\log \frac{1}{\epsilon})$. Finally, we conduct several deep learning experiments
(e.g., LeNet, VGG, ResNet) on real datasets in PyTorch showing that PAGE not
only converges much faster than SGD in training but also achieves higher
test accuracy, validating the optimal theoretical results and confirming the
practical superiority of PAGE.

Comment: 25 pages; accepted by ICML 2021 (long talk)