Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum
In the first part of this dissertation research, we develop a modular
framework that can serve as a recipe for constructing and analyzing iterative
algorithms for convex optimization. Specifically, our work casts optimization
as iteratively playing a two-player zero-sum game. Many existing optimization
algorithms, including Frank-Wolfe and Nesterov's accelerated methods, can be
recovered from the game by pitting two online learners with appropriate
strategies against each other. Furthermore, the sum of the players' weighted
average regrets yields the convergence rate. As a result, our approach provides
simple alternative proofs for these algorithms. Moreover, we demonstrate that
viewing optimization as iteratively playing a game leads to three new fast
Frank-Wolfe-like algorithms for certain constraint sets, which further shows
that our framework is indeed generic, modular, and easy to use.
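For reference, the classical Frank-Wolfe (conditional-gradient) iteration that the framework recovers can be sketched as follows; this is a minimal illustration over the probability simplex with illustrative names, not the dissertation's new variants.

    import numpy as np

    def frank_wolfe_simplex(grad, x0, num_iters=100):
        """Classical Frank-Wolfe over the probability simplex.

        grad: callable returning the gradient of the objective at a point.
        x0:   a feasible starting point on the simplex.
        """
        x = x0.copy()
        for t in range(num_iters):
            g = grad(x)
            # Linear minimization oracle over the simplex: the minimizing
            # vertex is the basis vector of the smallest gradient coordinate.
            v = np.zeros_like(x)
            v[np.argmin(g)] = 1.0
            # Standard step-size schedule for Frank-Wolfe.
            gamma = 2.0 / (t + 2)
            x = (1 - gamma) * x + gamma * v
        return x

    # Example: minimize f(x) = ||x - b||^2 over the simplex (illustrative only).
    b = np.array([0.1, 0.7, 0.2])
    x_approx = frank_wolfe_simplex(lambda x: 2 * (x - b), np.ones(3) / 3)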
In the second part, we develop a modular analysis of provable acceleration
via Polyak's momentum for certain problems, which include solving the classical
strongly convex quadratic problems, training a wide ReLU network under the
neural tangent kernel regime, and training a deep linear network with an
orthogonal initialization. We develop a meta theorem and show that, when
Polyak's momentum is applied to these problems, the induced dynamics exhibit a
form to which the meta theorem applies directly.
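For context, Polyak's (heavy-ball) momentum update and the induced dynamics it produces on a strongly convex quadratic can be sketched as follows; the notation (step size eta, momentum parameter beta) is standard and not taken from the dissertation.

    x_{t+1} = x_t - \eta \nabla f(x_t) + \beta (x_t - x_{t-1}),

    \begin{bmatrix} x_{t+1} - x^* \\ x_t - x^* \end{bmatrix}
    =
    \begin{bmatrix} (1+\beta) I - \eta A & -\beta I \\ I & 0 \end{bmatrix}
    \begin{bmatrix} x_t - x^* \\ x_{t-1} - x^* \end{bmatrix}
    \qquad \text{for } f(x) = \tfrac{1}{2} x^\top A x - b^\top x,\ x^* = A^{-1} b.

Controlling the powers of a fixed matrix of this kind is the sort of step a meta theorem of this form can encapsulate.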
In the last part of the dissertation, we show another advantage of the use of
Polyak's momentum -- it facilitates fast saddle point escape in smooth
non-convex optimization. This result, together with those of the second part,
sheds new light on Polyak's momentum in modern non-convex optimization and deep
learning.

Comment: PhD dissertation at Georgia Tech. arXiv admin note: text overlap with
arXiv:2010.0161
Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization
Quasar convexity is a condition that allows some first-order methods to
efficiently minimize a function even when the optimization landscape is
non-convex.
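For concreteness, the standard definition used in this line of work is the following: a differentiable function f is gamma-quasar convex with respect to a minimizer x*, for some gamma in (0, 1], if

    f(x^*) \ge f(x) + \frac{1}{\gamma} \langle \nabla f(x),\, x^* - x \rangle
    \qquad \text{for all } x;

taking gamma = 1 recovers star convexity.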
Previous works have developed near-optimal accelerated algorithms for
minimizing this class of functions; however, they require a binary-search
subroutine that incurs multiple gradient evaluations in each iteration, so the
total number of gradient evaluations does not match a known lower bound. In
this work, we show that a recently proposed continuized Nesterov acceleration
can be applied to minimizing quasar convex functions and achieves the optimal
bound with high probability. Furthermore,
we find that the objective functions arising in training generalized linear
models (GLMs) satisfy quasar convexity, which broadens the applicability of the
relevant algorithms, given that known practical examples of quasar convexity in
non-convex learning are sparse in the literature. We also show that if a smooth
and one-point strongly convex, Polyak-Lojasiewicz, or quadratic-growth function
satisfies quasar convexity, then an accelerated linear rate for minimizing the
function is attainable under certain conditions, whereas acceleration is not
known in general for these classes of functions.

Comment: Accepted at ICLR (International Conference on Learning
Representations), 202