Accelerating Rescaled Gradient Descent: Fast Optimization of Smooth Functions
We present a family of algorithms, called descent algorithms, for optimizing
convex and non-convex functions. We also introduce a new first-order algorithm,
called rescaled gradient descent (RGD), and show that RGD achieves a faster
convergence rate than gradient descent provided the function is strongly smooth
-- a natural generalization of the standard smoothness assumption on the
objective function. When the objective function is convex, we present two novel
frameworks for "accelerating" descent methods, one in the style of Nesterov and
the other in the style of Monteiro and Svaiter, using a single Lyapunov function.
Rescaled gradient descent can be accelerated under the same strong smoothness
assumption using both frameworks. We provide several examples of strongly
smooth loss functions in machine learning and numerical experiments that verify
our theoretical findings. We also present several extensions of our novel
Lyapunov framework, including deriving optimal universal tensor methods and
extending our framework to the coordinate setting.
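The abstract does not spell out the RGD update itself. As a rough illustration of the idea, the sketch below assumes a rescaled-gradient-style step that divides the gradient by a power (p - 2)/(p - 1) of its norm, so that p = 2 recovers plain gradient descent; the parameters eta and p, and the quartic test objective, are hypothetical choices for illustration and may differ from the method analyzed in the paper.

```python
import numpy as np

def rescaled_gradient_step(x, grad, eta=0.1, p=4):
    """One step of a rescaled-gradient-style update (sketch only).

    Assumption: the step normalizes the gradient by ||g||^((p - 2)/(p - 1)),
    so p = 2 reduces to ordinary gradient descent. The exact scaling and
    step-size rule used in the paper may differ.
    """
    g = grad(x)
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return x  # already at a stationary point
    return x - eta * g / norm ** ((p - 2) / (p - 1))

# Hypothetical strongly smooth objective for illustration: f(x) = ||x||^4 / 4,
# with gradient ||x||^2 * x.
f = lambda x: 0.25 * np.linalg.norm(x) ** 4
grad_f = lambda x: np.linalg.norm(x) ** 2 * x

x = np.array([2.0, -1.0])
for _ in range(100):
    x = rescaled_gradient_step(x, grad_f, eta=0.5, p=4)
print(f(x))  # approaches 0 as the iterates converge to the minimizer
```

For this particular quartic objective the rescaled step contracts the iterate by a constant factor each iteration, which is the kind of behavior the strong smoothness assumption is meant to capture; none of this substitutes for the convergence analysis in the paper itself.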