Explaining the Adaptive Generalisation Gap
We conjecture that the inherent difference in generalisation between adaptive
and non-adaptive gradient methods stems from the increased estimation noise in
the flattest directions of the true loss surface. We demonstrate that typical
schedules used for adaptive methods (with low numerical stability or damping
constants) serve to bias movement towards flat directions relative to sharp
directions, effectively amplifying the noise-to-signal ratio and harming
generalisation. We further demonstrate that the numerical stability/damping
constant used in these methods can be decomposed into a learning rate reduction
and linear shrinkage of the estimated curvature matrix. We then demonstrate
significant generalisation improvements by increasing the shrinkage
coefficient, closing the generalisation gap entirely in both Logistic
Regression and Deep Neural Network experiments. Finally, we show that other
popular modifications to adaptive methods, such as decoupled weight decay and
partial adaptivity, calibrate parameter updates to make better use of sharper,
more reliable directions.
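
As an illustration of the decomposition claimed above, here is a minimal sketch; the symbols (B for the estimated curvature matrix, delta for the damping constant, eta for the learning rate, g for the stochastic gradient) are assumed notation, not quoted from the paper. A damped adaptive step eta (B + delta I)^{-1} g is exactly a reduced learning rate applied to a linearly shrunk curvature estimate:

% Assumed notation (not quoted from the paper): B = estimated curvature matrix,
% delta = damping / numerical-stability constant, eta = learning rate.
% Damped adaptive update: theta <- theta - eta (B + delta I)^{-1} g
\[
\eta\,(\mathbf{B} + \delta\mathbf{I})^{-1}
  \;=\; \underbrace{\frac{\eta}{1+\delta}}_{\text{learning-rate reduction}}
  \Bigl(\underbrace{\tfrac{1}{1+\delta}\,\mathbf{B}
     + \tfrac{\delta}{1+\delta}\,\mathbf{I}}_{\text{linear shrinkage of }\mathbf{B}\text{ towards }\mathbf{I}}\Bigr)^{-1},
\]

so increasing the shrinkage coefficient delta/(1+delta) pulls the curvature estimate towards the isotropic identity and tempers the amplification of flat, noise-dominated directions.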
On the Theoretical Properties of Noise Correlation in Stochastic Optimization
Studying the properties of stochastic noise to optimize complex non-convex
functions has been an active area of research in the field of machine learning.
Prior work has shown that the noise of stochastic gradient descent improves
optimization by overcoming undesirable obstacles in the landscape. Moreover,
injecting artificial Gaussian noise has become a popular idea to quickly escape
saddle points. Indeed, in the absence of reliable gradient information, the
noise is used to explore the landscape, but it is unclear what type of noise is
optimal in terms of exploration ability. In order to narrow this gap in our
knowledge, we study a general type of continuous-time non-Markovian process,
based on fractional Brownian motion, that allows for the increments of the
process to be correlated. This generalizes processes based on Brownian motion,
such as the Ornstein-Uhlenbeck process. We demonstrate how to discretize such
processes, which gives rise to the new algorithm fPGD. This method is a
generalization of the known algorithms PGD and Anti-PGD. We study the
properties of fPGD both theoretically and empirically, demonstrating that it
possesses exploration abilities that, in some cases, compare favorably with PGD
and Anti-PGD. These results open the field to novel ways of exploiting noise
for training machine learning models.
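
The sketch below is illustrative only and is not the paper's fPGD algorithm: it shows perturbed gradient descent in which the injected noise increments are fractional Gaussian noise with Hurst parameter H, so successive perturbations are correlated across iterations (H = 0.5 recovers uncorrelated, PGD-style noise; H < 0.5 gives negatively correlated increments, qualitatively in the spirit of Anti-PGD). The function names, step sizes, and the Cholesky-based sampler are assumptions made for this example.

# Illustrative sketch only, not the authors' fPGD: gradient descent perturbed
# by correlated (fractional Gaussian) noise. All names and defaults here are
# assumptions for the example.
import numpy as np

def fgn_increments(n_steps, hurst, rng):
    """Sample n_steps fractional Gaussian noise increments (Hurst parameter hurst)."""
    k = np.arange(n_steps)
    # Autocovariance of unit-variance fractional Gaussian noise at lag k.
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2.0 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]   # Toeplitz covariance matrix
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n_steps))
    return chol @ rng.standard_normal(n_steps)

def correlated_pgd(grad_fn, theta0, lr=0.1, noise_scale=0.01,
                   hurst=0.7, n_steps=200, seed=0):
    """Gradient descent with correlated Gaussian perturbations on each step."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    # One independent stream of correlated increments per parameter coordinate.
    noise = np.stack([fgn_increments(n_steps, hurst, rng)
                      for _ in range(theta.size)], axis=1)
    for t in range(n_steps):
        theta = theta - lr * grad_fn(theta) + noise_scale * noise[t]
    return theta

# Example use: the origin is a critical point of f(x) = (||x||^2 - 1)^2;
# plain gradient descent started there never moves, the injected noise does.
grad = lambda x: 4.0 * x * (x @ x - 1.0)
print(correlated_pgd(grad, theta0=[0.0, 0.0], hurst=0.7))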