Traditional analyses of gradient descent show that when the largest
eigenvalue of the Hessian, also known as the sharpness S(θ), is bounded
by 2/η, where η is the step size, training is "stable" and the training loss
decreases monotonically. Recent works, however, have observed that this
assumption does not hold when training modern neural networks with full-batch
or large-batch
gradient descent. Most recently, Cohen et al. (2021) observed two important
phenomena. The first, dubbed progressive sharpening, is that the sharpness
steadily increases throughout training until it reaches the instability cutoff
2/η. The second, dubbed edge of stability, is that the sharpness hovers at
2/η for the remainder of training while the loss continues decreasing,
albeit non-monotonically.
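As a point of reference (this standard calculation is not part of the abstract itself), the 2/η cutoff can be read off from the quadratic case: along an eigenvector of the Hessian with eigenvalue λ, a gradient descent step acts as

    x_{t+1} = x_t − ηλ x_t = (1 − ηλ) x_t,

so that component shrinks when λ < 2/η and grows with alternating sign when λ > 2/η; the boundary λ = 2/η is exactly the instability cutoff referenced above.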
We demonstrate that, far from being chaotic, the dynamics of gradient descent
at the edge of stability can be captured by a cubic Taylor expansion: as the
iterates diverge in the direction of the top eigenvector of the Hessian due to
instability, the cubic term in the local Taylor expansion of the loss function
causes the curvature to decrease until stability is restored. This property,
which we call self-stabilization, is a general property of gradient descent and
explains its behavior at the edge of stability. A key consequence of
self-stabilization is that gradient descent at the edge of stability implicitly
follows projected gradient descent (PGD) under the constraint S(θ) ≤ 2/η.
Our analysis provides precise predictions for the loss, sharpness, and
deviation from the PGD trajectory throughout training, which we verify both
empirically in a number of standard settings and theoretically under mild
conditions. Our analysis uncovers the mechanism for gradient descent's implicit
bias towards stability.
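As an illustrative aside (a minimal sketch, not the authors' code), the quantities discussed above can be tracked directly during training: the snippet below runs full-batch gradient descent in PyTorch and estimates the sharpness S(θ), i.e. the top Hessian eigenvalue, via Hessian-vector-product power iteration, so that progressive sharpening and the hovering at 2/η can be observed. The names model, loss_fn, X, and y are placeholders assumed to be defined elsewhere.

    # Minimal sketch, not the paper's implementation: full-batch gradient descent
    # while monitoring the sharpness S(theta) = lambda_max(Hessian of the loss).
    # Assumes `model`, `loss_fn`, and full-batch tensors `X`, `y` exist.
    import torch

    def sharpness(model, loss_fn, X, y, iters=20):
        """Estimate the top Hessian eigenvalue of the full-batch loss."""
        params = [p for p in model.parameters() if p.requires_grad]
        loss = loss_fn(model(X), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        v = [torch.randn_like(p) for p in params]        # random starting direction
        for _ in range(iters):                           # power iteration with HVPs
            norm = torch.sqrt(sum((u * u).sum() for u in v))
            v = [u / norm for u in v]
            hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
            v = [h.detach() for h in hv]                 # v <- H v
        # ||H v|| for unit v approximates the largest Hessian eigenvalue
        return torch.sqrt(sum((u * u).sum() for u in v)).item()

    eta = 0.01                                           # step size; 2/eta is the cutoff
    params = [p for p in model.parameters() if p.requires_grad]
    for step in range(1000):
        loss = loss_fn(model(X), y)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= eta * g                             # one full-batch GD step
        if step % 50 == 0:
            print(step, loss.item(), sharpness(model, loss_fn, X, y), "cutoff:", 2 / eta)

In a setting exhibiting progressive sharpening, the printed sharpness would rise toward 2/η and then hover there while the loss keeps decreasing non-monotonically, matching the edge-of-stability behavior described above.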