Deep learning experiments by Cohen et al. [2021] using deterministic Gradient
Descent (GD) revealed an Edge of Stability (EoS) phase when learning rate (LR)
and sharpness (i.e., the largest eigenvalue of the Hessian) no longer behave as in
traditional optimization. Sharpness stabilizes around 2/LR, and the loss goes up
and down across iterations, yet still with an overall downward trend. The
current paper mathematically analyzes a new mechanism of implicit
regularization in the EoS phase, whereby GD updates arising from the non-smooth
loss landscape turn out to evolve along a deterministic flow on the manifold of
minimum loss. This is in contrast to many previous results about implicit bias,
which rely on either infinitesimal updates or noise in the gradient. Formally, for
any smooth function L satisfying certain regularity conditions, this effect is
demonstrated for (1) Normalized GD, i.e., GD with a varying LR
$\eta_t = \eta / \|\nabla L(x(t))\|$ and loss $L$; (2) GD with constant LR and
loss $\sqrt{L(x) - \min_x L(x)}$. Both provably enter the Edge of Stability, with
the associated flow on the manifold minimizing $\lambda_1(\nabla^2 L)$. The
above theoretical results have been corroborated by an experimental study.

Comment: 63 pages. This paper has been accepted for conference proceedings in
the 39th International Conference on Machine Learning (ICML), 2022.
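As a rough illustration of the two settings above, here is a minimal sketch in
JAX; it is not from the paper. The toy loss L(x, y) = (xy - 1)^2 / 2, whose
zero-loss set is the one-dimensional manifold {xy = 1}, as well as the step
size, iteration counts, and function names are all illustrative assumptions.

import jax
import jax.numpy as jnp

def L(x):
    # Assumed toy loss: zero exactly on the manifold {x[0] * x[1] = 1}.
    return 0.5 * (x[0] * x[1] - 1.0) ** 2

def sharpness(f, x):
    # lambda_1 of the Hessian of f at x, the paper's notion of sharpness.
    return jnp.linalg.eigvalsh(jax.hessian(f)(x))[-1]

def normalized_gd(x, eta=0.05, steps=300):
    # Setting (1): varying LR eta_t = eta / ||grad L(x_t)||, applied to L.
    g_fn = jax.grad(L)
    for _ in range(steps):
        g = g_fn(x)
        x = x - eta * g / jnp.linalg.norm(g)
    return x

def sqrt_loss_gd(x, eta=0.05, steps=300):
    # Setting (2): constant LR on sqrt(L - min L) = sqrt(L) here; this loss
    # is non-smooth exactly on the minimum-loss manifold.
    g_fn = jax.grad(lambda z: jnp.sqrt(L(z)))
    for _ in range(steps):
        x = x - eta * g_fn(x)
    return x

x0 = jnp.array([2.0, 1.5])
for run in (normalized_gd, sqrt_loss_gd):
    x = run(x0)
    print(run.__name__, x, "sharpness of L:", sharpness(L, x))

On this toy loss, the sharpness of L at a point (a, 1/a) on the manifold is
a^2 + 1/a^2, minimized at (1, 1). Per the theory above, one would expect both
runs to oscillate around {xy = 1} in EoS fashion while drifting toward that
minimal-sharpness point.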