Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
The learning rate schedule can significantly affect generalization performance in
modern neural networks, but the reasons for this are not yet understood.
Li, Wei, and Ma (2019) recently proved that this behavior can arise in a simplified
non-convex neural-network setting. In this note, we show that the phenomenon
can occur even for convex learning problems -- in particular, for linear regression
in 2 dimensions.
We give a toy convex problem where learning rate annealing (a large initial
learning rate followed by a small learning rate) can lead gradient descent to
minima with provably better generalization than using a small learning rate
throughout. In our setting, this occurs due to a combination of a mismatch
between the train and test loss landscapes and early stopping.

Comment: 4 pages plus appendix
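To make the mechanism concrete, the following is a minimal sketch of a problem of this flavor, not the paper's exact construction: the train and test losses are convex quadratics in 2 dimensions with mismatched curvature, gradient descent is early-stopped after a fixed step budget, and all constants (the Hessians, step sizes, and iteration count) are illustrative assumptions.

```python
import numpy as np

# Train loss: 0.5 * w^T H_train w -- sharp in coordinate 0, flat in coordinate 1.
H_train = np.diag([100.0, 1.0])
# Test loss: mismatched curvature -- the flat train direction is sharp at test time.
# (Illustrative assumption, not the paper's exact landscapes.)
H_test = np.diag([1.0, 100.0])

def train_grad(w):
    return H_train @ w

def test_loss(w):
    return 0.5 * w @ H_test @ w

def gd(w0, lr_schedule):
    """Run gradient descent on the train loss for one step per entry of lr_schedule."""
    w = w0.copy()
    for lr in lr_schedule:
        w = w - lr * train_grad(w)
    return w

w0 = np.array([1.0, 1.0])
T = 100  # fixed step budget: this plays the role of early stopping

# Constant small learning rate: stable, but makes little progress along the
# flat train direction within the budget.
w_small = gd(w0, [0.005] * T)

# Annealed: a large initial learning rate (just below the stability threshold
# 2/100 for the sharp train direction) drives the flat direction down quickly;
# a small learning rate then cleans up the sharp direction.
w_annealed = gd(w0, [0.019] * (T // 2) + [0.005] * (T // 2))

print("test loss, small LR throughout:", test_loss(w_small))
print("test loss, annealed LR:        ", test_loss(w_annealed))
```

With these (assumed) constants, the annealed schedule reaches a noticeably lower test loss than the constant small learning rate under the same step budget, illustrating how the train/test curvature mismatch and the early-stopping budget interact in the phenomenon the note studies.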