Spherical Perspective on Learning with Batch Norm
Batch Normalization (BN) is a prominent deep learning technique. Despite its
apparent simplicity, its implications for optimization are not yet fully
understood. In this paper, we study the optimization of neural networks with BN
layers from a geometric perspective. We leverage the radial invariance of
groups of parameters, such as neurons for multi-layer perceptrons or filters
for convolutional neural networks, and translate several popular optimization
schemes onto the unit hypersphere. This formulation and the associated
geometric interpretation shed new light on the training dynamics and the
relation between different optimization schemes. In particular, we use it to
derive the effective learning rate of Adam and stochastic gradient descent
(SGD) with momentum, and we show that in the presence of BN layers, performing
SGD alone is actually equivalent to a variant of Adam constrained to the unit
hypersphere. Our analysis also leads us to introduce new variants of Adam. We
empirically show, over a variety of datasets and architectures, that they
improve accuracy in classification tasks. The complete source code for our
experiments is available at: https://github.com/ymontmarin/adamsr
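
To make the radial invariance concrete, below is a minimal sketch, assuming PyTorch is available; the architecture, batch shape, and rescaling constant are arbitrary illustrative choices. Rescaling a filter that feeds a BN layer by any positive constant leaves the output unchanged, so only the filter's direction on the unit hypersphere matters; for vanilla SGD, this same invariance makes the induced step on the sphere scale inversely with the squared weight norm, the simplest instance of the effective learning rates discussed above.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    conv = nn.Conv2d(3, 8, kernel_size=3, bias=False)
    bn = nn.BatchNorm2d(8, affine=False)  # affine scale/shift omitted for clarity
    x = torch.randn(16, 3, 32, 32)

    y_ref = bn(conv(x))

    # Rescale one filter -- one radially invariant group of parameters.
    with torch.no_grad():
        conv.weight[0].mul_(7.3)

    # BN normalizes with the batch mean and std, which absorb the scaling,
    # so the output depends only on the filter direction w / ||w||.
    y_scaled = bn(conv(x))
    print(torch.allclose(y_ref, y_scaled, atol=1e-4))  # True
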
Inductive Bias of Gradient Descent for Exponentially Weight Normalized Smooth Homogeneous Neural Nets
We analyze the inductive bias of gradient descent for weight normalized
smooth homogeneous neural nets when trained on exponential or cross-entropy
loss. Our analysis focuses on exponential weight normalization (EWN), which
encourages weight updates along the radial direction. This paper shows that the
gradient flow path with EWN is equivalent to gradient flow on standard networks
with an adaptive learning rate, and hence that EWN updates the weights in a
way that prefers asymptotic relative sparsity. These results can be extended to
hold for gradient descent via an appropriate adaptive learning rate. The
asymptotic convergence rate of the loss in this setting is given by
$\Theta\left(\frac{1}{t(\log t)^2}\right)$, and is independent of the depth of the
network. We contrast these results with the inductive bias of standard weight
normalization (SWN) and unnormalized architectures, and demonstrate their
implications on synthetic data sets. Experimental results on simple data sets
and architectures support our claim of sparse EWN solutions, even with SGD.
This demonstrates its potential application in learning prunable neural
networks.

Comment: We have modified Proposition 3, removing the extra assumptions,
resulting in a slightly less sharp instability result. We have also added a
figure showing the norm of the weights for SWN, EWN and NWN for the MNIST
training procedure (Appendix N, Figure 11). A few more references that use
SWN have been added to page 3. We have also fixed a few typos and grammatical
errors.
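
To illustrate the contrast between the two normalizations, here is a minimal sketch, assuming PyTorch and writing the parameterizations as w = g * v/||v|| for SWN and w = exp(s) * v/||v|| for EWN, consistent with the radial-update description above; the quadratic loss is purely illustrative. By the chain rule, the gradient with respect to the EWN log-scale s equals the SWN scale gradient times the scale itself, so gradient descent updates the weight norm multiplicatively, i.e. along the radial direction.

    import torch

    torch.manual_seed(0)
    v = torch.randn(5)  # direction parameters (kept fixed here)

    def loss_fn(w):
        # Toy quadratic loss standing in for the network loss.
        return ((w - torch.ones(5)) ** 2).sum()

    # Same initial effective scale (2.0) for both parameterizations.
    g = torch.tensor(2.0, requires_grad=True)              # SWN scale
    s = torch.log(torch.tensor(2.0)).requires_grad_(True)  # EWN log-scale

    loss_fn(g * v / v.norm()).backward()
    loss_fn(torch.exp(s) * v / v.norm()).backward()

    # dL/ds = dL/dg * e^s: the norm update is proportional to the norm
    # itself, a multiplicative (radial) update, unlike SWN's additive one.
    print(g.grad.item(), s.grad.item())
    assert torch.allclose(s.grad, g.grad * torch.exp(s))

Under gradient flow, this multiplicative dynamic lets the norms of different neurons grow at different relative rates, which is where the asymptotic relative sparsity, and hence the pruning potential noted above, comes from.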