Batch Normalization Preconditioning for Neural Network Training

Abstract

Batch normalization (BN) is a popular and ubiquitous method in deep learning that has been shown to decrease training time and improve the generalization performance of neural networks. Despite its success, BN is not theoretically well understood, and it is not suitable for use with very small mini-batch sizes or online learning. In this work, we propose a new method called Batch Normalization Preconditioning (BNP). Instead of applying normalization explicitly through a batch normalization layer as is done in BN, BNP applies normalization by conditioning the parameter gradients directly during training. This is designed to improve the conditioning of the Hessian matrix of the loss function and hence convergence during training. One benefit is that BNP is not constrained by the mini-batch size and works in the online learning setting. We also extend this technique to Bayesian neural networks, which are networks that have probability distributions over the weights and biases instead of single fixed values. In particular, we apply BNP to stochastic gradient Langevin dynamics (SGLD), which is a standard sampling technique for uncertainty estimation in Bayesian neural networks.
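To illustrate the general idea of preconditioning gradients with mini-batch statistics (not the paper's actual BNP algorithm; the layer setup, loss, and diagonal preconditioner below are simplifying assumptions for a single linear layer), a minimal sketch:

```python
import numpy as np

# Hypothetical single linear layer y = W x + b trained with 0.5 * mean squared error.
rng = np.random.default_rng(0)
n_in, n_out, batch = 4, 2, 32
W = rng.normal(size=(n_out, n_in))
b = np.zeros(n_out)

def preconditioned_step(X, Y, W, b, lr=0.1, eps=1e-5):
    """One SGD step in which the weight gradient is rescaled by the inverse
    per-feature variance of the mini-batch inputs: a simple diagonal
    preconditioner built from batch statistics, in the spirit of conditioning
    gradients rather than inserting an explicit normalization layer."""
    var = X.var(axis=0)                  # per-input-feature variance of the mini-batch
    out = X @ W.T + b
    err = out - Y                        # dL/d(out) for 0.5 * MSE
    grad_W = err.T @ X / len(X)
    grad_b = err.mean(axis=0)
    # Precondition: divide each input-feature column of grad_W by var + eps,
    # which compensates for differently scaled input features.
    grad_W = grad_W / (var + eps)
    W = W - lr * grad_W
    b = b - lr * grad_b
    return W, b

# Inputs with deliberately mismatched feature scales, where plain SGD is poorly conditioned.
X = rng.normal(loc=3.0, scale=[1.0, 5.0, 0.5, 2.0], size=(batch, n_in))
Y = X @ rng.normal(size=(n_in, n_out))
W, b = preconditioned_step(X, Y, W, b)
```

Because the preconditioner only rescales the gradient and never alters the forward pass, the same update rule applies for a mini-batch of any size, including a single example, which is the property the abstract highlights for online learning.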