Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes
The infinitely wide neural network has proven to be a useful and manageable mathematical model for understanding many phenomena that appear in deep learning. One example is the convergence of random deep networks to Gaussian processes, which enables a rigorous analysis of how the choice of activation function and network weights affects the training dynamics. In this paper, we extend the seminal proof of Matthews et al. (2018) to a larger class of initial weight distributions (which we call pseudo-iid), including the established cases of iid and orthogonal weights as well as the emerging low-rank and structured sparse settings celebrated for their computational speed-up benefits. We show that fully-connected and convolutional networks initialized with pseudo-iid distributions are all effectively equivalent up to their variance. Using our results, one can identify the Edge of Chaos for a broader class of neural networks and tune them at criticality to enhance their training. Moreover, our results make the posterior distribution of Bayesian Neural Networks tractable across these various initialization schemes.
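
The convergence to a Gaussian process described in the abstract can be probed empirically. Below is a minimal NumPy/SciPy sketch, not code from the paper: it draws the weights of one wide fully-connected layer under an iid Gaussian scheme, a random-orthogonal scheme, and a simplified random-mask sparse scheme (a stand-in for the paper's structured sparse setting), then checks how close the resulting pre-activations are to a Gaussian. The width, sparsity level, and variance scale sigma_w are illustrative assumptions.

# Minimal sketch (not from the paper): compare pre-activation statistics of a wide
# random layer under three weight schemes. All numerical choices are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 4000                                # layer width (fan-in and fan-out)
sigma_w = 1.0                           # weight scale: Var(W_ij) = sigma_w^2 / n
x = rng.standard_normal(n)              # a fixed input vector

def iid_gaussian():
    return sigma_w / np.sqrt(n) * rng.standard_normal((n, n))

def random_orthogonal():
    # Haar-distributed orthogonal matrix via QR of a Gaussian matrix;
    # its rows have unit norm, matching the iid scaling when sigma_w = 1
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return sigma_w * q * np.sign(np.diag(r))

def random_mask_sparse(density=0.1):
    # keep a random fraction of entries and rescale so that, on average,
    # Var(W_ij) = sigma_w^2 / n (a simplification of structured sparsity)
    mask = rng.random((n, n)) < density
    return sigma_w / np.sqrt(density * n) * (rng.standard_normal((n, n)) * mask)

for name, sampler in [("iid Gaussian", iid_gaussian),
                      ("orthogonal", random_orthogonal),
                      ("random-mask sparse", random_mask_sparse)]:
    z = sampler() @ x                   # pre-activations across the n output units
    z = (z - z.mean()) / z.std()
    # Kolmogorov-Smirnov distance to N(0, 1) as a crude Gaussianity check
    ks = stats.kstest(z, "norm").statistic
    print(f"{name:>18}: KS distance to N(0,1) = {ks:.3f}")

At large width, all three schemes produce similarly small KS distances, consistent with the abstract's claim that these initializations are effectively equivalent up to their variance.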