Information in Infinite Ensembles of Infinitely-Wide Neural Networks
In this preliminary work, we study the generalization properties of infinite
ensembles of infinitely-wide neural networks. Amazingly, this model family
admits tractable calculations for many information-theoretic quantities. We
report analytical and empirical investigations in the search for signals that
correlate with generalization.
Comment: 2nd Symposium on Advances in Approximate Bayesian Inference, 2019
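One way to see why such quantities become tractable in this limit: the infinite ensemble of infinitely-wide networks behaves as a Gaussian process (the NNGP), whose Gaussian predictive distribution gives closed-form entropies. Below is a minimal sketch of that idea (not the authors' code), assuming the neural-tangents library; the architecture, data shapes, and noise variance are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): in the infinite-ensemble,
# infinite-width limit the predictive distribution is the NNGP Gaussian
# process, so information-theoretic quantities like predictive entropy
# have closed forms. Architecture, data, and noise scale are made up.
import jax.numpy as jnp
from jax import random
from neural_tangents import stax

_, _, kernel_fn = stax.serial(stax.Dense(512), stax.Erf(), stax.Dense(1))

x_train = random.normal(random.PRNGKey(0), (32, 8))
x_test = random.normal(random.PRNGKey(1), (4, 8))
noise = 1e-2  # hypothetical observation-noise variance

# Exact infinite-width prior covariances from the NNGP kernel.
k_xx = kernel_fn(x_train, x_train, 'nngp') + noise * jnp.eye(32)
k_tx = kernel_fn(x_test, x_train, 'nngp')
k_tt = kernel_fn(x_test, x_test, 'nngp')

# Posterior covariance of the ensemble's predictions at the test points
# (standard Gaussian-process conditioning; independent of the targets).
cov = k_tt - k_tx @ jnp.linalg.solve(k_xx, k_tx.T)

# Closed-form Gaussian predictive entropy, one value per test point.
entropy = 0.5 * jnp.log(2 * jnp.pi * jnp.e * jnp.diag(cov))
print(entropy)
```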
Global Convergence of SGD On Two Layer Neural Nets
In this note we demonstrate provable convergence of SGD to the global minima
of appropriately regularized empirical risk of depth-2 nets -- for
arbitrary data and with any number of gates, if they are using adequately
smooth and bounded activations like sigmoid and tanh. We build on the results
in [1] and leverage a constant amount of Frobenius norm regularization on the
weights, along with sampling of the initial weights from an appropriate
distribution. We also give a continuous time SGD convergence result that also
applies to smooth unbounded activations like SoftPlus. Our key idea is to show
the existence of loss functions on constant-sized neural nets which are "Villani
Functions". [1] Bin Shi, Weijie J. Su, and Michael I. Jordan. On learning rates
and schr\"odinger operators, 2020. arXiv:2004.06977Comment: 23 pages, 6 figures. Extended abstract accepted at DeepMath 2022. v2
update: New experiments added in Section 3.2 to study the effect of the
regularization value. Statement of Theorem 3.4 about SoftPlus nets has been
improved.
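To make the setup concrete, here is a minimal sketch (not the paper's construction) of the kind of objective the result concerns: SGD on the empirical risk of a depth-2 tanh net plus a constant Frobenius-norm penalty on the weights, starting from sampled Gaussian initial weights. The width, penalty constant, step size, initialization scale, and the use of squared loss are all illustrative assumptions; the paper prescribes its own loss, distribution, and regularization level.

```python
# Minimal sketch (not the paper's construction): SGD on the Frobenius-norm
# regularized empirical risk of a depth-2 tanh net. Width, data, penalty
# constant lam, step size, and the Gaussian initialization scale are all
# illustrative assumptions.
import jax
import jax.numpy as jnp
from jax import random

def net(params, x):
    w1, w2 = params
    return jnp.tanh(x @ w1) @ w2  # depth-2 net with tanh gates

def loss(params, x, y, lam=0.1):  # lam: constant regularization level
    w1, w2 = params
    risk = jnp.mean((net(params, x) - y) ** 2)
    reg = jnp.sum(w1 ** 2) + jnp.sum(w2 ** 2)  # squared Frobenius norms
    return risk + lam * reg

k1, k2, kx = random.split(random.PRNGKey(0), 3)
params = [random.normal(k1, (8, 64)) / jnp.sqrt(8.0),   # sampled initial weights
          random.normal(k2, (64, 1)) / jnp.sqrt(64.0)]
x = random.normal(kx, (128, 8))
y = jnp.sin(x.sum(axis=1, keepdims=True))  # arbitrary data, per the theorem

grad_fn = jax.jit(jax.grad(loss))
for step in range(1000):
    idx = random.permutation(random.PRNGKey(step), 128)[:32]  # minibatch
    grads = grad_fn(params, x[idx], y[idx])
    params = [p - 0.05 * g for p, g in zip(params, grads)]
```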