Why neural networks find simple solutions: the many regularizers of geometric complexity
In many contexts, simpler models are preferable to more complex models, and
controlling this model complexity is the goal of many methods in machine
learning, such as regularization, hyperparameter tuning, and architecture design.
In deep learning, it has been difficult to understand the underlying mechanisms
of complexity control, since many traditional measures are not naturally
suitable for deep neural networks. Here we develop the notion of geometric
complexity, which is a measure of the variability of the model function,
computed using a discrete Dirichlet energy. Using a combination of theoretical
arguments and empirical results, we show that many common training heuristics
such as parameter norm regularization, spectral norm regularization, flatness
regularization, implicit gradient regularization, noise regularization and the
choice of parameter initialization all act to control geometric complexity,
providing a unifying framework in which to characterize the behavior of deep
learning models.
Comment: Accepted as a NeurIPS 2022 paper
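A minimal sketch of one plausible reading of the measure described above, assuming geometric complexity is computed as the batch-averaged squared Frobenius norm of the input-output Jacobian (a discrete Dirichlet energy over the data); the function name and network are illustrative, not the authors' code.

import torch
import torch.nn as nn

def geometric_complexity(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Mean squared norm of d f(x) / d x over the batch x (discrete Dirichlet energy)."""
    x = x.clone().requires_grad_(True)
    out = model(x).reshape(x.shape[0], -1)      # (batch, num_outputs)
    total = torch.zeros((), device=x.device)
    for k in range(out.shape[1]):               # one backward pass per output dimension
        grads = torch.autograd.grad(out[:, k].sum(), x, retain_graph=True)[0]
        total = total + grads.pow(2).sum()      # accumulate squared Jacobian entries
    return total / x.shape[0]                   # average over the batch

# Hypothetical usage on a small MLP:
net = nn.Sequential(nn.Linear(20, 64), nn.Tanh(), nn.Linear(64, 3))
gc = geometric_complexity(net, torch.randn(8, 20))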
Do Neural Networks Generalize from Self-Averaging Sub-classifiers in the Same Way As Adaptive Boosting?
In recent years, neural networks (NNs) have made giant leaps in a wide
variety of domains. NNs are often referred to as black box algorithms due to
how little we can explain their empirical success. Our foundational research
seeks to explain why neural networks generalize. A recent advancement derived a
mutual information measure for explaining the performance of deep NNs through a
sequence of increasingly complex functions. We show deep NNs learn a series of
boosted classifiers whose generalization is popularly attributed to
self-averaging over an increasing number of interpolating sub-classifiers. To
our knowledge, we are the first authors to establish the connection between
generalization in boosted classifiers and generalization in deep NNs. Our
experimental evidence and theoretical analysis suggest that NNs trained with
dropout exhibit self-averaging behavior over interpolating sub-classifiers
similar to that cited in popular explanations for the post-interpolation
generalization phenomenon in boosting.
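An illustrative sketch of the self-averaging view referred to above: keeping dropout active at prediction time and averaging over sampled sub-networks, analogous to averaging an ensemble of boosted sub-classifiers. The model, sample count, and input shape are hypothetical, not taken from the paper.

import torch
import torch.nn as nn

def self_averaged_predict(model: nn.Module, x: torch.Tensor,
                          num_samples: int = 32) -> torch.Tensor:
    """Average class probabilities over dropout-sampled sub-classifiers."""
    model.train()                      # keep dropout masks active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(num_samples)])
    model.eval()
    return probs.mean(dim=0)           # ensemble-style average over sub-networks

# Hypothetical sub-classifier family: a small MLP with dropout between layers.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Dropout(0.5),
                    nn.Linear(256, 10))
avg_probs = self_averaged_predict(net, torch.randn(8, 784))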