Random deep neural networks are biased towards simple functions
We prove that the binary classifiers of bit strings generated by random wide
deep neural networks with ReLU activation function are biased towards simple
functions. The simplicity is captured by the following two properties. For any
given input bit string, the average Hamming distance to the closest input bit
string with a different classification is at least sqrt(n / (2π log n)),
where n is the length of the string. Moreover, if the bits of the initial
string are flipped randomly, the average number of flips required to change the
classification grows linearly with n. These results are confirmed by numerical
experiments on deep neural networks with two hidden layers, and settle the
conjecture stating that random deep neural networks are biased towards simple
functions. This conjecture was proposed and numerically explored in [Valle
Pérez et al., ICLR 2019] to explain the unreasonably good generalization
properties of deep learning algorithms. The probability distribution of the
functions generated by random deep neural networks is a good choice for the
prior probability distribution in the PAC-Bayesian generalization bounds. Our
results constitute a fundamental step forward in the characterization of this
distribution, thereby contributing to the understanding of the generalization
properties of deep learning algorithms.
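The bit-flipping experiment described above can be sketched numerically. The widths, initialization scales, and sequential-flip procedure below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(n, width=512, rng=rng):
    """A random two-hidden-layer ReLU binary classifier of n-bit strings
    (He-style Gaussian init; width and scales are illustrative choices)."""
    W1 = rng.normal(0, np.sqrt(2 / n), (width, n))
    W2 = rng.normal(0, np.sqrt(2 / width), (width, width))
    w3 = rng.normal(0, np.sqrt(2 / width), width)
    def f(x):
        h = np.maximum(W1 @ x, 0)
        h = np.maximum(W2 @ h, 0)
        return np.sign(w3 @ h)  # binary classification of the bit string
    return f

def flips_to_change(f, x, rng=rng):
    """Flip uniformly chosen bits (without repetition) until the label flips."""
    y0, x = f(x), x.copy()
    for k, i in enumerate(rng.permutation(len(x)), 1):
        x[i] = 1.0 - x[i]
        if f(x) != y0:
            return k
    return len(x)

n = 64
f = random_relu_net(n)
trials = [flips_to_change(f, rng.integers(0, 2, n).astype(float))
          for _ in range(50)]
print(np.mean(trials))  # paper: this average grows linearly with n
```

Repeating the run for several values of n would show the claimed linear growth of the average number of flips.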
IST Austria Thesis
Deep learning is best known for its empirical success across a wide range of applications
spanning computer vision, natural language processing and speech. Of equal significance,
though perhaps less known, are its ramifications for learning theory: deep networks have
been observed to perform surprisingly well in the high-capacity regime, aka the overfitting
or underspecified regime. Classically, this regime on the far right of the bias-variance curve
is associated with poor generalisation; however, recent experiments with deep networks
challenge this view.
This thesis is devoted to investigating various aspects of underspecification in deep learning.
First, we argue that deep learning models are underspecified on two levels: a) any given
training dataset can be fit by many different functions, and b) any given function can be
expressed by many different parameter configurations. We refer to the second kind of
underspecification as parameterisation redundancy and we precisely characterise its extent.
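One concrete source of parameterisation redundancy is the positive rescaling symmetry of ReLU units, which can be checked directly; the small network below is a hypothetical illustration, not the thesis's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, b1, w2, x):
    """One-hidden-layer ReLU network."""
    return w2 @ np.maximum(W1 @ x + b1, 0)

W1, b1, w2 = rng.normal(size=(8, 3)), rng.normal(size=8), rng.normal(size=8)

# Scale hidden unit i by c_i > 0 and divide its outgoing weight by c_i:
# ReLU(c*z) = c*ReLU(z) for c > 0, so the computed function is unchanged
# even though the parameter configuration is different.
c = rng.uniform(0.5, 2.0, size=8)
W1r, b1r, w2r = c[:, None] * W1, c * b1, w2 / c

x = rng.normal(size=3)
print(np.allclose(forward(W1, b1, w2, x), forward(W1r, b1r, w2r, x)))  # True
```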
Second, we characterise the implicit criteria (the inductive bias) that guide learning in the
underspecified regime. Specifically, we consider a nonlinear but tractable classification
setting, and show that given the choice, neural networks learn classifiers with a large margin.
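The large-margin phenomenon can be illustrated in the simplest linear case, where gradient descent on the logistic loss is known to drive the normalized weights toward a maximum-margin separator; the data, step size, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable two-class Gaussian data (illustrative).
X = np.vstack([rng.normal(3, 1, (50, 2)), rng.normal(-3, 1, (50, 2))])
y = np.r_[np.ones(50), -np.ones(50)]

# Plain gradient descent on the logistic loss; with separable data the
# weight norm diverges while the direction approaches max margin.
w = np.zeros(2)
for _ in range(5000):
    grad = -(y / (1 + np.exp(y * (X @ w)))) @ X / len(y)
    w -= 0.1 * grad

# Normalized margin: smallest signed distance of a point to the boundary.
margin = np.min(y * (X @ w) / np.linalg.norm(w))
print(margin > 0)  # every training point lies on the correct side
```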
Third, we consider learning scenarios where the inductive bias is not by itself sufficient to
deal with underspecification. We then study different ways of ‘tightening the specification’: i)
In the setting of representation learning with variational autoencoders, we propose a hand-
crafted regulariser based on mutual information. ii) In the setting of binary classification, we
consider soft-label (real-valued) supervision. We derive a generalisation bound for linear
networks supervised in this way and verify that soft labels facilitate fast learning. Finally, we
explore an application of soft-label supervision to the training of multi-exit models
- …
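A rough intuition for why soft labels can help: real-valued targets carry margin information that hard labels discard. The teacher model and least-squares fit below are illustrative assumptions, not the thesis's setting or its generalisation bound:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher: scores s = X @ w_star; hard labels keep only sign(s),
# while soft labels retain how confident the teacher is.
n, d = 200, 20
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
s = X @ w_star
hard = np.sign(s)
soft = np.tanh(s)  # a real-valued "soft" target in (-1, 1)

# Fit a linear model to each target type by least squares.
w_hard = np.linalg.lstsq(X, hard, rcond=None)[0]
w_soft = np.linalg.lstsq(X, soft, rcond=None)[0]

# Both fits align positively with the teacher direction w_star.
cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos(w_hard, w_star), cos(w_soft, w_star))
```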