Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach
Modern neural networks are highly overparameterized, with capacity to
substantially overfit to training data. Nevertheless, these networks often
generalize well in practice. It has also been observed that trained networks
can often be "compressed" to much smaller representations. The purpose of this
paper is to connect these two empirical observations. Our main technical result
is a generalization bound for compressed networks based on the compressed size.
Combined with off-the-shelf compression algorithms, the bound leads to
state-of-the-art generalization guarantees; in particular, we provide the first
non-vacuous generalization guarantees for realistic architectures applied to
the ImageNet classification problem. As additional evidence connecting
compression and generalization, we show that compressibility of models that
tend to overfit is limited: We establish an absolute limit on expected
compressibility as a function of expected generalization error, where the
expectations are over the random choice of training examples. The bounds are
complemented by empirical results showing that an increase in overfitting implies
an increase in the number of bits required to describe a trained network.
Comment: 16 pages, 1 figure. Accepted at ICLR 2019
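A minimal sketch of the mechanism behind this kind of result, assuming a prefix-code ("Occam") prior and the Langford-Seeger form of the PAC-Bayes inequality; this is an illustration of how a compressed size turns into a risk bound, not the paper's exact quantization-and-coding pipeline, and the numbers below are placeholders.

```python
import math

def kl_bernoulli(p, q):
    """Binary KL divergence kl(p || q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_inverse(emp_risk, budget, iters=100):
    """Largest q with kl(emp_risk || q) <= budget, found by bisection on [emp_risk, 1]."""
    lo, hi = emp_risk, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return hi

def occam_pac_bayes_bound(emp_risk, code_bits, n, delta=0.05):
    """Upper bound on true risk for a model described in `code_bits` bits.

    A prefix-code prior P(h) = 2^(-code_bits) gives KL(Q || P) = code_bits * ln 2
    for a point-mass posterior Q, plugged into the Langford-Seeger PAC-Bayes bound
    kl(emp_risk || true_risk) <= (KL + ln(2 sqrt(n) / delta)) / n.
    """
    budget = (code_bits * math.log(2) + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, budget)

# Illustrative numbers only (not the paper's): a network compressed to ~150k bits
# with 25% training error on 1.2M examples already gives a non-vacuous bound.
print(occam_pac_bayes_bound(emp_risk=0.25, code_bits=150_000, n=1_200_000))
```

The design point the sketch makes explicit is that the bound depends on the trained network only through its compressed description length, which is why better off-the-shelf compression translates directly into a tighter certificate.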
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Recent works have cast some light on the mystery of why deep nets fit any
data and generalize despite being very overparametrized. This paper analyzes
training and generalization for a simple 2-layer ReLU net with random
initialization, and provides the following improvements over recent works:
(i) Using a tighter characterization of training speed than recent papers, an
explanation for why training a neural net with random labels leads to slower
training, as originally observed in [Zhang et al. ICLR'17].
(ii) Generalization bound independent of network size, using a data-dependent
complexity measure. Our measure distinguishes clearly between random labels and
true labels on MNIST and CIFAR, as shown by experiments. Moreover, while recent
papers require the sample complexity to increase (slowly) with the network size,
ours is completely independent of it.
(iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets
trained via gradient descent.
The key idea is to track dynamics of training and generalization via
properties of a related kernel.
Comment: In ICML 2019
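The "related kernel" here is the Gram matrix H^infty of an infinitely wide two-layer ReLU network, and the data-dependent complexity measure is roughly sqrt(2 y^T (H^infty)^{-1} y / n). Below is a small NumPy sketch of both quantities, with synthetic unit-norm data standing in for MNIST/CIFAR (an illustrative reimplementation, not a reproduction of the paper's experiments).

```python
import numpy as np

def h_infinity(X):
    """Gram matrix H^infty of a two-layer ReLU net with random first-layer weights.

    Assumes the rows of X are unit-norm; entry (i, j) is
    x_i.x_j * (pi - arccos(x_i.x_j)) / (2 * pi).
    """
    G = np.clip(X @ X.T, -1.0, 1.0)            # pairwise inner products
    return G * (np.pi - np.arccos(G)) / (2 * np.pi)

def complexity_measure(X, y):
    """Data-dependent complexity sqrt(2 * y^T (H^infty)^{-1} y / n).

    Intended to be larger for random labels than for true labels, echoing the
    paper's experiments (toy illustration only)."""
    n = len(y)
    H = h_infinity(X) + 1e-8 * np.eye(n)       # small jitter for invertibility
    return np.sqrt(2.0 * y @ np.linalg.solve(H, y) / n)

# Toy comparison: labels correlated with the data vs. the same labels shuffled.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y_true = np.sign(X[:, 0])                      # labels tied to the inputs
y_rand = rng.permutation(y_true)               # labels decoupled from the inputs
print(complexity_measure(X, y_true), complexity_measure(X, y_rand))
```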
Generalisation and expressiveness for over-parameterised neural networks
Over-parameterised modern neural networks owe their success to two fundamental properties: expressive power and generalisation capability. The former refers to the model's ability to fit a large variety of data sets, while the latter enables the network to extrapolate patterns from training examples and apply them to previously unseen data. This thesis addresses a few challenges related to these two key properties.
The fact that over-parameterised networks can fit any data set is not always indicative of their practical expressiveness. This is the object of the first part of this thesis, where we delve into how the input information can get lost when propagating through a deep architecture, and we propose as an easily implementable possible solution the introduction of suitable scaling factors and residual connections.
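A toy sketch of that first point, under assumptions of my own choosing rather than the thesis's exact construction: in a plain deep ReLU stack two distinct inputs tend to collapse onto nearly the same direction after many layers, while identity skip connections with a depth-scaled residual branch keep them distinguishable.

```python
import numpy as np

def propagate(x, weights, residual=False, beta=None):
    """Push x through a deep ReLU stack, optionally with scaled skip connections.

    Plain:    x <- relu(W x)
    Residual: x <- x + beta * relu(W x), with beta ~ 1/sqrt(depth) so the residual
    branch cannot overwhelm the identity path (one common choice of scaling; the
    thesis's exact factors may differ).
    """
    beta = 1.0 / np.sqrt(len(weights)) if beta is None else beta
    for W in weights:
        h = np.maximum(W @ x, 0.0)
        x = x + beta * h if residual else h
    return x

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two clearly distinct inputs; check how distinguishable they remain after 100 layers.
rng = np.random.default_rng(0)
d, depth = 64, 100
weights = [rng.normal(scale=np.sqrt(2.0 / d), size=(d, d)) for _ in range(depth)]
x1, x2 = rng.normal(size=d), rng.normal(size=d)
print("input cosine:   ", cosine(x1, x2))
print("plain stack:    ", cosine(propagate(x1, weights), propagate(x2, weights)))
print("scaled residual:", cosine(propagate(x1, weights, residual=True),
                                 propagate(x2, weights, residual=True)))
```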
The second part of this thesis focuses on generalisation. The reason why modern neural networks can generalise well to new data without overfitting, despite being over-parameterised, is an open question that is currently receiving considerable attention in the research community. We explore this subject from information-theoretic and PAC-Bayesian viewpoints, proposing novel learning algorithms and generalisation bounds.
Tighter risk certificates for neural networks
This paper presents an empirical study regarding training probabilistic
neural networks using training objectives derived from PAC-Bayes bounds. In the
context of probabilistic neural networks, the output of training is a
probability distribution over network weights. We present two training
objectives, used here for the first time in connection with training neural
networks. These two training objectives are derived from tight PAC-Bayes
bounds. We also re-implement a previously used training objective based on a
classical PAC-Bayes bound, to compare the properties of the predictors learned
using the different training objectives. We compute risk certificates that are
valid on any unseen examples for the learnt predictors. We further experiment
with different types of priors on the weights (both data-free and
data-dependent priors) and neural network architectures. Our experiments on
MNIST and CIFAR-10 show that our training methods produce competitive test set
errors and non-vacuous risk bounds with much tighter values than previous
results in the literature, showing promise not only to guide the learning
algorithm through bounding the risk but also for model selection. These
observations suggest that the methods studied here might be good candidates for
self-certified learning, in the sense of certifying the risk on any unseen data
without the need for data-splitting protocols.
Comment: Preprint under review
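A minimal sketch of the general recipe, assuming a factorized Gaussian posterior over the weights of a single layer, a data-free prior centred at the random initialization, and a McAllester-style penalty; the two objectives the paper introduces are tighter refinements of the same risk-plus-KL structure, so names, constants, and the crude loss bounding below are placeholders.

```python
import math
import torch
import torch.nn.functional as F

class BayesLinear(torch.nn.Module):
    """Linear layer with a factorized Gaussian posterior Q over its weights."""
    def __init__(self, d_in, d_out, prior_sigma=0.05):
        super().__init__()
        self.mu = torch.nn.Parameter(0.05 * torch.randn(d_out, d_in))
        self.rho = torch.nn.Parameter(torch.full((d_out, d_in), -4.0))  # sigma = softplus(rho)
        # Data-free prior: Gaussian centred at the random initialization.
        self.register_buffer("prior_mu", self.mu.detach().clone())
        self.prior_sigma = prior_sigma

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # one posterior weight sample
        return F.linear(x, w)

    def kl(self):
        """KL(Q || P) between diagonal Gaussians, summed over all weights."""
        sigma = F.softplus(self.rho)
        p = self.prior_sigma
        return (torch.log(p / sigma)
                + (sigma ** 2 + (self.mu - self.prior_mu) ** 2) / (2 * p ** 2)
                - 0.5).sum()

def pac_bayes_objective(layer, x, y, n, delta=0.025):
    """Bounded surrogate risk plus a McAllester-style PAC-Bayes penalty:
    risk + sqrt((KL + ln(2 sqrt(n) / delta)) / (2n))."""
    surrogate = F.cross_entropy(layer(x), y).clamp(max=1.0)   # crude [0, 1] bounding
    penalty = torch.sqrt((layer.kl() + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return surrogate + penalty

# Usage sketch: one stochastic gradient step on the bound-derived objective.
layer = BayesLinear(20, 5)
x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
loss = pac_bayes_objective(layer, x, y, n=60_000)
loss.backward()   # gradients reach mu and rho through the reparameterized sample
```

Because the same quantity that is minimized during training is (up to the surrogate-loss approximation) a valid risk certificate, the learned posterior can be evaluated without holding out data, which is the sense of "self-certified learning" in the abstract.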