Within-layer Diversity Reduces Generalization Gap
Neural networks are composed of multiple layers arranged in a hierarchical
structure and trained jointly with gradient-based optimization, where errors
are back-propagated from the last layer to the first. At each optimization
step, neurons at a given layer receive feedback only from neurons belonging to
higher layers of the hierarchy. In this paper, we propose to complement this
traditional 'between-layer' feedback with additional 'within-layer' feedback
to encourage diversity of the activations within the same layer.
same layer. To this end, we measure the pairwise similarity between the outputs
of the neurons and use it to model the layer's overall diversity. By penalizing
similarities and promoting diversity, we encourage each neuron to learn a
distinctive representation and, thus, to enrich the data representation learned
within the layer and to increase the total capacity of the model. We
theoretically study how the within-layer activation diversity affects the
generalization performance of a neural network and prove that increasing the
diversity of hidden activations reduces the estimation error. In addition to
the theoretical guarantees, we present an empirical study on three datasets
confirming that the proposed approach enhances the performance of
state-of-the-art neural network models and decreases the generalization gap.
Comment: 18 pages, 1 figure, 3 tables
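As a concrete illustration of the idea described in the abstract, the following is a minimal sketch of a within-layer diversity penalty, assuming PyTorch. The function name within_layer_diversity_penalty, the cosine-similarity measure, and the weight lambda_div are illustrative assumptions, not the paper's exact formulation:

```python
import torch


def within_layer_diversity_penalty(activations: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise similarity between neurons in one layer.

    activations: (batch_size, num_neurons) outputs of a single hidden layer.
    Returns the mean absolute off-diagonal cosine similarity between neuron
    response vectors; minimizing it pushes neurons toward distinct activations.
    """
    # Normalize each neuron's response over the batch (columns have unit norm).
    cols = torch.nn.functional.normalize(activations, dim=0)   # (B, N)
    sim = cols.t() @ cols                                       # (N, N) cosine similarities
    n = sim.size(0)
    off_diag = sim - torch.eye(n, device=sim.device, dtype=sim.dtype)  # drop self-similarity
    return off_diag.abs().sum() / (n * (n - 1))


# Usage sketch (lambda_div is a hypothetical regularization weight):
# hidden_out = model.hidden_layer(x)            # (B, N) activations
# loss = task_loss + lambda_div * within_layer_diversity_penalty(hidden_out)
```

In this reading, the penalty is simply added to the task loss, so each gradient step both fits the data and discourages pairs of neurons in the same layer from producing similar activation patterns.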
Generalization in Deep Learning
This paper provides theoretical insights into why and how deep learning can
generalize well, despite its large capacity, complexity, possible algorithmic
instability, nonrobustness, and sharp minima, responding to an open question in
the literature. We also discuss approaches to provide non-vacuous
generalization guarantees for deep learning. Based on theoretical observations,
we propose new open problems and discuss the limitations of our results.
Comment: To appear in Mathematics of Deep Learning, Cambridge University
Press. All previous results remain unchanged.