A Kernel Perspective for Regularizing Deep Neural Networks
We propose a new point of view for regularizing deep neural networks by using
the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm
cannot be computed, it admits upper and lower approximations leading to various
practical strategies. Specifically, this perspective (i) provides a common
umbrella for many existing regularization principles, including spectral norm
and gradient penalties, or adversarial training, (ii) leads to new effective
regularization penalties, and (iii) suggests hybrid strategies combining lower
and upper bounds to get better approximations of the RKHS norm. We
experimentally show that this approach is effective when learning on small
datasets or when training adversarially robust models.
Comment: ICM
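The paper's exact penalties are not reproduced here, but the interplay of lower and upper approximations to an uncomputable norm can be illustrated with a minimal numpy sketch. The tiny two-layer network, the finite-difference gradient, and the spectral-norm product below are all illustrative stand-ins (the gradient norm at a point lower-bounds the Lipschitz constant, while the product of layer norms upper-bounds it), not the authors' constructions:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first-layer weights (hypothetical tiny net)
w2 = rng.standard_normal(4)        # output weights

def f(x):
    """Scalar-output two-layer network; stands in for any deep model."""
    return w2 @ np.tanh(W1 @ x)

def input_grad_sq_norm(x, eps=1e-5):
    """Lower-bound-style penalty: squared norm of the input gradient of f,
    approximated here with central finite differences (autodiff in practice)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return float(g @ g)

# Upper-bound-style penalty: the product of layer spectral norms bounds the
# Lipschitz constant of f (tanh is 1-Lipschitz), hence any input gradient norm.
upper = float(np.linalg.norm(w2) * np.linalg.norm(W1, 2))

x = rng.standard_normal(3)
lower = input_grad_sq_norm(x)
```

Adding either quantity (or a combination of both, as the hybrid strategies suggest) to a training loss penalizes large input sensitivity, which is the common thread linking gradient penalties, spectral-norm penalties, and adversarial training.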
Classification and Geometry of General Perceptual Manifolds
Perceptual manifolds arise when a neural population responds to an ensemble
of sensory signals associated with different physical features (e.g.,
orientation, pose, scale, location, and intensity) of the same perceptual
object. Object recognition and discrimination require classifying the
manifolds in a manner that is insensitive to variability within a manifold. How
neuronal systems give rise to invariant object classification and recognition
is a fundamental problem in brain theory as well as in machine learning. Here
we study the ability of a readout network to classify objects from their
perceptual manifold representations. We develop a statistical mechanical theory
for the linear classification of manifolds with arbitrary geometry revealing a
remarkable relation to the mathematics of conic decomposition. Novel
geometrical measures of manifold radius and manifold dimension are introduced
which can explain the classification capacity for manifolds of various
geometries. The general theory is demonstrated on a number of representative
manifolds, including L2 ellipsoids prototypical of strictly convex manifolds,
L1 balls representing polytopes consisting of finite sample points, and
orientation manifolds which arise from neurons tuned to respond to a continuous
angle variable, such as object orientation. The effects of label sparsity on
the classification capacity of manifolds are elucidated, revealing a scaling
relation between label sparsity and manifold radius. Theoretical predictions
are corroborated by numerical simulations using recently developed algorithms
to compute maximum margin solutions for manifold dichotomies. Our theory and
its extensions provide a powerful and rich framework for applying statistical
mechanics of linear classification to data arising from neuronal responses to
object stimuli, as well as to artificial deep networks trained for object
recognition tasks.
Comment: 24 pages, 12 figures, Supplementary Material
Generalization Error in Deep Learning
Deep learning models have lately shown great performance in various fields
such as computer vision, speech recognition, speech translation, and natural
language processing. However, despite their state-of-the-art performance, the
source of their generalization ability remains largely unclear.
Thus, an important question is what makes deep neural networks able to
generalize well from the training set to new data. In this article, we provide
an overview of the existing theory and bounds for the characterization of the
generalization error of deep neural networks, combining both classical and more
recent theoretical and empirical results.
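The central quantity such bounds characterize, the gap between training error and true risk, is easy to exhibit empirically. The data distribution, sample sizes, and least-squares classifier below are all hypothetical choices for illustration; the held-out error on a large test set serves as a proxy for the true risk:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 30
w_true = rng.standard_normal(d)    # hypothetical ground-truth direction

def make_data(n):
    """Noisy linearly separable labels; stands in for any data distribution."""
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_true + 0.5 * rng.standard_normal(n))
    return X, y

def error(w, X, y):
    return float(np.mean(np.sign(X @ w) != y))

X_train, y_train = make_data(35)    # barely more samples than dimensions
X_test, y_test = make_data(5000)    # large held-out set approximates true risk

# Least-squares fit of a linear classifier; with n close to d it nearly
# interpolates the training labels, so the training error understates the risk.
w_hat = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

train_err = error(w_hat, X_train, y_train)
test_err = error(w_hat, X_test, y_test)
gap = test_err - train_err          # empirical generalization gap
```

Bounding this gap in terms of model and data properties, rather than measuring it after the fact, is precisely what the theoretical results surveyed in the article aim to do.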