On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
Conventional wisdom in deep learning states that increasing depth improves
expressiveness but complicates optimization. This paper suggests that,
sometimes, increasing depth can speed up optimization. The effect of depth on
optimization is decoupled from expressiveness by focusing on settings where
additional layers amount to overparameterization - linear neural networks, a
well-studied model. Theoretical analysis, as well as experiments, show that
here depth acts as a preconditioner which may accelerate convergence. Even on
simple convex problems such as linear regression with $\ell_p$ loss, $p > 2$,
gradient descent can benefit from transitioning to a non-convex
overparameterized objective, more than it would from some common acceleration
schemes. We also prove that it is mathematically impossible to obtain the
acceleration effect of overparametrization via gradients of any regularizer.
Comment: Published at the International Conference on Machine Learning (ICML) 201
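The acceleration effect described above can be illustrated with a toy experiment (my own sketch, not the paper's construction): gradient descent on linear regression with an $\ell_4$ loss, once on the convex objective directly and once on an overparameterized product form $w = a \cdot v$, i.e. a depth-2 linear network. All dimensions, step sizes, and the scalar-times-vector factorization are illustrative choices.

```python
import numpy as np

# Synthetic linear-regression data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def loss(w):
    r = X @ w - y
    return np.mean(r ** 4)       # l4 loss (convex in w)

def grad(w):
    r = X @ w - y
    return 4 * X.T @ r ** 3 / len(y)

lr, steps = 1e-4, 5000

# (a) plain gradient descent on the convex objective
w = np.zeros(5)
for _ in range(steps):
    w -= lr * grad(w)

# (b) overparameterized form w_e2e = a * v: the gradients of a and v are
# coupled, which induces the preconditioning-like dynamics the paper analyzes
a, v = 1.0, np.zeros(5)
for _ in range(steps):
    g = grad(a * v)
    a, v = a - lr * (v @ g), v - lr * a * g

print(loss(w), loss(a * v))
```

Note that both parameterizations optimize the same end-to-end predictor; only the gradient dynamics differ, which is exactly the point of the paper's "implicit acceleration" claim.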
Emergence of Invariance and Disentanglement in Deep Representations
Using established principles from Statistics and Information Theory, we show
that invariance to nuisance factors in a deep neural network is equivalent to
information minimality of the learned representation, and that stacking layers
and injecting noise during training naturally bias the network towards learning
invariant representations. We then decompose the cross-entropy loss used during
training and highlight the presence of an inherent overfitting term. We propose
regularizing the loss by bounding such a term in two equivalent ways: One with
a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other
using the information in the weights as a measure of complexity of a learned
model, yielding a novel Information Bottleneck for the weights. Finally, we
show that invariance and independence of the components of the representation
learned by the network are bounded above and below by the information in the
weights, and therefore are implicitly optimized during training. The theory
enables us to quantify and predict sharp phase transitions between underfitting
and overfitting of random labels when using our regularized loss, which we
verify in experiments, and sheds light on the relation between the geometry of
the loss function, invariance properties of the learned representation, and
generalization error.
Comment: Deep learning, neural network, representation, flat minima,
information bottleneck, overfitting, generalization, sufficiency, minimality,
sensitivity, information complexity, stochastic gradient descent,
regularization, total correlation, PAC-Bayes
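The "information in the weights" regularizer described above has a simple closed form when the weight posterior and prior are both taken to be diagonal Gaussians, a standard assumption in this literature. The sketch below (function names and the `beta` trade-off parameter are mine, not the paper's notation) shows the KL penalty that would be added to the cross-entropy loss.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over weights.

    Closed form: 0.5 * sum( exp(log_var) + mu^2 - 1 - log_var ).
    """
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def regularized_loss(cross_entropy, mu, log_var, beta=0.1):
    # beta trades data fit against information stored in the weights
    return cross_entropy + beta * gaussian_kl(mu, log_var)

mu = np.zeros(10)
log_var = np.zeros(10)           # sigma = 1 everywhere: posterior == prior
print(gaussian_kl(mu, log_var))  # 0.0: no information in the weights
```

Sweeping `beta` is what produces the sharp underfitting/overfitting phase transitions the abstract refers to: large `beta` forces the weights to carry little information about the training set.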
Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks
Convolutional neural networks (CNNs) have been shown to achieve optimal
approximation and estimation error rates (in minimax sense) in several function
classes. However, previously analyzed optimal CNNs are unrealistically wide and
difficult to obtain via optimization due to sparse constraints in important
function classes, including the H\"older class. We show a ResNet-type CNN can
attain the minimax optimal error rates in these classes in more plausible
situations -- it can be dense, and its width, channel size, and filter size are
constant with respect to sample size. The key idea is that we can replicate the
learning ability of fully-connected neural networks (FNNs) with tailored CNNs, as
long as the FNNs have \textit{block-sparse} structures. Our theory is general
in the sense that we can automatically translate any approximation rate achieved
by block-sparse FNNs into that by CNNs. As an application, we derive
approximation and estimation error rates of the aforementioned type of CNNs for
the Barron and H\"older classes with the same strategy.
Comment: 8 pages + References 2 pages + Supplemental material 18 pages
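The claim that CNNs can replicate fully-connected layers can be seen in its most degenerate form with a filter that spans the whole input: each output unit of the FC layer becomes one convolution "channel". This is only an illustration of the FC-by-conv equivalence; the paper's actual construction keeps filter size constant and relies on block-sparse FNN structure.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 8, 4                       # input dimension, number of output units
x = rng.normal(size=d)
W = rng.normal(size=(m, d))       # fully-connected weight matrix (no bias)

fc_out = W @ x                    # FC layer output

# np.convolve flips its kernel, so pass each weight row reversed; 'valid'
# mode with a full-width filter yields exactly one output per channel
conv_out = np.array([np.convolve(x, W[j, ::-1], mode="valid")[0]
                     for j in range(m)])

assert np.allclose(fc_out, conv_out)
```

The interesting content of the paper is that this kind of replication can be done with realistic (constant) filter sizes and widths when the FNN is block-sparse, rather than with the full-width filters used here.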