Optimal approximation of piecewise smooth functions using deep ReLU neural networks
We study the necessary and sufficient complexity of ReLU neural networks, in
terms of depth and number of weights, which is required for approximating
classifier functions in $L^2$. As a model class, we consider the set of
possibly discontinuous piecewise $C^\beta$ functions
$f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$
are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity
$\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural
networks with ReLU activation function that approximate functions from this
class up to an $L^2$ error of $\varepsilon$. The constructed networks have a
fixed number of layers, depending only on $d$ and $\beta$, and they have
$O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be
optimal. In addition to the optimality in terms of the number of weights, we
show that in order to achieve the optimal approximation rate, one needs ReLU
networks of a certain depth. Precisely, for piecewise $C^\beta$ functions,
this minimal depth is given, up to a multiplicative constant, by $\beta/d$.
Up to a log factor, our constructed networks match this bound. This partly
explains the benefits of depth for ReLU networks by showing that deep networks
are necessary to achieve efficient approximation of (piecewise) smooth
functions. Finally, we analyze approximation in high-dimensional spaces where
the function $f$ to be approximated can be factorized into a smooth
dimension-reducing feature map $\tau$ and a classifier function $g$, defined
on a low-dimensional feature space, as $f = g \circ \tau$. We show that in
this case the approximation rate depends only on the dimension of the feature
space and not on the input dimension.

Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$.
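The depth phenomenon described above is easy to probe numerically. The following is a minimal sketch, not taken from the paper: it fits a hypothetical discontinuous piecewise smooth target on $[-1/2, 1/2]$ with a shallow and a deeper ReLU network of comparable weight count and reports the empirical $L^2$ error. The target function, widths, and training setup are illustrative assumptions only.

```python
# Minimal sketch (assumed setup, not the paper's construction): compare how
# shallow vs. deeper ReLU networks of similar size approximate a discontinuous
# piecewise smooth function in the empirical L^2 sense.
import torch
import torch.nn as nn

torch.manual_seed(0)

def target(x):
    # Piecewise smooth target: two smooth pieces separated by a jump at x = 0.
    return torch.where(x < 0.0, torch.sin(3.0 * x), 1.0 + 0.5 * x ** 2)

def make_mlp(widths):
    layers, d_in = [], 1
    for w in widths:
        layers += [nn.Linear(d_in, w), nn.ReLU()]
        d_in = w
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def l2_error(net, n=4096):
    x = torch.linspace(-0.5, 0.5, n).unsqueeze(1)
    with torch.no_grad():
        return torch.sqrt(torch.mean((net(x) - target(x)) ** 2)).item()

def train(net, steps=3000):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.rand(256, 1) - 0.5          # uniform samples from [-1/2, 1/2]
        loss = torch.mean((net(x) - target(x)) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()
    return net

for name, widths in [("shallow (1 hidden layer)", [120]),
                     ("deeper (4 hidden layers)", [20, 20, 20, 20])]:
    net = train(make_mlp(widths))
    n_weights = sum(p.numel() for p in net.parameters())
    print(f"{name}: {n_weights} parameters, empirical L2 error ~ {l2_error(net):.4f}")
```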
On Size-Independent Sample Complexity of ReLU Networks
We study the sample complexity of learning ReLU neural networks from the
point of view of generalization. Given norm constraints on the weight matrices,
a common approach is to estimate the Rademacher complexity of the associated
function class. Previously, Golowich, Rakhlin, and Shamir (2020) obtained a bound
independent of the network size (scaling with a product of Frobenius norms),
except for a factor of the square root of the depth. We give a refinement which
often has no explicit depth-dependence at all.

Comment: 4 pages.
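As a rough illustration of the quantities involved, the following sketch (my simplification, not the paper's theorem) computes the product of Frobenius norms of a hypothetical network's weight matrices, i.e. the size-independent part of such Rademacher-complexity bounds, together with the extra square-root-depth factor attributed above to Golowich-Rakhlin-Shamir. Constants, sample-size scaling, and logarithmic terms are deliberately omitted.

```python
# Minimal sketch (assumed simplification): size-independent norm product used in
# Frobenius-norm Rademacher-complexity bounds, with and without sqrt(depth).
import math
import numpy as np

rng = np.random.default_rng(0)

def frobenius_product(weight_matrices):
    """Product of Frobenius norms over all layers (independent of widths/depth)."""
    return float(np.prod([np.linalg.norm(W, ord="fro") for W in weight_matrices]))

# Hypothetical 4-layer ReLU network: input dim 32, hidden width 64, output dim 10.
shapes = [(64, 32), (64, 64), (64, 64), (10, 64)]
weights = [rng.standard_normal(s) / math.sqrt(s[1]) for s in shapes]

depth = len(weights)
prod_fro = frobenius_product(weights)
print(f"product of Frobenius norms : {prod_fro:.2f}")
print(f"with sqrt(depth) factor    : {prod_fro * math.sqrt(depth):.2f}")
```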
Spectrally-normalized margin bounds for neural networks
This paper presents a margin-based multiclass generalization bound for neural
networks that scales with their margin-normalized "spectral complexity": their
Lipschitz constant, meaning the product of the spectral norms of the weight
matrices, times a certain correction factor. This bound is empirically
investigated for a standard AlexNet network trained with SGD on the MNIST and
CIFAR-10 datasets, with both original and random labels; the bound, the
Lipschitz constants, and the excess risks are all in direct correlation,
suggesting both that SGD selects predictors whose complexity scales with the
difficulty of the learning task and that the presented bound is sensitive to
this complexity.

Comment: Comparison to arXiv v1: 1-norm in main bound refined to
(2,1)-group-norm. Comparison to NIPS camera ready: typo fixed.
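The following sketch (my own simplification, not the paper's exact definition) computes one plausible reading of the margin-normalized spectral complexity for a stack of dense weight matrices: the product of spectral norms times a (2,1)-group-norm correction factor. Reference matrices are set to zero and activation Lipschitz constants to one, and the (2,1) norm is taken as the sum of column Euclidean norms; the paper's precise conventions may differ.

```python
# Minimal sketch (assumed conventions): product of spectral norms of the weight
# matrices times a (2,1)-group-norm correction factor, as a proxy for the
# "spectral complexity" discussed above.
import numpy as np

rng = np.random.default_rng(0)

def spectral_norm(W):
    return float(np.linalg.norm(W, ord=2))           # largest singular value

def group_norm_21(W):
    return float(np.sum(np.linalg.norm(W, axis=0)))  # sum of column Euclidean norms

def spectral_complexity(weight_matrices):
    sigmas = [spectral_norm(W) for W in weight_matrices]
    lipschitz = float(np.prod(sigmas))                # product of spectral norms
    correction = sum((group_norm_21(W) / s) ** (2.0 / 3.0)
                     for W, s in zip(weight_matrices, sigmas)) ** 1.5
    return lipschitz * correction

# Hypothetical dense weight stack for illustration only.
shapes = [(256, 784), (256, 256), (10, 256)]
weights = [rng.standard_normal(s) * 0.05 for s in shapes]
print(f"spectral complexity ~ {spectral_complexity(weights):.2f}")
```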