60,799 research outputs found

    Optimal approximation of piecewise smooth functions using deep ReLU neural networks

    We study the necessary and sufficient complexity of ReLU neural networks---in terms of depth and number of weights---which is required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given---up to a multiplicative constant---by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension reducing feature map $\tau$ and a classifier function $g$---defined on a low-dimensional feature space---as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.
    Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$
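
    As a rough, non-authoritative illustration of the quantities quoted above, the sketch below evaluates the asymptotic weight count $O(\varepsilon^{-2(d-1)/\beta})$ and the minimal-depth scaling $\beta/d$ for a few hypothetical parameter choices; the hidden constants are not specified by the abstract, so the numbers convey orders of magnitude only.

```python
# Illustrative sketch only: evaluates the asymptotic orders from the abstract,
# O(eps^{-2(d-1)/beta}) nonzero weights and minimal depth proportional to beta/d,
# for hypothetical (d, beta, eps) choices. The multiplicative constants are unknown.

def weight_count_order(eps: float, d: int, beta: float) -> float:
    """Order of the number of nonzero weights needed for L^2 error eps."""
    return eps ** (-2.0 * (d - 1) / beta)

def minimal_depth_order(d: int, beta: float) -> float:
    """Minimal depth for piecewise C^beta functions, up to a multiplicative constant."""
    return beta / d

if __name__ == "__main__":
    for d, beta, eps in [(2, 1.0, 1e-2), (2, 2.0, 1e-2), (8, 2.0, 1e-2)]:
        print(f"d={d}, beta={beta}, eps={eps}: "
              f"weights ~ {weight_count_order(eps, d, beta):.3e}, "
              f"depth ~ {minimal_depth_order(d, beta):.2f}")
```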

    On Size-Independent Sample Complexity of ReLU Networks

    We study the sample complexity of learning ReLU neural networks from the point of view of generalization. Given norm constraints on the weight matrices, a common approach is to estimate the Rademacher complexity of the associated function class. Previously, Golowich, Rakhlin, and Shamir (2020) obtained a bound independent of the network size (scaling with a product of Frobenius norms), except for a factor of the square root of the depth. We give a refinement which often has no explicit depth dependence at all.
    Comment: 4 pages
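
    For concreteness, here is a minimal sketch (assuming explicit weight matrices and numpy) of the norm quantities this line of work revolves around: the product of the layers' Frobenius norms, with and without the extra square-root-depth factor of the earlier bound. It illustrates the scaling only; the actual bounds contain further constants and data-dependent terms.

```python
# Sketch: norm quantities appearing in Frobenius-norm-based Rademacher
# complexity bounds. Only the product of Frobenius norms and the extra
# sqrt(depth) factor are computed; constants and data terms are omitted.
import numpy as np

def frobenius_norm_product(weights: list[np.ndarray]) -> float:
    """Product of the Frobenius norms of all weight matrices."""
    return float(np.prod([np.linalg.norm(W, ord="fro") for W in weights]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((64, 32)),
               rng.standard_normal((32, 32)),
               rng.standard_normal((1, 32))]
    depth = len(weights)
    prod = frobenius_norm_product(weights)
    print(f"product of Frobenius norms: {prod:.3e}")
    print(f"with the extra sqrt(depth) factor: {np.sqrt(depth) * prod:.3e}")
```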

    Spectrally-normalized margin bounds for neural networks

    This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized "spectral complexity": their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the MNIST and CIFAR-10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and that the presented bound is sensitive to this complexity.
    Comment: Comparison to arXiv v1: 1-norm in main bound refined to (2,1)-group-norm. Comparison to NIPS camera ready: typo fixed
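
    The sketch below (a hedged illustration, not the paper's exact bound) computes the ingredients named in the abstract for a hypothetical stack of weight matrices: the product of spectral norms, i.e. a Lipschitz bound for the composed linear maps under 1-Lipschitz activations, together with each layer's (2,1)-group-norm, which enters the correction factor. The precise way these ingredients are combined is left to the paper.

```python
# Sketch: ingredients of a spectrally-normalized complexity measure.
# The abstract describes the measure as a product of spectral norms times a
# correction factor; only the raw ingredients are computed here.
import numpy as np

def spectral_norm(W: np.ndarray) -> float:
    """Largest singular value of W."""
    return float(np.linalg.norm(W, ord=2))

def group_21_norm(W: np.ndarray) -> float:
    """(2,1)-group-norm: sum of the Euclidean norms of the columns of W."""
    return float(np.linalg.norm(W, axis=0).sum())

def lipschitz_constant(weights: list[np.ndarray]) -> float:
    """Product of spectral norms: a Lipschitz bound for the composed linear
    layers when the activations (e.g. ReLU) are 1-Lipschitz."""
    return float(np.prod([spectral_norm(W) for W in weights]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = [rng.standard_normal((128, 784)) / 28.0,
               rng.standard_normal((10, 128)) / np.sqrt(128)]
    print(f"Lipschitz constant (product of spectral norms): "
          f"{lipschitz_constant(weights):.3e}")
    for i, W in enumerate(weights):
        print(f"layer {i}: spectral norm {spectral_norm(W):.3e}, "
              f"(2,1)-norm {group_21_norm(W):.3e}")
```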