
    Approximation Error Bounds via Rademacher's Complexity

    Approximation properties of some connectionistic models, commonly used to construct approximation schemes for optimization problems with multivariable functions as admissible solutions, are investigated. Such models are made up of linear combinations of computational units with adjustable parameters. The relationship between model complexity (the number of computational units) and approximation error is investigated using tools from Statistical Learning Theory, such as Talagrand's inequality, the fat-shattering dimension, and Rademacher's complexity. For some families of multivariable functions, estimates of the approximation accuracy of models with certain computational units are derived in terms of the Rademacher complexities of the families. The estimates improve upon previously available ones, which were expressed in terms of the VC dimension and derived by exploiting union-bound techniques. The results are applied to approximation schemes with certain radial basis functions as computational units, for which it is shown that the estimates do not exhibit the curse of dimensionality with respect to the number of variables.
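    As a rough illustration, not taken from the paper: the empirical Rademacher complexity on which such bounds rest can be estimated by Monte Carlo over random sign vectors. The finite function class, the sample, and the name empirical_rademacher below are illustrative assumptions, a minimal sketch rather than the paper's construction.

        import numpy as np

        def empirical_rademacher(outputs, n_draws=1000, seed=0):
            # Monte Carlo estimate of the empirical Rademacher complexity
            # E_sigma[ sup_f (1/n) * sum_i sigma_i * f(x_i) ] for a finite
            # function class, where outputs[j, i] = f_j(x_i) on a fixed sample.
            rng = np.random.default_rng(seed)
            n_funcs, n = outputs.shape
            total = 0.0
            for _ in range(n_draws):
                sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
                total += np.max(outputs @ sigma) / n     # sup over the class
            return total / n_draws

        # Illustrative class: ramp units x -> max(0, x - t) on a 1-D sample.
        x = np.linspace(-1.0, 1.0, 200)
        thresholds = np.linspace(-1.0, 1.0, 50)
        F = np.maximum(0.0, x[None, :] - thresholds[:, None])
        print(empirical_rademacher(F))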

    Estimates of the Approximation Error Using Rademacher Complexity: Learning Vector-Valued Functions

    For certain families of multivariable vector-valued functions to be approximated, the accuracy of approximation schemes made up of linear combinations of computational units containing adjustable parameters is investigated. Upper bounds on the approximation error are derived that depend on the Rademacher complexities of the families. The estimates exploit possible relationships among the components of the multivariable vector-valued functions: all the components are approximated simultaneously, in such a way as to use, for a desired approximation accuracy, fewer computational units than componentwise approximation requires. An application to -stage optimization problems is discussed.
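    The idea of sharing computational units across output components can be sketched in a few lines of Python. The Gaussian radial units, the two-component target, and the sizes below are illustrative assumptions, not the scheme analyzed in the paper.

        import numpy as np

        def rbf_features(x, centers, width):
            # Gaussian radial-basis units evaluated on a 1-D sample.
            return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

        x = np.linspace(-3.0, 3.0, 400)
        # Two related components: the second is a small perturbation of the first.
        target = np.stack([np.sin(x), np.sin(x) + 0.1 * np.cos(x)], axis=1)

        n_units = 10
        centers = np.linspace(-3.0, 3.0, n_units)  # parameters shared by both components
        Phi = rbf_features(x, centers, width=0.8)

        # One least-squares solve fits the output weights of *all* components,
        # so the same n_units units approximate the vector-valued target
        # simultaneously rather than componentwise.
        W, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        print(np.max(np.abs(Phi @ W - target)))  # sup-norm error on the sample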

    Optimal approximation of piecewise smooth functions using deep ReLU neural networks

    We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$, defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension. Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$.
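    Plugging numbers into the stated rates, with all constants omitted, shows how the weight count $O(\varepsilon^{-2(d-1)/\beta})$ trades off dimension $d$ against smoothness $\beta$, and how the minimal depth scales like $\beta/d$. The sample values below are arbitrary choices for illustration.

        # Numeric sketch of the abstract's rates, constants omitted:
        # nonzero weights ~ eps**(-2*(d-1)/beta), minimal depth ~ beta/d.
        for d, beta in [(2, 1.0), (8, 2.0), (8, 8.0)]:
            for eps in (1e-1, 1e-2):
                weights = eps ** (-2 * (d - 1) / beta)
                depth = beta / d
                print(f"d={d}, beta={beta}, eps={eps:g}: "
                      f"weights ~ {weights:.3g}, depth ~ {depth:.3g}")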