
    A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients

    In recent years deep artificial neural networks (DNNs) have been successfully employed in numerical simulations for a multitude of computational problems including, for example, object and face recognition, natural language processing, fraud detection, computational advertisement, and numerical approximations of partial differential equations (PDEs). These numerical simulations indicate that DNNs seem to possess the fundamental flexibility to overcome the curse of dimensionality in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $\varepsilon > 0$ and the dimension $d \in \mathbb{N}$ of the function which the DNN aims to approximate in such computational problems. There is also a large number of rigorous mathematical approximation results for artificial neural networks in the scientific literature, but there are only a few special situations in which results in the literature can rigorously justify the success of DNNs in high-dimensional function approximation. The key contribution of this paper is to reveal that DNNs do overcome the curse of dimensionality in the numerical approximation of Kolmogorov PDEs with constant diffusion and nonlinear drift coefficients. We prove that the number of parameters used to describe the employed DNN grows at most polynomially in both the PDE dimension $d \in \mathbb{N}$ and the reciprocal of the prescribed approximation accuracy $\varepsilon > 0$. A crucial ingredient in our proof is the fact that the artificial neural network used to approximate the solution of the PDE is indeed a deep artificial neural network with a large number of hidden layers. Comment: 48 pages.
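    The abstract's central claim can be restated as a single bound; the parameter-count function $\mathcal{P}$, the network $\Phi_{d,\varepsilon}$, and the constants $C$, $p$, $q$ below are illustrative placeholders rather than the paper's own notation, so this is a sketch of the shape of the result, not its precise statement.

    % Illustrative restatement (placeholder notation): for every dimension d and accuracy
    % \varepsilon there is a DNN \Phi_{d,\varepsilon} approximating the Kolmogorov PDE
    % solution to accuracy \varepsilon whose parameter count satisfies
    \[
      \mathcal{P}(\Phi_{d,\varepsilon}) \;\le\; C\, d^{p}\, \varepsilon^{-q}
      \qquad \text{for all } d \in \mathbb{N},\ \varepsilon \in (0,1],
    \]
    % with constants C, p, q > 0 independent of d and \varepsilon. "Overcoming the curse of
    % dimensionality" means exactly that no factor growing exponentially in d (such as
    % \varepsilon^{-d}) appears on the right-hand side.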

    Lower bounds on the complexity of approximating continuous functions by sigmoidal neural networks

    We calculate lower bounds on the size of sigmoidal neural networks that approximate continuous functions. In particular, we show that for the approximation of polynomials the network size has to grow as $\Omega((\log k)^{1/4})$, where $k$ is the degree of the polynomials. This bound is valid for any input dimension, i.e. independently of the number of variables. The result is obtained by introducing a new method employing upper bounds on the Vapnik-Chervonenkis dimension for proving lower bounds on the size of networks that approximate continuous functions.

    Lower Bounds on the Complexity of Approximating Continuous Functions by Sigmoidal Neural Networks

    We calculate lower bounds on the size of sigmoidal neural networks that approximate continuous functions. In particular, we show that for the approximation of polynomials the network size has to grow as $\Omega((\log k)^{1/4})$, where $k$ is the degree of the polynomials. This bound is valid for any input dimension, i.e. independently of the number of variables. The result is obtained by introducing a new method employing upper bounds on the Vapnik-Chervonenkis dimension for proving lower bounds on the size of networks that approximate continuous functions.

    1 Introduction
    Sigmoidal neural networks are known to be universal approximators. This is one of the theoretical results most frequently cited to justify the use of sigmoidal neural networks in applications. By this statement one refers to the fact that sigmoidal neural networks have been shown to be able to approximate any continuous function arbitrarily well. Numerous results in the literature have established variants of..
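    The lower bound claimed in both records above can be written out as follows; the size function $s$, the network $\mathcal{N}_k$, and the constant $c$ are illustrative placeholders rather than the paper's own notation.

    % Illustrative restatement (placeholder notation): any sigmoidal network \mathcal{N}_k
    % that approximates a degree-k polynomial to the required accuracy must have size
    \[
      s(\mathcal{N}_k) \;\ge\; c\,(\log k)^{1/4}
      \qquad \text{for some constant } c > 0 \text{ independent of the input dimension,}
    \]
    % i.e. the network size grows as \Omega((\log k)^{1/4}) in the polynomial degree k,
    % and the bound does not weaken as the number of input variables increases.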