    Approximation in shift-invariant spaces with deep ReLU neural networks

    We study the expressive power of deep ReLU neural networks for approximating functions in dilated shift-invariant spaces, which are widely used in signal processing, image processing, and communications. Approximation error bounds are estimated with respect to the width and depth of neural networks. The network construction is based on the bit-extraction and data-fitting capacity of deep neural networks. As applications of our main results, the approximation rates of classical function spaces such as Sobolev spaces and Besov spaces are obtained. We also give lower bounds of the $L^p$ ($1\le p \le \infty$) approximation error for Sobolev spaces, which show that our construction of neural networks is asymptotically optimal up to a logarithmic factor.
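
    As a rough numerical companion to this setting (not the paper's bit-extraction construction), the sketch below fits a deep ReLU network of chosen width and depth to a target lying in a dilated shift-invariant space $V_j=\mathrm{span}\{\phi(2^j\cdot-k)\}$ with a hat-function generator, and reports an empirical sup-norm error. The generator, coefficients, network size, and the PyTorch training loop are all illustrative assumptions.

```python
import torch

def hat(t):
    # piecewise-linear generator phi, supported on [0, 2]
    return torch.clamp(1.0 - torch.abs(t - 1.0), min=0.0)

def target(x, j=3):
    # f(x) = sum_k c_k * phi(2^j x - k), an element of the dilated space V_j
    ks = torch.arange(0, 2 ** j + 1, dtype=x.dtype)
    cs = torch.sin(ks)                          # arbitrary fixed coefficient sequence
    return sum(c * hat(2 ** j * x - k) for c, k in zip(cs, ks))

width, depth = 32, 4                            # size parameters that govern the rate
layers = [torch.nn.Linear(1, width), torch.nn.ReLU()]
for _ in range(depth - 1):
    layers += [torch.nn.Linear(width, width), torch.nn.ReLU()]
layers += [torch.nn.Linear(width, 1)]
net = torch.nn.Sequential(*layers)

x = torch.rand(4096, 1)
y = target(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((net(x) - y) ** 2)
    loss.backward()
    opt.step()

xg = torch.linspace(0.0, 1.0, 2001).unsqueeze(1)
err = torch.max(torch.abs(net(xg) - target(xg))).item()   # empirical sup-norm error
print(f"width={width}, depth={depth}, L_inf error ~ {err:.3e}")
```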

    Generic bounds on the approximation error for physics-informed (and) operator learning

    We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and operator learning architectures such as DeepONets and FNOs, as well as for physics-informed operator learning. These bounds guarantee that PINNs and (physics-informed) DeepONets or FNOs will efficiently approximate the underlying solution or solution operator of generic partial differential equations (PDEs). Our framework utilizes existing neural network approximation results to obtain bounds on more involved learning architectures for PDEs. We illustrate the general framework by deriving the first rigorous bounds on the approximation error of physics-informed operator learning and by showing that PINNs (and physics-informed DeepONets and FNOs) mitigate the curse of dimensionality in approximating nonlinear parabolic PDEs.
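
    For orientation, the following is a minimal PINN sketch, not the framework of the paper: a small network trained to minimize a PDE residual plus an initial-condition mismatch for the 1D heat equation $u_t=u_{xx}$, standing in for a generic PDE. The equation, sampling, and hyperparameters are assumptions, and boundary terms are omitted for brevity; bounds of the kind described above concern how small such a residual-based loss can be made by networks of a given size.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),    # input (x, t), output u(x, t)
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(xt):
    # residual of u_t = u_xx at collocation points xt = (x, t)
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t - u_xx

x0 = torch.rand(256, 1)                          # initial condition u(x, 0) = sin(pi x)
ic_pts = torch.cat([x0, torch.zeros_like(x0)], dim=1)
ic_vals = torch.sin(torch.pi * x0)
interior = torch.rand(1024, 2)                   # collocation points in (0,1) x (0,1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = (pde_residual(interior) ** 2).mean() + ((net(ic_pts) - ic_vals) ** 2).mean()
    loss.backward()
    opt.step()
print("final PINN loss:", loss.item())
```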

    Error estimates of deep learning methods for the nonstationary Magneto-hydrodynamics equations

    In this study, we prove rigorous error bounds and carry out a stability analysis for deep learning methods applied to the nonstationary Magneto-hydrodynamics equations. We establish the approximation capability of the neural network via the convergence of a loss function and the convergence of a Deep Neural Network (DNN) approximation to the exact solution. Moreover, we derive explicit error estimates for the solution computed by optimizing the loss function in the DNN approximation of the solution.
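
    The sketch below illustrates, under assumptions that are not taken from the paper, how a composite PINN-style loss can be assembled for the 2D incompressible MHD system: a single network maps $(x, y, t)$ to velocity, magnetic field, and pressure, and the loss collects momentum, induction, and divergence-free residuals. Viscosity and resistivity values, the network size, and the omission of initial and boundary terms are all illustrative.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),     # input (x, y, t)
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 5),                      # output (u1, u2, B1, B2, p)
)
nu, eta = 1e-2, 1e-2                             # assumed viscosity / resistivity

def d(f, z):
    # gradient of the scalar field f with respect to z = (x, y, t)
    return torch.autograd.grad(f.sum(), z, create_graph=True)[0]

def mhd_residuals(z):
    z = z.requires_grad_(True)
    out = net(z)
    u, B, p = out[:, 0:2], out[:, 2:4], out[:, 4:5]
    du = [d(u[:, i:i + 1], z) for i in range(2)]   # each row: (d/dx, d/dy, d/dt)
    dB = [d(B[:, i:i + 1], z) for i in range(2)]
    dp = d(p, z)
    res = []
    for i in range(2):
        lap_u = d(du[i][:, 0:1], z)[:, 0:1] + d(du[i][:, 1:2], z)[:, 1:2]
        lap_B = d(dB[i][:, 0:1], z)[:, 0:1] + d(dB[i][:, 1:2], z)[:, 1:2]
        u_grad_u = u[:, 0:1] * du[i][:, 0:1] + u[:, 1:2] * du[i][:, 1:2]
        B_grad_B = B[:, 0:1] * dB[i][:, 0:1] + B[:, 1:2] * dB[i][:, 1:2]
        u_grad_B = u[:, 0:1] * dB[i][:, 0:1] + u[:, 1:2] * dB[i][:, 1:2]
        B_grad_u = B[:, 0:1] * du[i][:, 0:1] + B[:, 1:2] * du[i][:, 1:2]
        res.append(du[i][:, 2:3] + u_grad_u - nu * lap_u + dp[:, i:i + 1] - B_grad_B)  # momentum
        res.append(dB[i][:, 2:3] + u_grad_B - B_grad_u - eta * lap_B)                  # induction
    res.append(du[0][:, 0:1] + du[1][:, 1:2])    # div u = 0
    res.append(dB[0][:, 0:1] + dB[1][:, 1:2])    # div B = 0
    return torch.cat(res, dim=1)

pts = torch.rand(1024, 3)                        # collocation points in (0,1)^2 x (0,1)
loss = (mhd_residuals(pts) ** 2).mean()          # interior part of the composite loss
print("interior residual loss:", loss.item())    # initial/boundary terms omitted here
```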

    Approximation bounds for convolutional neural networks in operator learning

    Recently, deep Convolutional Neural Networks (CNNs) have proven to be successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite-dimensional input $\boldsymbol{\mu}\in\mathbb{R}^{p}$ onto a functional output $u_{\boldsymbol{\mu}}:[0,1]^{d}\to\mathbb{R}$, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds with numerical experiments that illustrate their application.
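
    To make the problem setting concrete (the architecture below is an illustrative assumption, not the construction analyzed in the paper), the sketch trains a small decoder with transposed-convolution layers to map a parameter vector $\boldsymbol{\mu}\in\mathbb{R}^{p}$ to a discretization of a toy field $u_{\boldsymbol{\mu}}$ on an $N\times N$ grid over $[0,1]^2$; the toy operator, layer sizes, and training setup are assumptions.

```python
import torch

p, N = 4, 32                                     # parameter dimension and output grid size

decoder = torch.nn.Sequential(
    torch.nn.Linear(p, 64 * (N // 4) * (N // 4)), torch.nn.ReLU(),
    torch.nn.Unflatten(1, (64, N // 4, N // 4)),
    torch.nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
)

def true_operator(mu):
    # toy target: a parametrized surface sampled on the N x N grid, shape (batch, 1, N, N)
    xs = torch.linspace(0.0, 1.0, N)
    X, Y = torch.meshgrid(xs, xs, indexing="ij")
    return (mu[:, 0, None, None] * torch.sin(torch.pi * X)
            + mu[:, 1, None, None] * torch.cos(torch.pi * Y)
            + mu[:, 2, None, None] * X * Y
            + mu[:, 3, None, None]).unsqueeze(1)

mu = torch.rand(512, p)
u = true_operator(mu)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = ((decoder(mu) - u) ** 2).mean()       # discrete L2 error on the grid
    loss.backward()
    opt.step()
print("mean-squared grid error:", loss.item())
```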