Approximation in shift-invariant spaces with deep ReLU neural networks
We study the expressive power of deep ReLU neural networks for approximating
functions in dilated shift-invariant spaces, which are widely used in signal
processing, image processing, communications and so on. Approximation error
bounds are estimated with respect to the width and depth of neural networks.
The network construction is based on the bit extraction and data-fitting
capacity of deep neural networks. As applications of our main results, the
approximation rates of classical function spaces such as Sobolev spaces and
Besov spaces are obtained. We also give lower bounds of the approximation error for Sobolev spaces, which show that our
construction of neural network is asymptotically optimal up to a logarithmic
factor
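To make the setting concrete, here is a minimal sketch (in PyTorch, with hypothetical width and depth, not the paper's bit-extraction construction) of fitting a deep ReLU network to a target drawn from a dilated shift-invariant space spanned by shifts of a hat generator:

```python
import torch

torch.manual_seed(0)

def phi(t):
    # hat generator: the linear B-spline, supported on [-1, 1]
    return torch.clamp(1 - t.abs(), min=0)

j = 3                                     # dilation level
coeffs = torch.randn(2 ** j + 1)          # random coefficients c_k

def target(x):
    # f(x) = sum_k c_k * phi(2^j x - k), a function in the dilated space
    return sum(c * phi(2 ** j * x - k) for k, c in enumerate(coeffs))

width, depth = 32, 4                      # hypothetical architecture sizes
layers = [torch.nn.Linear(1, width), torch.nn.ReLU()]
for _ in range(depth - 1):
    layers += [torch.nn.Linear(width, width), torch.nn.ReLU()]
layers += [torch.nn.Linear(width, 1)]
net = torch.nn.Sequential(*layers)

x = torch.rand(2048, 1)                   # training samples on [0, 1]
y = target(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(f"empirical squared L2 error: {loss.item():.2e}")
```

The empirical error shrinks as the assumed width and depth grow, which is exactly the dependence the paper's bounds quantify.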
Generic bounds on the approximation error for physics-informed (and) operator learning
We propose a very general framework for deriving rigorous bounds on the
approximation error for physics-informed neural networks (PINNs) and operator
learning architectures such as DeepONets and FNOs as well as for
physics-informed operator learning. These bounds guarantee that PINNs and
(physics-informed) DeepONets or FNOs will efficiently approximate the
underlying solution or solution operator of generic partial differential
equations (PDEs). Our framework utilizes existing neural network approximation
results to obtain bounds on more involved learning architectures for PDEs. We
illustrate the general framework by deriving the first rigorous bounds on the
approximation error of physics-informed operator learning and by showing that
PINNs (and physics-informed DeepONets and FNOs) mitigate the curse of
dimensionality in approximating nonlinear parabolic PDEs.
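As a concrete illustration (a minimal sketch, not the paper's framework), the quantity such bounds control is the PDE residual of the network; for a 1D heat equation u_t = u_xx, a hypothetical residual loss can be assembled with automatic differentiation:

```python
import torch

# Small fully connected network u_theta(x, t); sizes are hypothetical.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(xt):
    # xt has columns (x, t); returns the residual of u_t - u_xx = 0.
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx

xt = torch.rand(1024, 2)                 # interior collocation points
loss = (pde_residual(xt) ** 2).mean()    # plus boundary/initial terms in practice
loss.backward()
```

In practice this interior term is combined with boundary and initial-condition losses before optimization.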
Error estimates of deep learning methods for the nonstationary Magneto-hydrodynamics equations
In this study, we prove rigorous error bounds and carry out a stability
analysis for deep learning methods applied to the nonstationary
magnetohydrodynamics equations. We establish the approximation ability of the
neural network through the convergence of the loss function and the convergence
of the Deep Neural Network (DNN) to the exact solution. Moreover, we derive
explicit error estimates for the solution computed by minimizing the loss
function in the DNN approximation.
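A sketch of how such an estimate might be used (with a hypothetical stability constant and a stand-in residual function, not the paper's constants): after optimization, the residual norm of the trained DNN yields an a posteriori error indicator:

```python
import torch

def residual_norm(residual_fn, dim=2, n_samples=4096):
    # Monte Carlo estimate of the L2 norm of the PDE residual on [0, 1]^dim
    pts = torch.rand(n_samples, dim)
    return (residual_fn(pts) ** 2).mean().sqrt()

C_stability = 1.0                          # hypothetical stability constant
residual_fn = lambda pts: 1e-3 * pts.sum(dim=1, keepdim=True)  # stand-in residual
error_estimate = C_stability * residual_norm(residual_fn)
print(f"a posteriori error estimate: {error_estimate.item():.2e}")
```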
Approximation bounds for convolutional neural networks in operator learning
Recently, deep Convolutional Neural Networks (CNNs) have proven to be
successful when employed in areas such as reduced order modeling of
parametrized PDEs. Despite their accuracy and efficiency, the approaches
available in the literature still lack a rigorous justification on their
mathematical foundations. Motivated by this fact, in this paper we derive
rigorous error bounds for the approximation of nonlinear operators by means of
CNN models. More precisely, we address the case in which an operator maps a
finite-dimensional input onto a functional output, and a neural network model
is used to approximate a discretized version of the input-to-output map.
The resulting error estimates provide a clear interpretation of the
hyperparameters defining the neural network architecture. All the proofs are
constructive, and they ultimately reveal a deep connection between CNNs and the
Fourier transform. Finally, we complement the derived error bounds by numerical
experiments that illustrate their application.
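For orientation, a minimal sketch of this setting (an assumed architecture, not the paper's): a CNN decoder maps a finite-dimensional parameter vector to a function sampled on a grid, and its circular convolutions are diagonalized by the FFT, which is the Fourier connection mentioned above:

```python
import torch

p, n = 4, 64                              # parameter dimension, output grid size

# Hypothetical decoder: lift the parameter vector onto the grid, then apply
# circular (periodic) convolutions, which the FFT diagonalizes.
decoder = torch.nn.Sequential(
    torch.nn.Linear(p, 8 * n),
    torch.nn.Unflatten(1, (8, n)),
    torch.nn.Conv1d(8, 8, kernel_size=5, padding=2, padding_mode="circular"),
    torch.nn.ReLU(),
    torch.nn.Conv1d(8, 1, kernel_size=5, padding=2, padding_mode="circular"),
)

mu = torch.randn(16, p)                   # batch of parameter vectors
u_h = decoder(mu).squeeze(1)              # discretized outputs, shape (16, n)

# Fourier view: a circular convolution acts as pointwise multiplication
# on FFT coefficients, e.g. torch.fft.fft(u_h) exposes that representation.
print(u_h.shape, torch.fft.fft(u_h).shape)
```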