Multilevel Picard approximation algorithm for semilinear partial integro-differential equations and its complexity analysis
In this paper we introduce a multilevel Picard approximation algorithm for
semilinear parabolic partial integro-differential equations (PIDEs). We prove
that the numerical approximation scheme converges to the unique viscosity
solution of the PIDE under consideration. To that end, we derive a Feynman-Kac
representation for the unique viscosity solution of the semilinear PIDE,
extending the classical Feynman-Kac representation for linear PIDEs.
Furthermore, we show that the algorithm does not suffer from the curse of
dimensionality, i.e. the computational complexity of the algorithm is bounded
polynomially in the dimension of the PIDE and the reciprocal of the prescribed accuracy.
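For orientation, such multilevel Picard schemes can be read as recursive Monte Carlo discretizations of the nonlinear Feynman-Kac representation; for the purely Brownian special case, i.e. the semilinear heat PDE $\partial_t u + \tfrac{1}{2}\Delta u + f(u) = 0$ with terminal condition $u(T,\cdot) = g$ and gradient-independent nonlinearity $f$, that representation reads $u(t,x) = \mathbb{E}\bigl[g(x + W_{T-t}) + \int_t^T f(u(s, x + W_{s-t}))\,\mathrm{d}s\bigr]$. The following Python sketch implements the corresponding multilevel Picard recursion for this special case only; it omits the jump part of the PIDE treated in the paper, and all names and parameter choices are illustrative.

    import math
    import numpy as np

    def mlp(n, M, t, x, T, f, g, rng):
        """Multilevel Picard approximation U_{n,M}(t, x) for
        u_t + 0.5*Laplace(u) + f(u) = 0, u(T, .) = g (Brownian case only)."""
        if n == 0:
            return 0.0
        d = x.shape[0]
        dt = T - t
        # Monte Carlo estimate of the terminal-condition (linear) part
        val = 0.0
        for _ in range(M ** n):
            val += g(x + rng.normal(0.0, math.sqrt(dt), size=d))
        val /= M ** n
        # multilevel Picard corrections of the nonlinear part
        for l in range(n):
            acc = 0.0
            for _ in range(M ** (n - l)):
                r = t + dt * rng.uniform()                         # uniformly sampled intermediate time
                y = x + rng.normal(0.0, math.sqrt(r - t), size=d)  # Brownian increment up to time r
                term = f(mlp(l, M, r, y, T, f, g, rng))
                if l > 0:
                    term -= f(mlp(l - 1, M, r, y, T, f, g, rng))
                acc += term
            val += dt * acc / M ** (n - l)
        return val

    # illustrative usage: d = 10, f(u) = u - u**3, g(x) = exp(-|x|^2 / d)
    rng = np.random.default_rng(0)
    f = lambda u: u - u ** 3
    g = lambda x: math.exp(-float(np.dot(x, x)) / x.shape[0])
    print(mlp(3, 2, 0.0, np.zeros(10), 1.0, f, g, rng))

The level-$l$ correction terms reuse the coarser approximation of level $l-1$ as a control variate, and it is this telescoping structure that keeps the total number of samples, and hence the computational cost, polynomial in the dimension and in the reciprocal of the accuracy.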
An overview on deep learning-based approximation methods for partial differential equations
It is one of the most challenging problems in applied mathematics to
approximately solve high-dimensional partial differential equations (PDEs).
Recently, several deep learning-based approximation algorithms for attacking
this problem have been proposed and tested numerically on a number of examples
of high-dimensional PDEs. This has given rise to a lively field of research in
which deep learning-based methods and related Monte Carlo methods are applied
to the approximation of high-dimensional PDEs. In this article we offer an
introduction to this field of research, we review some of the main ideas of
deep learning-based approximation methods for PDEs, we revisit one of the
central mathematical results for deep neural network approximations for PDEs,
and we provide an overview of the recent literature in this area of research.
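To give a concrete flavour of the methods surveyed in such overviews, the sketch below shows one of the simplest deep learning approaches to PDEs, residual minimization in the spirit of physics-informed neural networks and the deep Galerkin method, applied to a semilinear heat equation. It is an illustrative toy example rather than any particular algorithm from the article; the network architecture, the nonlinearity f, the terminal condition g, and the sampling domain are all assumptions.

    import torch

    d, T = 5, 1.0
    f = lambda u: u - u ** 3                                           # assumed nonlinearity
    g = lambda x: torch.exp(-x.pow(2).sum(dim=1, keepdim=True) / d)    # assumed terminal condition

    net = torch.nn.Sequential(
        torch.nn.Linear(d + 1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def residual(t, x):
        """PDE residual u_t + 0.5*Laplace(u) + f(u) for the network u = net(t, x)."""
        t.requires_grad_(True)
        x.requires_grad_(True)
        u = net(torch.cat([t, x], dim=1))
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        lap = sum(
            torch.autograd.grad(u_x[:, i].sum(), x, create_graph=True)[0][:, i:i + 1]
            for i in range(d)
        )
        return u_t + 0.5 * lap + f(u)

    for step in range(2000):
        t = T * torch.rand(128, 1)               # interior collocation points in time
        x = 2.0 * torch.rand(128, d) - 1.0       # ... and in space, here [-1, 1]^d
        xT = 2.0 * torch.rand(128, d) - 1.0      # points for the terminal condition
        terminal = net(torch.cat([torch.full((128, 1), T), xT], dim=1)) - g(xT)
        loss = residual(t, x).pow(2).mean() + terminal.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Other methods discussed in this line of work, such as deep BSDE-type schemes, instead parametrize the solution (and its gradient) along sampled stochastic paths; the common theme of minimizing an empirical loss over network parameters is the same.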
Deep ReLU neural networks overcome the curse of dimensionality when approximating semilinear partial integro-differential equations
In this paper we consider PIDEs with gradient-independent Lipschitz
continuous nonlinearities and prove that deep neural networks with ReLU
activation function can approximate solutions of such semilinear PIDEs without
the curse of dimensionality in the sense that the required number of parameters in
the deep neural networks increases at most polynomially in both the dimension of the corresponding PIDE and the reciprocal of the prescribed accuracy.
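One common way to formalize this kind of statement (the notation below is illustrative, not the paper's exact theorem) is: there exist constants $c, \kappa > 0$ such that for every dimension $d \in \mathbb{N}$ and every accuracy $\varepsilon \in (0,1]$ there is a ReLU DNN $\Phi_{d,\varepsilon}$ with
\[
  \mathcal{P}(\Phi_{d,\varepsilon}) \le c\, d^{\kappa} \varepsilon^{-\kappa}
  \qquad\text{and}\qquad
  \Bigl(\int_{[0,1]^d} \bigl|u_d(T,x) - (\mathcal{R}(\Phi_{d,\varepsilon}))(x)\bigr|^{2}\,\mathrm{d}x\Bigr)^{1/2} \le \varepsilon,
\]
where $\mathcal{P}(\Phi)$ denotes the number of parameters of the network $\Phi$, $\mathcal{R}(\Phi)$ its realization function, and $u_d$ the solution of the $d$-dimensional PIDE.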
Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the $L^p$-sense
Recently, several deep learning (DL) methods for approximating
high-dimensional partial differential equations (PDEs) have been proposed. The
interest that these methods have generated in the literature is in large part
due to simulations which appear to demonstrate that such DL methods have the
capacity to overcome the curse of dimensionality (COD) for PDEs in the sense
that the number of computational operations they require to achieve a certain
approximation accuracy $\varepsilon > 0$ grows at most polynomially in
the PDE dimension $d \in \mathbb{N}$ and the reciprocal of $\varepsilon$. While
there is thus far no mathematical result that proves that any such method
is indeed capable of overcoming the COD, there are now a number of rigorous
results in the literature that show that deep neural networks (DNNs) have the
expressive power to approximate PDE solutions without the COD in the sense that
the number of parameters used to describe the approximating DNN grows at most
polynomially in both the PDE dimension $d \in \mathbb{N}$ and the reciprocal of
the approximation accuracy $\varepsilon > 0$. Roughly speaking, in the literature
it has been proved for every $T > 0$ that solutions $u_d \colon [0,T] \times \mathbb{R}^d \to \mathbb{R}$, $d \in \mathbb{N}$, of semilinear heat PDEs
with Lipschitz continuous nonlinearities can be approximated by DNNs with ReLU
activation at the terminal time in the $L^2$-sense without the COD provided
that the initial value functions $\mathbb{R}^d \ni x \mapsto u_d(0,x) \in \mathbb{R}$, $d \in \mathbb{N}$, can be approximated by ReLU DNNs without the COD. It is
the key contribution of this work to generalize this result by establishing
this statement in the $L^p$-sense for general exponents $p$ and by allowing the
activation function to be more general covering the ReLU, the leaky ReLU, and
the softplus activation functions as special cases.
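For reference, the three activation functions covered by this generalization are
\[
  \operatorname{ReLU}(x) = \max\{x, 0\}, \qquad
  \operatorname{LeakyReLU}_{\gamma}(x) = \max\{x, \gamma x\}\ \ (\gamma \in (0,1)), \qquad
  \operatorname{softplus}(x) = \ln\bigl(1 + e^{x}\bigr),
\]
and, in illustrative notation rather than the paper's exact theorem statement, the $L^p$-version of the result asks for DNNs $\Phi_{d,\varepsilon}$ whose parameter count grows at most polynomially in $d$ and $\varepsilon^{-1}$ and which satisfy
\[
  \Bigl(\int_{\mathbb{R}^d} \bigl|u_d(T,x) - (\mathcal{R}(\Phi_{d,\varepsilon}))(x)\bigr|^{p}\,\nu_d(\mathrm{d}x)\Bigr)^{1/p} \le \varepsilon
\]
with respect to suitable probability measures $\nu_d$ on $\mathbb{R}^d$.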