
    Multilevel Picard approximation algorithm for semilinear partial integro-differential equations and its complexity analysis

    In this paper we introduce a multilevel Picard approximation algorithm for semilinear parabolic partial integro-differential equations (PIDEs). We prove that the numerical approximation scheme converges to the unique viscosity solution of the PIDE under consideration. To that end, we derive a Feynman-Kac representation for the unique viscosity solution of the semilinear PIDE, extending the classical Feynman-Kac representation for linear PIDEs. Furthermore, we show that the algorithm does not suffer from the curse of dimensionality, i.e., the computational complexity of the algorithm is bounded polynomially in the dimension $d$ and the reciprocal of the prescribed accuracy $\varepsilon$.
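    The multilevel Picard recursion itself is not reproduced in this abstract. As a rough illustration of the idea, the following is a minimal sketch of a plain (jump-free) multilevel Picard approximation for a semilinear heat PDE $u_t + \tfrac12\Delta u + f(u) = 0$ with terminal condition $g$; the integro (jump) part handled in the paper is omitted, and all names and parameter choices below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def mlp(t, x, n, M, T, f, g, rng):
    """Sketch of the multilevel Picard approximation U_{n,M}(t, x) for the
    semilinear heat PDE u_t + (1/2)*Laplace(u) + f(u) = 0 with u(T, .) = g.
    The jump/integro part of the PIDE is omitted in this simplified sketch."""
    if n == 0:
        return 0.0
    d = x.shape[0]
    # Monte Carlo estimate of the linear Feynman-Kac part E[g(x + W_{T-t})]
    dW = rng.normal(0.0, np.sqrt(T - t), size=(M ** n, d))
    result = np.mean(g(x + dW))
    # Multilevel correction terms with uniformly sampled intermediate times
    for l in range(n):
        num = M ** (n - l)
        acc = 0.0
        for _ in range(num):
            r = t + (T - t) * rng.uniform()
            w = x + rng.normal(0.0, np.sqrt(r - t), size=d)
            acc += f(mlp(r, w, l, M, T, f, g, rng))
            if l > 0:
                acc -= f(mlp(r, w, l - 1, M, T, f, g, rng))
        result += (T - t) * acc / num
    return result

# Toy usage in dimension d = 10 (illustrative nonlinearity and terminal condition)
rng = np.random.default_rng(0)
g = lambda x: np.exp(-np.sum(x ** 2, axis=-1))
f = lambda u: u - u ** 3
print(mlp(0.0, np.zeros(10), n=3, M=3, T=1.0, f=f, g=g, rng=rng))
```

    The nested Monte Carlo levels reuse coarse approximations inside finer ones; it is this structure that the paper's complexity analysis bounds polynomially in $d$ and $1/\varepsilon$.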

    An overview on deep learning-based approximation methods for partial differential equations

    It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs). Recently, several deep learning-based approximation algorithms for attacking this problem have been proposed and tested numerically on a number of examples of high-dimensional PDEs. This has given rise to a lively field of research in which deep learning-based methods and related Monte Carlo methods are applied to the approximation of high-dimensional PDEs. In this article we offer an introduction to this field of research, we review some of the main ideas of deep learning-based approximation methods for PDEs, we revisit one of the central mathematical results for deep neural network approximations for PDEs, and we provide an overview of the recent literature in this area of research.
    Comment: 23 pages
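    One of the ideas recurring in this literature is to recast a PDE as a learning problem. For a linear Kolmogorov (heat) equation, the Feynman-Kac formula turns the solution at a fixed time into a conditional expectation, which a network can fit by regression on simulated Brownian endpoints. The sketch below follows that reading; the framework (PyTorch), the architecture, and the training details are illustrative assumptions, not taken from the article.

```python
import torch

# Heat equation u_t = (1/2)*Laplace(u) with u(0, .) = phi, so by Feynman-Kac
# u(T, x) = E[phi(x + W_T)].  Fit a network to this expectation by regression
# on sampled Brownian endpoints (hyperparameters are illustrative choices).
d, T = 10, 1.0
phi = lambda x: torch.exp(-x.pow(2).sum(dim=-1, keepdim=True))

net = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, d) * 2 - 1              # sample points in [-1, 1]^d
    w = torch.randn(256, d) * T ** 0.5          # Brownian increments W_T
    loss = (net(x) - phi(x + w)).pow(2).mean()  # Monte Carlo regression loss
    opt.zero_grad(); loss.backward(); opt.step()
```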

    Deep ReLU neural networks overcome the curse of dimensionality when approximating semilinear partial integro-differential equations

    In this paper we consider PIDEs with gradient-independent Lipschitz continuous nonlinearities and prove that deep neural networks with ReLU activation function can approximate solutions of such semilinear PIDEs without the curse of dimensionality, in the sense that the required number of parameters in the deep neural networks increases at most polynomially in both the dimension $d$ of the corresponding PIDE and the reciprocal of the prescribed accuracy $\epsilon$.
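    The complexity measure behind such statements is the number of real parameters (weights and biases) of the approximating ReLU network. A minimal sketch of what "at most polynomial in $d$" means for a fully connected architecture (the widths below are chosen purely for illustration and are not the paper's construction):

```python
def num_params(widths):
    """Total number of weights and biases of a fully connected ReLU network
    with the given layer widths."""
    return sum(w_in * w_out + w_out for w_in, w_out in zip(widths[:-1], widths[1:]))

for d in (10, 100, 1000):
    widths = [d, 2 * d, 2 * d, 1]   # width scaling linearly with the dimension d
    print(d, num_params(widths))    # grows like O(d^2), i.e. polynomially in d
```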

    Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the $L^p$-sense

    Recently, several deep learning (DL) methods for approximating high-dimensional partial differential equations (PDEs) have been proposed. The interest that these methods have generated in the literature is in large part due to simulations which appear to demonstrate that such DL methods have the capacity to overcome the curse of dimensionality (COD) for PDEs in the sense that the number of computational operations they require to achieve a certain approximation accuracy $\varepsilon\in(0,\infty)$ grows at most polynomially in the PDE dimension $d\in\mathbb{N}$ and the reciprocal of $\varepsilon$. While there is thus far no mathematical result that proves that one of such methods is indeed capable of overcoming the COD, there are now a number of rigorous results in the literature that show that deep neural networks (DNNs) have the expressive power to approximate PDE solutions without the COD in the sense that the number of parameters used to describe the approximating DNN grows at most polynomially in both the PDE dimension $d\in\mathbb{N}$ and the reciprocal of the approximation accuracy $\varepsilon>0$. Roughly speaking, in the literature it has been proved for every $T>0$ that solutions $u_d\colon [0,T]\times\mathbb{R}^d\to\mathbb{R}$, $d\in\mathbb{N}$, of semilinear heat PDEs with Lipschitz continuous nonlinearities can be approximated by DNNs with ReLU activation at the terminal time in the $L^2$-sense without the COD, provided that the initial value functions $\mathbb{R}^d\ni x\mapsto u_d(0,x)\in\mathbb{R}$, $d\in\mathbb{N}$, can be approximated by ReLU DNNs without the COD. It is the key contribution of this work to generalize this result by establishing this statement in the $L^p$-sense with $p\in(0,\infty)$ and by allowing the activation function to be more general, covering the ReLU, the leaky ReLU, and the softplus activation functions as special cases.
    Comment: 52 pages
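    For concreteness, the three activation functions covered by the result, together with a Monte Carlo estimate of an $L^p$ approximation error, can be written down in a few lines. The sampling measure (uniform on $[0,1]^d$) and all names below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

relu       = lambda x: np.maximum(x, 0.0)
leaky_relu = lambda x, a=0.01: np.where(x > 0, x, a * x)
softplus   = lambda x: np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))  # stable log(1 + e^x)

def lp_error(u, u_approx, d, p, n_samples=100_000, seed=0):
    """(E|u(X) - u_approx(X)|^p)^(1/p) estimated by Monte Carlo for X uniform on [0,1]^d."""
    x = np.random.default_rng(seed).uniform(size=(n_samples, d))
    return np.mean(np.abs(u(x) - u_approx(x)) ** p) ** (1.0 / p)
```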