39 research outputs found

    Stochastic finite differences for elliptic diffusion equations in stratified domains

    We describe Monte Carlo algorithms to solve elliptic partial differential equations with piecewise constant diffusion coefficients and general boundary conditions, including Robin and transmission conditions as well as a damping term. The boundary conditions are treated via stochastic finite difference techniques, which possess a higher order than the usual methods. The simulation of Brownian paths inside the domain relies on variations of the walk on spheres method, with or without killing. We check numerically the efficiency of our algorithms on various examples of diffusion equations illustrating each of the new techniques introduced here.
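    The basic walk on spheres idea the abstract builds on can be illustrated with a minimal sketch (plain Python; function names and parameters are illustrative, and this implements only the classical walk on spheres for the Laplace equation with Dirichlet data, not the paper's stochastic finite-difference boundary treatment or transmission conditions):

```python
import math
import random

def walk_on_spheres(x, y, boundary_dist, boundary_val, eps=1e-4, rng=random):
    # One walk-on-spheres trajectory for the Laplace equation Δu = 0:
    # repeatedly jump to a uniform point on the largest circle centred at
    # the current position that fits inside the domain, until within eps
    # of the boundary; then score the boundary value there.
    while True:
        r = boundary_dist(x, y)
        if r < eps:
            return boundary_val(x, y)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def estimate(x, y, boundary_dist, boundary_val, n=20000, seed=0):
    # Monte Carlo average over independent walks estimates u(x, y).
    rng = random.Random(seed)
    total = sum(walk_on_spheres(x, y, boundary_dist, boundary_val, rng=rng)
                for _ in range(n))
    return total / n

# Example: unit disk with boundary data g(x, y) = x, whose harmonic
# extension is u(x, y) = x, so the exact value at (0.3, 0.2) is 0.3.
dist = lambda x, y: 1.0 - math.hypot(x, y)   # distance to the unit circle
g = lambda x, y: x                           # Dirichlet boundary data
u_mc = estimate(0.3, 0.2, dist, g)
```

    The jump radius is the distance to the boundary, so each step is exact in distribution for Brownian exit from the inscribed circle; only the eps-shell termination introduces bias.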

    DeepMartNet -- A Martingale based Deep Neural Network learning algorithm for Eigenvalue Problems in High Dimensions

    In this paper, we propose a neural network learning algorithm for finding eigenvalues and eigenfunctions of elliptic operators in high dimensions, using the martingale property in the stochastic representation of the eigenvalue problem. A loss function based on the martingale property allows efficient optimization by sampling the stochastic processes associated with the elliptic operators. The proposed algorithm can be used for Dirichlet, Neumann, and Robin eigenvalue problems in bounded or unbounded domains.
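    The martingale property underlying such a loss can be checked numerically in one dimension (a sketch with our own names and parameters, not the paper's DeepMartNet training code): for the Dirichlet eigenproblem -(1/2)u'' = λu on (0, 1), the process exp(λt)u(B_t), with the Brownian path killed on leaving the interval, is a martingale exactly when (λ, u) is a true eigenpair, so its sampled expectation at time T matches u(x0) only then.

```python
import math
import random

def martingale_residual(lam, u, x0=0.5, T=0.2, dt=1e-4, n=3000, seed=1):
    # For -(1/2) u'' = lam * u on (0, 1) with u = 0 on the boundary, the
    # stopped process M_t = exp(lam * t) * u(B_t) is a martingale exactly
    # for a true eigenpair, so E[M_T] - u(x0) should be near zero.
    rng = random.Random(seed)
    steps, sd = round(T / dt), math.sqrt(dt)
    acc = 0.0
    for _ in range(n):
        x, alive = x0, True
        for _ in range(steps):
            x += sd * rng.gauss(0.0, 1.0)
            if not 0.0 < x < 1.0:      # path killed: contributes u = 0
                alive = False
                break
        if alive:
            acc += math.exp(lam * T) * u(x)
    return acc / n - u(x0)

# True principal eigenpair of -(1/2) u'' = lam * u with u(0) = u(1) = 0.
u1 = lambda x: math.sin(math.pi * x)
lam1 = math.pi ** 2 / 2
res_good = martingale_residual(lam1, u1)        # close to zero
res_bad = martingale_residual(0.8 * lam1, u1)   # clearly away from zero
```

    In the learning algorithm the abstract describes, a residual of this kind is turned into a loss and minimized over a neural network u and scalar λ; here both are fixed so that only the martingale criterion itself is demonstrated.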

    An overview on deep learning-based approximation methods for partial differential equations

    It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs). Recently, several deep learning-based approximation algorithms for attacking this problem have been proposed and tested numerically on a number of examples of high-dimensional PDEs. This has given rise to a lively field of research in which deep learning-based methods and related Monte Carlo methods are applied to the approximation of high-dimensional PDEs. In this article we offer an introduction to this field of research, review some of the main ideas of deep learning-based approximation methods for PDEs, revisit one of the central mathematical results for deep neural network approximations for PDEs, and provide an overview of the recent literature in this area of research.
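    A minimal example of the "related Monte Carlo methods" mentioned above is the Feynman-Kac representation, which estimates a PDE solution pointwise without any grid and so sidesteps the curse of dimensionality (a sketch under assumed example data; d, T, and the terminal condition are our choices): for the heat equation u_t + (1/2)Δu = 0 with terminal condition u(T, ·) = g, the solution is u(0, x) = E[g(x + W_T)].

```python
import math
import random

def heat_mc(x, T, g, n=5000, seed=2):
    # Feynman-Kac Monte Carlo for u_t + (1/2) Δu = 0, u(T, .) = g:
    # u(0, x) = E[g(x + W_T)], estimated by averaging over n Brownian
    # increments W_T ~ N(0, T * I_d).  Works in any dimension d.
    rng = random.Random(seed)
    sd = math.sqrt(T)
    total = 0.0
    for _ in range(n):
        total += g([xi + sd * rng.gauss(0.0, 1.0) for xi in x])
    return total / n

# d = 100 with g(x) = |x|^2, whose exact solution is |x|^2 + d * T:
# here |x0|^2 = 1.0 and d * T = 50.0, so u(0, x0) = 51.0 exactly.
d, T = 100, 0.5
x0 = [0.1] * d
g = lambda x: sum(xi * xi for xi in x)
u_mc = heat_mc(x0, T, g)
```

    Deep learning-based solvers in this field typically combine such stochastic representations with a neural network ansatz to recover the solution on a whole region rather than at a single point.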

    Monte Carlo guided Diffusion for Bayesian linear inverse problems

    Ill-posed linear inverse problems that combine knowledge of the forward measurement model with prior models arise frequently in various applications, from computational photography to medical imaging. Recent research has focused on solving these problems with score-based generative models (SGMs) that produce perceptually plausible images, especially in inpainting problems. In this study, we exploit the particular structure of the prior defined by the SGM to formulate recovery in a Bayesian framework as a Feynman--Kac model adapted from the forward diffusion model used to construct the score-based diffusion. To solve this Feynman--Kac problem, we propose the use of Sequential Monte Carlo methods. The proposed algorithm, MCGdiff, is shown to be theoretically grounded, and we provide numerical simulations showing that it outperforms competing baselines when dealing with ill-posed inverse problems.
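    The Sequential Monte Carlo machinery can be illustrated on a toy linear-Gaussian inverse problem where the posterior is known in closed form (a generic importance-sampling-plus-resampling sketch with illustrative names, not the MCGdiff algorithm or its diffusion prior):

```python
import math
import random

def smc_linear_gaussian(y, a, sigma, n=20000, seed=3):
    # One sequential importance sampling / resampling step for the Bayesian
    # linear inverse problem y = a * x + noise, noise ~ N(0, sigma^2), with
    # prior x ~ N(0, 1): draw particles from the prior, weight them by the
    # likelihood, resample, and return the posterior-mean estimate.
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    w = [math.exp(-0.5 * ((y - a * x) / sigma) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]                        # normalised weights
    resampled = rng.choices(particles, weights=w, k=n)  # multinomial resampling
    return sum(resampled) / n

# Conjugacy gives the exact Gaussian posterior mean a * y / (a^2 + sigma^2),
# against which the particle approximation can be checked.
y, a, sigma = 1.2, 2.0, 0.5
mean_mc = smc_linear_gaussian(y, a, sigma)
mean_exact = a * y / (a ** 2 + sigma ** 2)
```

    In MCGdiff the single prior-to-posterior step above is replaced by a whole sequence of reweighting and resampling steps following the Feynman--Kac model built from the diffusion, but the particle mechanics are of this form.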

    DIAS Research Report 2011
