Stochastic finite differences for elliptic diffusion equations in stratified domains
We describe Monte Carlo algorithms to solve elliptic partial differential equations with piecewise constant diffusion coefficients and general boundary conditions, including Robin and transmission conditions as well as a damping term. The boundary conditions are treated via stochastic finite difference techniques, which attain a higher order than the usual methods. The simulation of Brownian paths inside the domain relies on variations of the walk on spheres method, with or without killing. We check numerically the efficiency of our algorithms on various examples of diffusion equations illustrating each of the new techniques introduced here.
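The walk on spheres method mentioned above can be illustrated on the simplest case, the Laplace equation with Dirichlet data on the unit disk: from the current point, jump to a uniform point on the largest sphere contained in the domain, and stop once within a small tolerance of the boundary. A minimal sketch (my own toy setup, not the paper's algorithm for piecewise constant coefficients or Robin/transmission conditions):

```python
import numpy as np

def walk_on_spheres(x0, dist_to_boundary, boundary_value, eps=1e-3, rng=None):
    """One walk-on-spheres sample for the Dirichlet Laplace problem."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    while True:
        r = dist_to_boundary(x)
        if r < eps:                       # close enough to the boundary: stop
            return boundary_value(x)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = x + r * np.array([np.cos(theta), np.sin(theta)])

# unit disk: distance to the boundary is 1 - |x|
dist = lambda x: 1.0 - np.linalg.norm(x)
g = lambda x: x[0]   # boundary data; its harmonic extension is u(x, y) = x

rng = np.random.default_rng(0)
x0 = (0.3, 0.4)
est = np.mean([walk_on_spheres(x0, dist, g, rng=rng) for _ in range(5000)])
print(est)   # should be close to u(x0) = 0.3
```

Since harmonic functions satisfy the mean value property exactly on each sphere, the only bias comes from the stopping tolerance `eps`; the paper's contribution concerns the harder boundary conditions (Robin, transmission) where this simple stopping rule no longer suffices.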
DeepMartNet -- A Martingale based Deep Neural Network learning algorithm for Eigenvalue Problems in High Dimensions
In this paper, we propose a neural network learning algorithm for finding eigenvalues and eigenfunctions of elliptic operators in high dimensions using the martingale property in the stochastic representation of the eigenvalue problem. A loss function based on the martingale property can be optimized efficiently by sampling the stochastic processes associated with the elliptic operators. The proposed algorithm can be used for Dirichlet, Neumann, and Robin eigenvalue problems in bounded or unbounded domains.
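The martingale criterion underlying such a loss can be checked in one dimension. For the generator A = (1/2) d²/dx², the function u(x) = sin(x) satisfies Au = λu with λ = -1/2, so M_t = u(X_t) - λ ∫₀ᵗ u(X_s) ds is a martingale along Brownian paths, and E[M_T - M_0] vanishes only at the true eigenvalue. A sketch of this criterion (the 1D setup and all names are my own illustration, not the paper's network or loss):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20_000, 20, 0.01
x0 = np.pi / 2
u = np.sin   # exact eigenfunction of (1/2) d^2/dx^2 with eigenvalue -1/2

# simulate Brownian paths and accumulate the time integral of u(X_s)
x = np.full(n_paths, x0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += u(x) * dt                            # left-point quadrature
    x += np.sqrt(dt) * rng.standard_normal(n_paths)

def martingale_defect(lam):
    """Monte Carlo estimate of E[M_T - M_0] for a candidate eigenvalue."""
    return np.mean(u(x) - u(x0) - lam * integral)

print(martingale_defect(-0.5))  # near zero at the true eigenvalue
print(martingale_defect(0.0))   # clearly nonzero for a wrong eigenvalue
```

In the algorithm itself u would be a neural network and the defect (over many time windows) would serve as the training loss, driving both the network and the eigenvalue estimate; this sketch only shows why the defect identifies λ.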
An overview on deep learning-based approximation methods for partial differential equations
It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs). Recently, several deep learning-based approximation algorithms for attacking this problem have been proposed and tested numerically on a number of examples of high-dimensional PDEs. This has given rise to a lively field of research in which deep learning-based methods and related Monte Carlo methods are applied to the approximation of high-dimensional PDEs. In this article we offer an introduction to this field of research: we review some of the main ideas of deep learning-based approximation methods for PDEs, we revisit one of the central mathematical results for deep neural network approximations for PDEs, and we provide an overview of the recent literature in this area of research.
Monte Carlo guided Diffusion for Bayesian linear inverse problems
Ill-posed linear inverse problems that combine knowledge of the forward measurement model with prior models arise frequently in various applications, from computational photography to medical imaging. Recent research has focused on solving these problems with score-based generative models (SGMs) that produce perceptually plausible images, especially in inpainting problems. In this study, we exploit the particular structure of the prior defined in the SGM to formulate recovery in a Bayesian framework as a Feynman--Kac model adapted from the forward diffusion model used to construct score-based diffusion. To solve this Feynman--Kac problem, we propose the use of Sequential Monte Carlo methods. The proposed algorithm, MCGdiff, is shown to be theoretically grounded, and we provide numerical simulations showing that it outperforms competing baselines when dealing with ill-posed inverse problems.
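Sequential Monte Carlo for a Feynman--Kac model follows a propagate/weight/resample cycle. A minimal bootstrap particle filter on a 1D linear-Gaussian state-space model shows the mechanics, with the exact Kalman filter as a check (this is a generic SMC sketch, not the MCGdiff algorithm, whose Feynman--Kac model is built from the score-based diffusion):

```python
import numpy as np

rng = np.random.default_rng(1)
a, sig_x, sig_y, T = 0.9, 0.5, 0.4, 30

# simulate data from x_t = a x_{t-1} + noise,  y_t = x_t + noise
y = np.zeros(T)
x = 0.0
for t in range(T):
    x = a * x + sig_x * rng.standard_normal()
    y[t] = x + sig_y * rng.standard_normal()

# bootstrap particle filter: propagate, weight by likelihood, resample
N = 5000
particles = np.zeros(N)
means_pf = np.zeros(T)
for t in range(T):
    particles = a * particles + sig_x * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    means_pf[t] = w @ particles
    particles = particles[rng.choice(N, N, p=w)]

# exact Kalman filter for comparison
m, P = 0.0, 0.0
means_kf = np.zeros(T)
for t in range(T):
    m, P = a * m, a * a * P + sig_x ** 2
    K = P / (P + sig_y ** 2)
    m, P = m + K * (y[t] - m), (1 - K) * P
    means_kf[t] = m

print(np.max(np.abs(means_pf - means_kf)))  # small Monte Carlo error
```

In MCGdiff the role of the dynamics is played by the (reversed) diffusion and the weights encode the linear measurement model, but the propagate/weight/resample structure is the same.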
Statistical inference and computation in elliptic PDE models
Partial differential equations (PDE) are ubiquitous in describing real-world phenomena. In many statistical models, PDE are used to encode complex relationships between unknown quantities and the observed data. We investigate statistical and computational questions arising in such models, adopting an infinite-dimensional `nonparametric' framework and assuming the observed data are subject to random noise. The main PDE examples are of elliptic or parabolic type.
Chapter 2 investigates the problem of sampling from high-dimensional Bayesian posterior distributions. The main results consist of non-asymptotic computational guarantees for Langevin-type Markov chain Monte Carlo (MCMC) algorithms which scale polynomially in key quantities such as the dimension of the model, the desired precision level, and the number of available statistical measurements. The bounds hold with high probability under the distribution of the data, assuming that certain `local geometric' assumptions are fulfilled and that a good initialiser of the algorithm is available. We study a representative non-linear PDE example where the unknown is a coefficient function in a steady-state Schrödinger equation, and the solution to a corresponding boundary value problem is observed.
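The basic Langevin-type sampler behind these guarantees discretises the diffusion dX = (1/2)∇log π(X) dt + dW, yielding the unadjusted Langevin algorithm (ULA). A minimal sketch on a toy Gaussian posterior (a stand-in for the chapter's high-dimensional PDE-constrained posterior; all choices here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])

def grad_log_post(x):
    # gradient of the log-density of N(mu, I); stands in for the true posterior
    return -(x - mu)

h = 0.05                      # step size: ULA has O(h) bias, no accept/reject step
x = np.zeros(2)               # the role of a "good initialiser" in the chapter
samples = []
for k in range(100_000):
    x = x + 0.5 * h * grad_log_post(x) + np.sqrt(h) * rng.standard_normal(2)
    if k >= 5_000:            # discard burn-in
        samples.append(x.copy())
samples = np.array(samples)
print(samples.mean(axis=0))   # close to mu
```

The chapter's point is that for the PDE posterior, strong log-concavity only holds locally, which is why a good initialiser and `local geometric' assumptions enter the polynomial-time guarantees.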
Chapter 3 studies statistical convergence rates for nonparametric Tikhonov-type estimators, which can also be interpreted as Bayesian maximum a posteriori (MAP) estimators arising from certain Gaussian process priors. The theory is derived in a general setting for non-linear inverse problems and then applied to two examples, the steady-state Schrödinger equation studied in Chapter 2 and a model for the steady-state heat equation. It is shown that the rates obtained are minimax-optimal in prediction loss.
The final Chapter 4 considers a model for scalar diffusion processes with an unknown drift function which is modelled nonparametrically. It is shown that in the low frequency sampling case, when the sample consists of discrete observations of the process at times 0, Δ, 2Δ, ..., nΔ for some fixed sampling distance Δ > 0, under mild regularity assumptions, the model satisfies the local asymptotic normality (LAN) property. The key tools used are regularity estimates and spectral properties for certain parabolic and elliptic PDE related to the generator of the diffusion.