A path-integral approach to Bayesian inference for inverse problems using the semiclassical approximation
We demonstrate how path integrals often used in problems of theoretical
physics can be adapted to provide machinery for performing Bayesian inference
in function spaces. Such inference comes about naturally in the study of
inverse problems of recovering continuous (infinite dimensional) coefficient
functions from ordinary or partial differential equations (ODE, PDE), a problem
which is typically ill-posed. Regularization of these problems using $L^2$ function spaces (Tikhonov regularization) is equivalent to Bayesian probabilistic inference under a Gaussian prior. The Bayesian interpretation of
inverse problem regularization is useful since it allows one to quantify and
characterize error and degree of precision in the solution of inverse problems,
as well as examine assumptions made in solving the problem -- namely whether
the subjective choice of regularization is compatible with prior knowledge.
Using the path-integral formalism, Bayesian inference can be explored through
various perturbative techniques, such as the semiclassical approximation, which
we use in this manuscript. Perturbative path-integral approaches, while
offering alternatives to computational approaches like Markov-Chain-Monte-Carlo
(MCMC), also provide natural starting points for MCMC methods that can be used
to refine approximations.
In this manuscript, we illustrate a path-integral formulation for inverse
problems and demonstrate it on an inverse problem in membrane biophysics as
well as inverse problems in potential theories involving the Poisson equation.
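As a rough sketch of the correspondence the abstract refers to (the notation here is illustrative, not taken from the paper): minimizing a Tikhonov-regularized misfit is the same as maximizing a posterior built from a Gaussian likelihood and a Gaussian prior, and the semiclassical (Laplace) approximation expands the path integral around that maximizer, keeping the quadratic fluctuations.

  J[u] = \frac{1}{2\sigma^2}\,\| y - G[u] \|^2 + \frac{\lambda}{2}\,\| u \|_{L^2}^2,
  \qquad
  \pi(u \mid y) \propto e^{-J[u]},
  \qquad
  u_{\mathrm{MAP}} = \arg\min_u J[u],

  Z = \int \mathcal{D}u \; e^{-J[u]}
  \;\approx\; e^{-J[u_{\mathrm{MAP}}]}\,
  \det\!\Big( \tfrac{1}{2\pi}\, \delta^2 J[u_{\mathrm{MAP}}] \Big)^{-1/2}.

Here $G$ is the forward map, $y$ the data, and $\delta^2 J$ the second variation (Hessian) of the functional at the maximum a posteriori point; the determinant prefactor supplies the Gaussian measure of fluctuations about it.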
Fast Gibbs sampling for high-dimensional Bayesian inversion
Solving ill-posed inverse problems by Bayesian inference has recently
attracted considerable attention. Compared to deterministic approaches, the
probabilistic representation of the solution by the posterior distribution can
be exploited to explore and quantify its uncertainties. In applications where
the inverse solution is subject to further analysis procedures, this can be a
significant advantage. Alongside theoretical progress, various new
computational techniques make it possible to sample very high-dimensional posterior distributions: in [Lucka2012], a Markov chain Monte Carlo (MCMC) posterior sampler was developed for linear inverse problems with $\ell_1$-type priors. In
this article, we extend this single-component Gibbs-type sampler to a wide range of priors used in Bayesian inversion, such as general $\ell_p^q$ priors
with additional hard constraints. Besides a fast computation of the
conditional, single-component densities in an explicit, parameterized form, fast, robust, and exact sampling from these one-dimensional densities is key to obtaining an efficient algorithm. We demonstrate that a generalization of slice sampling can exploit their specific structure for this task and illustrate the performance of the resulting slice-within-Gibbs samplers on different computed
inference in high-dimensional scenarios with certain priors for the first time,
including the inversion of computed tomography (CT) data with the popular
isotropic total variation (TV) prior.
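As a rough, self-contained sketch of the slice-within-Gibbs idea (a generic Python illustration, not the authors' implementation, which samples the explicit one-dimensional conditionals exactly): each coordinate is updated in turn by slice sampling its one-dimensional conditional density under the posterior.

import numpy as np

def slice_sample_1d(logp, x0, w=1.0, max_steps=50, rng=np.random.default_rng()):
    # One univariate slice-sampling update (Neal 2003): stepping-out plus shrinkage.
    logy = logp(x0) + np.log(rng.uniform())      # vertical level defining the slice
    left = x0 - w * rng.uniform()                # randomly positioned initial interval
    right = left + w
    for _ in range(max_steps):                   # step out until the interval brackets the slice
        if logp(left) < logy:
            break
        left -= w
    for _ in range(max_steps):
        if logp(right) < logy:
            break
        right += w
    while True:                                  # shrink the interval until a point lands in the slice
        x1 = rng.uniform(left, right)
        if logp(x1) >= logy:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

def gibbs_slice_sampler(log_post, x_init, n_samples, rng=np.random.default_rng()):
    # Single-component Gibbs sweep: slice-sample each coordinate's 1-D conditional in turn.
    x = np.asarray(x_init, dtype=float).copy()
    samples = np.empty((n_samples, x.size))
    for s in range(n_samples):
        for i in range(x.size):
            def cond_logp(xi, i=i):              # conditional log-density of coordinate i
                x_prop = x.copy()
                x_prop[i] = xi
                return log_post(x_prop)
            x[i] = slice_sample_1d(cond_logp, x[i], rng=rng)
        samples[s] = x
    return samples

# Toy linear inverse problem: Gaussian likelihood with an l1-type (sparsity-promoting) prior.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = A @ x_true + 0.1 * rng.normal(size=20)
log_post = lambda x: -0.5 * np.sum((A @ x - y) ** 2) / 0.1 ** 2 - 10.0 * np.sum(np.abs(x))
samples = gibbs_slice_sampler(log_post, np.zeros(5), 200, rng=rng)
print(samples[-100:].mean(axis=0))               # crude posterior-mean estimate

The generic stepping-out/shrinkage step above is what the paper replaces with exact, parameterized sampling of each conditional; the Gibbs sweep structure is the same.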
Fully probabilistic deep models for forward and inverse problems in parametric PDEs
We introduce a physics-driven deep latent variable model (PDDLVM) to learn
simultaneously parameter-to-solution (forward) and solution-to-parameter
(inverse) maps of parametric partial differential equations (PDEs). Our
formulation leverages conventional PDE discretization techniques, deep neural
networks, probabilistic modelling, and variational inference to assemble a
fully probabilistic coherent framework. In the posited probabilistic model,
both the forward and inverse maps are approximated as Gaussian distributions
with a mean and covariance parameterized by deep neural networks. The PDE
residual is treated as an observed random vector whose value is zero, and is accordingly modelled as a random vector with zero mean and a user-prescribed covariance. The model is trained to maximize the probability (the evidence, or marginal likelihood) of observing a zero residual, which is done by maximizing the evidence lower bound (ELBO). Consequently, the proposed methodology does not require any
independent PDE solves and is physics-informed at training time, allowing the
real-time solution of PDE forward and inverse problems after training. The
proposed framework can be easily extended to seamlessly integrate observed data
to solve inverse problems and to build generative models. We demonstrate the
efficiency and robustness of our method on finite element discretized
parametric PDE problems such as linear and nonlinear Poisson problems, elastic
shells with complex 3D geometries, and time-dependent nonlinear and
inhomogeneous PDEs using a physics-informed neural network (PINN)
discretization. We achieve up to three orders of magnitude speed-up after
training compared to the traditional finite element method (FEM), while outputting coherent uncertainty estimates.
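A rough sketch of the training objective described above (the symbols are illustrative, not the paper's notation): with latent PDE solution $u$ and parameter field $\theta$, a Gaussian likelihood that places the discretized residual $R(u,\theta)$ around zero, and a variational posterior $q(u,\theta)$ parameterized by the forward and inverse networks, the zero-residual evidence is bounded from below by an ELBO that can be maximized with stochastic gradients:

  \log p(r = 0)
  = \log \int p(r = 0 \mid u, \theta)\, p(u, \theta)\, \mathrm{d}u\, \mathrm{d}\theta
  \;\ge\; \mathbb{E}_{q(u,\theta)}\big[ \log p(r = 0 \mid u, \theta) \big]
  - \mathrm{KL}\big( q(u,\theta) \,\|\, p(u,\theta) \big)
  =: \mathrm{ELBO},
  \qquad
  p(r \mid u, \theta) = \mathcal{N}\big( r ;\, R(u, \theta),\, \Sigma_r \big).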
HMCLab: a framework for solving diverse geophysical inverse problems using the Hamiltonian Monte Carlo method
The use of the probabilistic approach to solve inverse problems is becoming
more popular in the geophysical community, thanks to its ability to address
nonlinear forward problems and to provide uncertainty quantification. However,
such a strategy is often tailored to specific applications, and there is therefore a lack of a common platform for solving a range of different geophysical inverse problems and for exposing their potential and pitfalls. We demonstrate a common
framework to solve such inverse problems, ranging, e.g., from earthquake source
location to potential field data inversion and seismic tomography. Within this
approach, we can provide probabilities related to certain properties or
structures of the subsurface. Thanks to its ability to address high-dimensional
problems, the Hamiltonian Monte Carlo (HMC) algorithm has emerged as the
state-of-the-art tool for solving geophysical inverse problems within the
probabilistic framework. HMC requires the computation of gradients, which can
be obtained by adjoint methods, making the solution of tomographic problems
ultimately feasible. These results can be obtained with "HMCLab", a tool for
solving a range of different geophysical inverse problems using sampling
methods, focusing in particular on the HMC algorithm. HMCLab consists of a set
of samplers and a set of geophysical forward problems. For each problem its
misfit function and gradient computation are provided and, in addition, a set
of prior models can be combined to inject additional information into the
inverse problem. This allows users to experiment with probabilistic inverse
problems and also address real-world studies. We show how to solve a selected
set of problems within this framework using variants of the HMC algorithm and
analyze the results. HMCLab is provided as an open source package written both
in Python and Julia, welcoming contributions from the community.
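As a rough, generic sketch of the HMC update such a framework builds on (Python, illustrative only; the names and signatures below are not HMCLab's API): given a misfit function (the negative log posterior) and its gradient, a leapfrog trajectory proposes a new model, which is then accepted or rejected with a Metropolis test.

import numpy as np

def hmc_step(m, misfit, grad, eps=0.05, n_leapfrog=30, rng=np.random.default_rng()):
    # One Hamiltonian Monte Carlo update targeting the density exp(-misfit(m)).
    p = rng.normal(size=m.shape)                 # resample an auxiliary momentum
    m_new, p_new = m.copy(), p.copy()
    p_new -= 0.5 * eps * grad(m_new)             # leapfrog integration of Hamilton's equations
    for _ in range(n_leapfrog - 1):
        m_new += eps * p_new
        p_new -= eps * grad(m_new)
    m_new += eps * p_new
    p_new -= 0.5 * eps * grad(m_new)
    h_old = misfit(m) + 0.5 * p @ p              # Metropolis test on the total-energy change
    h_new = misfit(m_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:
        return m_new, True
    return m, False

# Toy linear "geophysical" problem: Gaussian misfit for a forward operator G and data d.
rng = np.random.default_rng(1)
G = rng.normal(size=(30, 4))
m_true = np.array([2.0, -1.0, 0.5, 3.0])
d = G @ m_true + rng.normal(size=30)
misfit = lambda m: 0.5 * np.sum((G @ m - d) ** 2)
grad = lambda m: G.T @ (G @ m - d)

m, chain = np.zeros(4), []
for _ in range(500):
    m, _ = hmc_step(m, misfit, grad, eps=0.05, n_leapfrog=30, rng=rng)
    chain.append(m)
print(np.mean(chain[100:], axis=0))              # sample-based estimate of the posterior mean

In a tomographic setting the gradient would come from an adjoint computation rather than the closed-form expression used in this toy example.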