The Bayesian Formulation of EIT: Analysis and Algorithms
We provide a rigorous Bayesian formulation of the EIT problem in an infinite
dimensional setting, leading to well-posedness in the Hellinger metric with
respect to the data. We focus particularly on the reconstruction of binary
fields where the interface between different media is the primary unknown. We
consider three different prior models - log-Gaussian, star-shaped and level
set. Numerical simulations based on the implementation of MCMC are performed,
illustrating the advantages and disadvantages of each type of prior in the
reconstruction, in the case where the true conductivity is a binary field, and
exhibiting the properties of the resulting posterior distribution.
Comment: 30 pages, 10 figures
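A minimal sketch of the kind of function-space MCMC used in such reconstructions is a preconditioned Crank-Nicolson (pCN) random walk; the forward map, dimension and noise level below are illustrative assumptions, not the EIT model or the binary-field priors of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10                       # dimension of the discretised field
G = lambda u: np.tanh(u)     # toy nonlinear forward map (stand-in for EIT)
sigma = 0.1                  # observational noise level

u_true = rng.standard_normal(d)
y = G(u_true) + sigma * rng.standard_normal(d)

def misfit(u):
    # negative log-likelihood Phi(u) for Gaussian observational noise
    return 0.5 * np.sum((y - G(u)) ** 2) / sigma**2

beta = 0.2                   # pCN step size
u = np.zeros(d)
phi = misfit(u)
samples = []
for _ in range(5000):
    # the pCN proposal preserves the N(0, I) prior exactly, so the
    # accept/reject step involves only the misfit, not the prior density
    v = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(d)
    phi_v = misfit(v)
    if np.log(rng.uniform()) < phi - phi_v:
        u, phi = v, phi_v
    samples.append(u.copy())

posterior_mean = np.mean(samples, axis=0)
```

Because the proposal is prior-reversible, the acceptance probability stays well defined as the discretisation dimension d grows, which is what makes pCN suitable for infinite-dimensional Bayesian formulations.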
MAP Estimators for Piecewise Continuous Inversion
We study the inverse problem of estimating a field u from data comprising a
finite set of nonlinear functionals of u, subject to additive noise; we
denote this observed data by y. Our interest is in the reconstruction of
piecewise continuous fields in which the discontinuity set is described by a
finite number of geometric parameters. Natural applications include groundwater
flow and electrical impedance tomography. We take a Bayesian approach, placing
a prior distribution on u and determining the conditional distribution on u
given the data y. It is then natural to study maximum a posteriori (MAP)
estimators. Recently (Dashti et al 2013) it has been shown that MAP estimators
can be characterised as minimisers of a generalised Onsager-Machlup functional,
in the case where the prior measure is a Gaussian random field. We extend this
theory to a more general class of prior distributions which allows for
piecewise continuous fields. Specifically, the prior field is assumed to be
piecewise Gaussian with random interfaces between the different Gaussians
defined by a finite number of parameters. We also make connections with recent
work on MAP estimators for linear problems and possibly non-Gaussian priors
(Helin, Burger 2015) which employs the notion of Fomin derivative.
In showing applicability of our theory we focus on the groundwater flow and
EIT models, though the theory holds more generally. Numerical experiments are
implemented for the groundwater flow model, demonstrating the feasibility of
determining MAP estimators for these piecewise continuous models, but also that
the geometric formulation can lead to multiple nearby (local) MAP estimators.
We relate these MAP estimators to the behaviour of output from MCMC samples of
the posterior, obtained using a state-of-the-art function space
Metropolis-Hastings method.
Comment: 53 pages, 21 figures
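Under a Gaussian prior the generalised Onsager-Machlup functional reduces to the data misfit plus half the squared Cameron-Martin norm, so after discretisation MAP estimation becomes a finite-dimensional optimisation. The sketch below uses an assumed toy forward map, not the groundwater or EIT models of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

d = 8
G = lambda u: u + 0.1 * u**3       # assumed toy nonlinear forward map
sigma = 0.05
u_true = rng.standard_normal(d)
y = G(u_true) + sigma * rng.standard_normal(d)

def onsager_machlup(u):
    # I(u) = Phi(u) + 0.5 ||u||^2 for a unit Gaussian prior:
    # data misfit plus half the squared Cameron-Martin norm
    phi = 0.5 * np.sum((y - G(u)) ** 2) / sigma**2
    return phi + 0.5 * np.sum(u**2)

# different starting points can land in different local minimisers,
# mirroring the multiple nearby (local) MAP estimators noted above
res = minimize(onsager_machlup, np.zeros(d), method="BFGS")
u_map = res.x
```

Restarting the optimiser from several draws of the prior is a simple way to probe for the multiple local MAP estimators that the geometric formulation can produce.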
Analysis of the ensemble Kalman filter for inverse problems
The ensemble Kalman filter (EnKF) is a widely used methodology for state
estimation in partial, noisily observed dynamical systems, and for parameter
estimation in inverse problems. Despite its widespread use in the geophysical
sciences, and its gradual adoption in many other areas of application, analysis
of the method is in its infancy. Furthermore, much of the existing analysis
deals with the large ensemble limit, far from the regime in which the method is
typically used. The goal of this paper is to analyze the method when applied to
inverse problems with fixed ensemble size. A continuous-time limit is derived
and the long-time behavior of the resulting dynamical system is studied. Most
of the rigorous analysis is confined to the linear forward problem, where we
demonstrate that the continuous time limit of the EnKF corresponds to a set of
gradient flows for the data misfit in each ensemble member, coupled through a
common pre-conditioner which is the empirical covariance matrix of the
ensemble. Numerical results demonstrate that the conclusions of the analysis
extend beyond the linear inverse problem setting. Numerical experiments are
also given which demonstrate the benefits of various extensions of the basic
methodology.
Gaussian Approximations of Small Noise Diffusions in Kullback-Leibler Divergence
We study Gaussian approximations to the distribution of a diffusion. The
approximations are easy to compute: they are defined by two simple ordinary
differential equations for the mean and the covariance. Time correlations can
also be computed via solution of a linear stochastic differential equation. We
show, using the Kullback-Leibler divergence, that the approximations are
accurate in the small noise regime. An analogous discrete time setting is also
studied. The results provide both theoretical support for the use of Gaussian
processes in the approximation of diffusions, and methodological guidance in
the construction of Gaussian approximations in applications.
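For a scalar diffusion dx = f(x) dt + sqrt(eps) dW, one natural form of the two ODEs is m' = f(m) for the mean and C' = 2 f'(m) C + eps for the covariance, driven by the linearised drift plus the small noise input. The drift below is an assumed double-well example, not taken from the paper.

```python
import numpy as np

eps = 0.01                      # small noise parameter
f  = lambda x: x - x**3         # illustrative double-well drift
df = lambda x: 1 - 3 * x**2     # its derivative

dt, T = 1e-3, 5.0
m, C = 1.5, 0.1                 # initial mean and variance
for _ in range(int(T / dt)):
    # the two simple ODEs: the mean follows the drift, the variance
    # is contracted by the linearised drift and fed by the noise
    m += dt * f(m)
    C += dt * (2 * df(m) * C + eps)
```

Near the stable equilibrium at x = 1 the drift linearisation is f'(1) = -2, so the variance relaxes to the O(eps) value eps/4, consistent with accuracy of the Gaussian approximation in the small noise regime.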
Inverse optimal transport
Discrete optimal transportation problems arise in various contexts in
engineering, the sciences and the social sciences. Often the underlying cost
criterion is unknown, or only partly known, and the observed optimal solutions
are corrupted by noise. In this paper we propose a systematic approach to infer
unknown costs from noisy observations of optimal transportation plans. The
algorithm requires only the ability to solve the forward optimal transport
problem, which is a linear program, and to generate random numbers. It has a
Bayesian interpretation, and may also be viewed as a form of stochastic
optimization.
We illustrate the developed methodologies using the example of international
migration flows. Reported migration flow data captures (noisily) the number of
individuals moving from one country to another in a given period of time. It
can be interpreted as a noisy observation of an optimal transportation map,
with costs related to the geographical position of countries. We use a
graph-based formulation of the problem, with countries at the nodes of graphs
and non-zero weighted adjacencies only on edges between countries which share a
border. We use the proposed algorithm to estimate the weights, which represent
the cost of transition, and to quantify uncertainty in these weights.
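The algorithm's two ingredients, solving the forward optimal transport linear program and generating random numbers, can be sketched as follows. The basis cost matrices, noise scale and random-walk step are illustrative assumptions, and the border-graph structure of the migration application is not modelled here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_ot(cost, a, b):
    """Forward problem: the optimal transport plan as a linear program."""
    n, m = cost.shape
    # marginal constraints: row sums equal a, column sums equal b
    row = np.kron(np.eye(n), np.ones((1, m)))
    col = np.kron(np.ones((1, n)), np.eye(m))
    A_eq = np.vstack([row, col])[:-1]          # drop one redundant constraint
    b_eq = np.concatenate([a, b])[:-1]
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n, m)

rng = np.random.default_rng(3)
n = 4
a = b = np.full(n, 1.0 / n)                    # uniform marginals
C1 = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
C2 = rng.uniform(size=(n, n))                  # assumed basis cost matrices
theta_true = 0.7
P_obs = solve_ot(C1 + theta_true * C2, a, b)   # "observed" plan

# random-walk Metropolis over the unknown cost weight theta
def loglik(theta):
    P = solve_ot(C1 + theta * C2, a, b)
    return -np.sum((P - P_obs) ** 2) / (2 * 0.05**2)

theta, ll = 0.5, loglik(0.5)
chain = []
for _ in range(200):
    prop = theta + 0.1 * rng.standard_normal()
    ll_p = loglik(prop) if prop > 0 else -np.inf
    if np.log(rng.uniform()) < ll_p - ll:
        theta, ll = prop, ll_p
    chain.append(theta)
```

Note that scaling the whole cost matrix leaves the optimal plan unchanged, which is why the sketch infers a mixing weight between two cost matrices rather than an overall scale; this is also one manifestation of costs being only partly identifiable from observed plans.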
Hyperparameter Estimation in Bayesian MAP Estimation: Parameterizations and Consistency
The Bayesian formulation of inverse problems is attractive for three primary
reasons: it provides a clear modelling framework; it offers means for
uncertainty quantification; and it allows for principled learning of hyperparameters. The
posterior distribution may be explored by sampling methods, but for many
problems it is computationally infeasible to do so. In this situation maximum a
posteriori (MAP) estimators are often sought. Whilst these are relatively cheap
to compute, and have an attractive variational formulation, a key drawback is
their lack of invariance under change of parameterization. This is a
particularly significant issue when hierarchical priors are employed to learn
hyperparameters. In this paper we study the effect of the choice of
parameterization on MAP estimators when a conditionally Gaussian hierarchical
prior distribution is employed. Specifically we consider the centred
parameterization, the natural parameterization in which the unknown state is
solved for directly, and the noncentred parameterization, which works with a
whitened Gaussian as the unknown state variable, and arises when considering
dimension-robust MCMC algorithms; MAP estimation is well-defined in the
nonparametric setting only for the noncentred parameterization. However, we
show that MAP estimates based on the noncentred parameterization are not
consistent as estimators of hyperparameters; conversely, we show that limits of
finite-dimensional centred MAP estimators are consistent as the dimension tends
to infinity. We also consider empirical Bayesian hyperparameter estimation,
show consistency of these estimates, and demonstrate that they are more robust
with respect to noise than centred MAP estimates. An underpinning concept
throughout is that hyperparameters may only be recovered up to measure
equivalence, a well-known phenomenon in the context of the Ornstein-Uhlenbeck
process.
Comment: 36 pages, 8 figures
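The contrast between the two parameterizations can be illustrated on a toy conditionally Gaussian white-noise model (an assumption for illustration, not the paper's setting). Profiling out the state analytically, the centred objective retains an n log(tau) term that anchors the hyperparameter near the truth, while in this toy model the noncentred profile objective is monotone decreasing in tau and its minimiser runs off to the edge of any search interval.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau_true, sigma = 200, 2.0, 0.1
# model: u | tau ~ N(0, tau^2 I), y = u + N(0, sigma^2 I) noise
y = tau_true * rng.standard_normal(n) + sigma * rng.standard_normal(n)

taus = np.linspace(0.2, 10.0, 500)

def centred_profile(tau):
    # minimise over the state u analytically (Gaussian shrinkage);
    # the n*log(tau) normalisation term of the prior remains
    u = tau**2 / (tau**2 + sigma**2) * y
    return (np.sum((y - u) ** 2) / (2 * sigma**2)
            + np.sum(u**2) / (2 * tau**2) + n * np.log(tau))

def noncentred_profile(tau):
    # minimise over the whitened variable xi (u = tau * xi);
    # no log(tau) term appears, since xi has a fixed N(0, I) density
    xi = tau * y / (tau**2 + sigma**2)
    return (np.sum((y - tau * xi) ** 2) / (2 * sigma**2)
            + np.sum(xi**2) / 2)

tau_c  = taus[np.argmin([centred_profile(t) for t in taus])]
tau_nc = taus[np.argmin([noncentred_profile(t) for t in taus])]
# tau_c lands near tau_true; tau_nc sits at the upper end of the grid
```

In this toy model the noncentred profile equals ||y||^2 / (2 (tau^2 + sigma^2)), which decreases monotonically in tau, giving a concrete instance of the inconsistency of noncentred hyperparameter MAP estimates described above.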
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since it is often computationally
intensive to implement these methods, especially in high dimensional problems,
approximate techniques such as approximating the posterior by a Dirac or a
Gaussian distribution are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system, and in
the stochastic dynamics case the subsequent states of the system, and the data
comprises partial and noisy observations of that (possibly stochastic)
dynamical system. We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
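The artificial-time interpolation from prior to posterior can be sketched as a tempered sequential Monte Carlo sampler; the scalar toy posterior, temperature ladder and pCN jitter parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy problem: prior N(0, 1), likelihood exp(-Phi) with
# Phi(u) = (y - u)^2 / (2 sigma^2); values are illustrative
y, sigma = 1.0, 0.5
phi = lambda u: (y - u) ** 2 / (2 * sigma**2)

N = 5000
u = rng.standard_normal(N)               # particles drawn from the prior
w = np.full(N, 1.0 / N)

# artificial times 0 = t_0 < t_1 < ... < t_K = 1 define measures
# proportional to exp(-t * Phi) * prior, interpolating prior -> posterior
ts = np.linspace(0.0, 1.0, 11)
for t0, t1 in zip(ts[:-1], ts[1:]):
    w *= np.exp(-(t1 - t0) * phi(u))     # incremental importance weights
    w /= w.sum()
    # resample to equalise weights, then jitter with short pCN moves
    idx = rng.choice(N, size=N, p=w)
    u, w = u[idx], np.full(N, 1.0 / N)
    for _ in range(5):                   # pCN targets the tempered measure
        v = np.sqrt(1 - 0.5**2) * u + 0.5 * rng.standard_normal(N)
        accept = np.log(rng.uniform(size=N)) < t1 * (phi(u) - phi(v))
        u = np.where(accept, v, u)

posterior_mean = u.mean()
```

For this Gaussian toy problem the posterior is available in closed form (mean y / (1 + sigma^2) = 0.8), which makes it a convenient check that the interpolation scheme ends on the correct posterior.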