Likelihood-informed dimension reduction for nonlinear inverse problems
The intrinsic dimensionality of an inverse problem is affected by prior
information, the accuracy and number of observations, and the smoothing
properties of the forward operator. From a Bayesian perspective, changes from
the prior to the posterior may, in many problems, be confined to a relatively
low-dimensional subspace of the parameter space. We present a dimension
reduction approach that defines and identifies such a subspace, called the
"likelihood-informed subspace" (LIS), by characterizing the relative influences
of the prior and the likelihood over the support of the posterior distribution.
This identification enables new and more efficient computational methods for
Bayesian inference with nonlinear forward models and Gaussian priors. In
particular, we approximate the posterior distribution as the product of a
lower-dimensional posterior defined on the LIS and the prior distribution
marginalized onto the complementary subspace. Markov chain Monte Carlo sampling
can then proceed in lower dimensions, with significant gains in computational
efficiency. We also introduce a Rao-Blackwellization strategy that
de-randomizes Monte Carlo estimates of posterior expectations for additional
variance reduction. We demonstrate the efficiency of our methods using two
numerical examples: inference of permeability in a groundwater system governed
by an elliptic PDE, and an atmospheric remote sensing problem based on Global
Ozone Monitoring System (GOMOS) observations.
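The LIS construction can be sketched in its simplest setting. Assuming a linear forward operator G, Gaussian observation noise, and an identity prior (illustrative choices, not the paper's examples), the subspace is spanned by the leading eigenvectors of the prior-preconditioned Gauss-Newton Hessian of the log-likelihood, keeping directions whose eigenvalues exceed one, i.e. where the likelihood dominates the prior:

```python
import numpy as np

# Illustrative sketch: linear forward model, Gaussian noise, identity prior.
rng = np.random.default_rng(0)
n, m = 50, 20                        # parameter and data dimensions
G = rng.standard_normal((m, n))      # hypothetical linear forward operator
Gamma_obs = 0.1 * np.eye(m)          # observation noise covariance
L_pr = np.eye(n)                     # prior covariance factor (identity prior)

# Prior-preconditioned Gauss-Newton Hessian: H = L_pr^T G^T Gamma_obs^{-1} G L_pr
A = np.linalg.solve(Gamma_obs, G @ L_pr)
H = (G @ L_pr).T @ A
eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep directions where the likelihood dominates the prior (eigenvalue > 1)
r = int(np.sum(eigvals > 1.0))
lis_basis = L_pr @ eigvecs[:, :r]    # basis of the likelihood-informed subspace
print(r, lis_basis.shape)
```

Sampling can then be restricted to the r likelihood-informed directions, with the prior used on the complement.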
Langevin and Hamiltonian based Sequential MCMC for Efficient Bayesian Filtering in High-dimensional Spaces
Nonlinear non-Gaussian state-space models arise in numerous applications in
statistics and signal processing. In this context, one of the most successful
and popular approximation techniques is the Sequential Monte Carlo (SMC)
algorithm, also known as particle filtering. Nevertheless, this method tends to
be inefficient when applied to high dimensional problems. In this paper, we
focus on another class of sequential inference methods, namely the Sequential
Markov Chain Monte Carlo (SMCMC) techniques, which represent a promising
alternative to SMC methods. After providing a unifying framework for the class
of SMCMC approaches, we propose novel efficient strategies based on the
principle of Langevin diffusion and Hamiltonian dynamics in order to cope with
the increasing number of high-dimensional applications. Simulation results show
that the proposed algorithms achieve significantly better performance compared
to existing algorithms.
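The gradient-informed proposals the abstract advocates can be illustrated, outside the sequential setting, by a single Metropolis-adjusted Langevin (MALA) update on a toy high-dimensional target; this is a minimal sketch of the Langevin-diffusion idea, not the paper's SMCMC algorithm:

```python
import numpy as np

# MALA step for a standard Gaussian target, so grad log pi(x) = -x.
def mala_step(x, step, rng):
    grad = -x                                    # gradient of the log-density
    prop = x + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(x.shape)

    def log_q(a, b):                             # log proposal density q(a | b)
        mu = b + 0.5 * step * (-b)
        return -np.sum((a - mu) ** 2) / (2.0 * step)

    log_pi = lambda z: -0.5 * np.sum(z ** 2)
    log_alpha = log_pi(prop) + log_q(x, prop) - log_pi(x) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(1)
x = np.zeros(100)                                # 100-dimensional state
for _ in range(500):
    x = mala_step(x, step=0.1, rng=rng)
print(np.mean(x), np.std(x))
```

The drift term pushes proposals toward high-density regions, which is what lets Langevin and Hamiltonian proposals scale better with dimension than random-walk moves.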
Monte Carlo Simulation of Quantum Computation
The many-body dynamics of a quantum computer can be reduced to the time
evolution of non-interacting quantum bits in auxiliary fields by use of the
Hubbard-Stratonovich representation of two-bit quantum gates in terms of
one-bit gates. This makes it possible to perform the stochastic simulation of a
quantum algorithm, based on the Monte Carlo evaluation of an integral of
dimension polynomial in the number of quantum bits. As an example, the
simulation of the quantum circuit for the Fast Fourier Transform is discussed.
Comment: 12 pages LaTeX, 2 PostScript figures, to appear in Proceedings of the IMACS (International Association for Mathematics and Computers in Simulation) Conference on Monte Carlo Methods, Brussels, April 9
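The scaling claim, a Monte Carlo average over an integral whose dimension grows only polynomially in the number of qubits, can be illustrated generically; this is not the paper's Hubbard-Stratonovich construction, just a plain Monte Carlo integral whose error decays as 1/sqrt(N) independently of dimension:

```python
import math
import numpy as np

# Estimate the d-dimensional integral of exp(-|x|^2) over [0,1]^d by
# averaging the integrand at uniform samples; the exact value is the
# d-th power of the 1-D integral, (sqrt(pi)/2) * erf(1).
rng = np.random.default_rng(2)
d, N = 12, 200_000
x = rng.uniform(size=(N, d))
estimate = float(np.mean(np.exp(-np.sum(x ** 2, axis=1))))
exact = (math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** d
print(estimate, exact)
```

The statistical error is governed by the sample count N, not by d, which is why a polynomial-dimensional integral representation suffices for a tractable stochastic simulation.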
Scalable iterative methods for sampling from massive Gaussian random vectors
Sampling from Gaussian Markov random fields (GMRFs), that is, multivariate Gaussian random vectors parameterised by the inverse of their covariance matrix, is a fundamental problem in computational statistics. In this paper, we show how arbitrarily accurate approximations to a GMRF can be exploited to speed up Krylov subspace sampling methods. We also show that these methods can be used to compute the normalising constant of a large multivariate Gaussian distribution, which is needed for any likelihood-based inference method. The method we derive is also applicable to other structured Gaussian random vectors and, in particular, we show that when the precision matrix is a perturbation of a (block) circulant matrix, it is still possible to derive O(n log n) sampling schemes.
Comment: 17 pages, 4 figures
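As a point of reference for what the Krylov methods accelerate, here is the standard direct GMRF sampler via a Cholesky factor of the precision matrix, using a hypothetical tridiagonal precision for illustration (the Krylov approach replaces the factorisation with matrix-vector products):

```python
import numpy as np

# Sample x ~ N(0, Q^{-1}) from a precision matrix Q via Q = L L^T and the
# triangular solve L^T x = z, so Cov(x) = L^{-T} L^{-1} = Q^{-1}.
rng = np.random.default_rng(3)
n = 200
# Hypothetical precision: first-order random-walk structure plus a nugget
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
L = np.linalg.cholesky(Q)
z = rng.standard_normal(n)
x = np.linalg.solve(L.T, z)          # one exact sample with covariance Q^{-1}

# Sanity check: empirical marginal variances match diag(Q^{-1})
Z = rng.standard_normal((n, 5000))
X = np.linalg.solve(L.T, Z)
emp_var = X.var(axis=1)
true_var = np.diag(np.linalg.inv(Q))
print(float(np.max(np.abs(emp_var - true_var))))
```

The Cholesky factorisation is what becomes prohibitive for massive n; iterative Krylov samplers need only products Qv, which is the efficiency gain the abstract describes.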