Approximate Bayesian computation (ABC) gives exact results under the assumption of model error
Approximate Bayesian computation (ABC) or likelihood-free inference
algorithms are used to find approximations to posterior distributions without
making explicit use of the likelihood function, depending instead on simulation
of sample data sets from the model. In this paper we show that under the
assumption of the existence of a uniform additive model error term, ABC
algorithms give exact results when sufficient summaries are used. This
interpretation allows the approximation made in many previous application
papers to be understood, and should guide the choice of metric and tolerance in
future work. ABC algorithms can be generalized by replacing the 0-1 cut-off
with an acceptance probability that varies with the distance of the simulated
data from the observed data. The acceptance density gives the distribution of
the error term, enabling the uniform error usually used to be replaced by a
general distribution. This generalization can also be applied to approximate
Markov chain Monte Carlo algorithms. In light of this work, ABC algorithms can
be seen as calibration techniques for implicit stochastic models, inferring
parameter values in light of the computer model, data, prior beliefs about the
parameter values, and any measurement or model errors.
Comment: 33 pages, 1 figure, to appear in Statistical Applications in Genetics
and Molecular Biology 201
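The two variants described in the abstract above can be sketched side by side. Here is a minimal toy example (hypothetical model and parameter choices, not from the paper): standard rejection ABC with a 0-1 cut-off on the distance between summaries, and the generalized version in which acceptance happens with a probability that decays with that distance, corresponding to a Gaussian rather than uniform error term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative only): data are n i.i.d. N(theta, 1) draws;
# the summary statistic is the sample mean, which is sufficient here.
n = 50
theta_true = 2.0
observed = rng.normal(theta_true, 1.0, size=n)
s_obs = observed.mean()

def simulate_summary(theta):
    """Simulate a data set from the model and return its summary."""
    return rng.normal(theta, 1.0, size=n).mean()

def abc_rejection(n_samples, tol):
    """Standard ABC: accept theta iff |s_sim - s_obs| <= tol (0-1 cut-off),
    i.e. an implicit uniform additive error on the summary."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-5, 5)          # draw from a uniform prior
        if abs(simulate_summary(theta) - s_obs) <= tol:
            accepted.append(theta)
    return np.array(accepted)

def abc_soft(n_samples, scale):
    """Generalized ABC: accept with a probability that varies with the
    distance, here a Gaussian kernel, i.e. a Gaussian error term."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-5, 5)
        d = simulate_summary(theta) - s_obs
        if rng.random() < np.exp(-0.5 * (d / scale) ** 2):
            accepted.append(theta)
    return np.array(accepted)

post_hard = abc_rejection(500, tol=0.1)
post_soft = abc_soft(500, scale=0.1)
print(post_hard.mean(), post_soft.mean())
```

Both accepted samples concentrate around the observed summary; the only difference is the implied distribution of the error term, which is exactly the interpretation the paper develops.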
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since it is often computationally
intensive to implement these methods, especially in high dimensional problems,
approximate techniques such as approximating the posterior by a Dirac or a
Gaussian distribution are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system (and, in
the stochastic-dynamics case, its subsequent states), and the data comprise
partial and noisy observations of that (possibly stochastic) dynamical system.
We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
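The Bayesian framework for inverse problems described above can be illustrated with a minimal sketch (the forward operator, noise level, and prior below are hypothetical choices, not from the notes): a linear forward map with Gaussian noise and a Gaussian prior, sampled by random-walk Metropolis, one of the MCMC methods the notes introduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear inverse problem y = G u + noise: recover u from noisy data y.
G = np.array([[1.0, 0.5], [0.0, 1.0]])     # hypothetical forward operator
u_true = np.array([1.0, -1.0])
sigma = 0.1                                 # observation-noise std
y = G @ u_true + sigma * rng.normal(size=2)

def log_posterior(u):
    """Gaussian likelihood (misfit term) plus Gaussian N(0, I) prior."""
    misfit = y - G @ u
    return -0.5 * (misfit @ misfit) / sigma**2 - 0.5 * (u @ u)

def metropolis(n_steps, step=0.1):
    """Random-walk Metropolis: in the limit of infinitely many samples the
    chain's empirical distribution reproduces the full posterior."""
    u = np.zeros(2)
    lp = log_posterior(u)
    samples = []
    for _ in range(n_steps):
        prop = u + step * rng.normal(size=2)        # symmetric proposal
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject
            u, lp = prop, lp_prop
        samples.append(u)
    return np.array(samples)

chain = metropolis(20000)
post_mean = chain[5000:].mean(axis=0)       # discard burn-in, then average
print(post_mean)
```

The same posterior could instead be approximated by a single Dirac (the MAP point) or a Gaussian (Laplace approximation), the cheaper alternatives the notes discuss for high-dimensional problems.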