169 research outputs found
Interacting Markov chain Monte Carlo methods for solving nonlinear measure-valued equations
We present a new class of interacting Markov chain Monte Carlo algorithms for
solving numerically discrete-time measure-valued equations. The associated
stochastic processes belong to the class of self-interacting Markov chains. In
contrast to traditional Markov chains, their time evolutions depend on the
occupation measure of their past values. This general methodology allows us to
provide a natural way to sample from a sequence of target probability measures
of increasing complexity. We develop an original theoretical framework, relying
on measure-valued processes and semigroup techniques, to analyze the behavior
of these iterative algorithms. We establish a variety of
convergence results including exponential estimates and a uniform convergence
theorem with respect to the number of target distributions. We also illustrate
these algorithms in the context of Feynman-Kac distribution flows.
Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/09-AAP628.
A Backward Particle Interpretation of Feynman-Kac Formulae
We design a particle interpretation of Feynman-Kac measures on path spaces
based on a backward Markovian representation combined with a traditional mean
field particle interpretation of the flow of their final time marginals. In
contrast to traditional genealogical tree based models, these new particle
algorithms can be used to compute normalized additive functionals "on-the-fly",
as well as their limiting occupation measures, with a degree of precision that
does not depend on the final time horizon.
We provide uniform convergence results w.r.t. the time horizon parameter as
well as functional central limit theorems and exponential concentration
estimates. We also illustrate these results in the context of computational
physics and imaginary-time Schrödinger-type partial differential equations,
with a special interest in the numerical approximation of the invariant measure
associated with h-processes.
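The on-the-fly computation of additive functionals via the backward representation can be sketched as follows. The linear-Gaussian model, parameter values, and multinomial resampling are illustrative assumptions, not the paper's general setting; the point is that per-particle statistics are propagated forward using backward transition weights, so no genealogical tree is stored.

```python
import numpy as np

def backward_additive_functional(y, a=0.8, sigma=1.0, tau=1.0,
                                 n_particles=500, s=lambda xp, x: x, rng=None):
    """Estimate E[ sum_{p=1}^{n} s(X_{p-1}, X_p) | y_{0:n} ] on-the-fly for
    the (assumed) model X_p = a X_{p-1} + N(0, sigma^2), Y_p = X_p + N(0, tau^2)."""
    rng = np.random.default_rng() if rng is None else rng
    N = n_particles
    x = rng.normal(0.0, sigma, size=N)              # X_0 particles
    logw = -0.5 * (y[0] - x) ** 2 / tau ** 2        # observation log-weights
    T = np.zeros(N)                                 # per-particle statistics
    for p in range(1, len(y)):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)            # multinomial resampling
        x_prev, T_prev, w_prev = x, T, w
        x = a * x[idx] + sigma * rng.normal(size=N) # mutation step
        # backward weights B[j, i] proportional to w_prev[i] * f(x_prev[i] -> x[j])
        logf = -0.5 * (x[:, None] - a * x_prev[None, :]) ** 2 / sigma ** 2
        B = w_prev[None, :] * np.exp(logf - logf.max(axis=1, keepdims=True))
        B /= B.sum(axis=1, keepdims=True)
        # forward update of the smoothed additive functional
        T = (B * (T_prev[None, :] + s(x_prev[None, :], x[:, None]))).sum(axis=1)
        logw = -0.5 * (y[p] - x) ** 2 / tau ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return float((w * T).sum())
```

A useful sanity check: with the constant summand s = 1, the estimator returns exactly n regardless of the particle randomness, since the rows of each backward-weight matrix sum to one.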
Uniform Stability of a Particle Approximation of the Optimal Filter Derivative
Sequential Monte Carlo methods, also known as particle methods, are a widely
used set of computational tools for inference in non-linear non-Gaussian
state-space models. In many applications it may be necessary to compute the
sensitivity, or derivative, of the optimal filter with respect to the static
parameters of the state-space model; for instance, in order to obtain maximum
likelihood model parameters of interest, or to compute the optimal controller
in an optimal control problem. In Poyiadjis et al. [2011] an original particle
algorithm to compute the filter derivative was proposed and it was shown using
numerical examples that the particle estimate was numerically stable in the
sense that it did not deteriorate over time. In this paper we substantiate this
claim with a detailed theoretical study. Lp bounds and a central limit theorem
for this particle approximation of the filter derivative are presented. It is
further shown that under mixing conditions these Lp bounds and the asymptotic
variance characterized by the central limit theorem are uniformly bounded with
respect to the time index. We demonstrate the performance predicted by theory
with several numerical examples. We also use the particle approximation of the
filter derivative to perform online maximum likelihood parameter estimation for
a stochastic volatility model.
On adaptive resampling strategies for sequential Monte Carlo methods
Sequential Monte Carlo (SMC) methods are a class of techniques to sample
approximately from any sequence of probability distributions using a
combination of importance sampling and resampling steps. This paper is
concerned with the convergence analysis of a class of SMC methods where the
times at which resampling occurs are computed online using criteria such as the
effective sample size. This is a popular approach amongst practitioners but
there are very few convergence results available for these methods. By
combining semigroup techniques with an original coupling argument, we obtain
functional central limit theorems and uniform exponential concentration
estimates for these algorithms.
Comment: Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm) at http://dx.doi.org/10.3150/10-BEJ335.
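The adaptive criterion can be made concrete with a minimal SMC sampler in which resampling is triggered online whenever the effective sample size of the current weights falls below a fraction of the particle count. The tempered-Gaussian target sequence, random-walk Metropolis move, and 0.5 N threshold below are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def ess(logw):
    """Effective sample size of a set of unnormalised log-weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def adaptive_smc(log_targets, n_particles=1000, ess_threshold=0.5, rng=None):
    """SMC sampler over a sequence of 1-d log-densities, resampling only
    when the ESS drops below ess_threshold * n_particles."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=n_particles)              # particles from target 0
    logw = np.zeros(n_particles)                  # log importance weights
    resampling_times = []
    for n in range(1, len(log_targets)):
        # reweight from target n-1 to target n
        logw += log_targets[n](x) - log_targets[n - 1](x)
        if ess(logw) < ess_threshold * n_particles:   # adaptive criterion
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)
            x, logw = x[idx], np.zeros(n_particles)
            resampling_times.append(n)
        # random-walk Metropolis move invariant for target n
        prop = x + 0.5 * rng.normal(size=n_particles)
        accept = (np.log(rng.uniform(size=n_particles))
                  < log_targets[n](prop) - log_targets[n](x))
        x = np.where(accept, prop, x)
    return x, logw, resampling_times
```

The returned resampling_times are random hitting times of the ESS process, which is precisely the feature that complicates the convergence analysis relative to deterministic resampling schedules.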
A note on convergence of the equi-energy sampler
In a recent paper `The equi-energy sampler with applications in statistical
inference and statistical mechanics' [Ann. Stat. 34 (2006) 1581-1619], Kou,
Zhou & Wong have presented a new stochastic simulation method called the
equi-energy (EE) sampler. This technique is designed to simulate from a
probability measure π, perhaps only known up to a normalizing constant. The
authors demonstrate that the sampler performs well in quite challenging
problems but their convergence results (Theorem 2) appear incomplete. This was
pointed out, in the discussion of the paper, by Atchadé & Liu (2006) who
proposed an alternative convergence proof. However, this alternative proof,
whilst theoretically correct, does not correspond to the algorithm that is
implemented. In this note we provide a new proof of convergence of the
equi-energy sampler based on the Poisson equation and on the theory developed
in Andrieu et al. (2007) for non-linear Markov chain Monte Carlo (MCMC).
The objective of this note is to provide a proof of correctness of the EE
sampler when there is only one feeding chain; the general case requires a much
more technical approach than is suitable for a short note. We also seek to
highlight the difficulties associated with the analysis of this type of
algorithm and to present the main techniques that may be adopted to prove its
convergence.
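In the single-feeding-chain setting treated by the note, the EE sampler can be sketched as follows. The flattened feeding target π^(1/temp), the energy-ring boundaries, the proposal scales, and the bimodal test density in the usage below are all illustrative assumptions. The main chain occasionally proposes a stored past state of the feeding chain from its current energy ring, and it is exactly this self-interaction with the chain's past that makes the analysis nonstandard.

```python
import numpy as np

def equi_energy_sampler(logpi, n_iter=20000, temp=5.0, p_ee=0.1,
                        energy_levels=(0.0, 2.0, 4.0, 8.0), rng=None):
    """Minimal one-feeding-chain equi-energy sampler (sketch).

    The feeding chain targets the flattened density pi^(1/temp); the main
    chain targets pi and occasionally jumps to a stored past state of the
    feeding chain drawn from the current energy ring."""
    rng = np.random.default_rng() if rng is None else rng

    def energy_ring(x):
        # ring index of the energy H(x) = -log pi(x)
        return int(np.searchsorted(energy_levels, -logpi(x)))

    def mh_step(x, logf, scale):
        y = x + scale * rng.normal()
        return y if np.log(rng.uniform()) < logf(y) - logf(x) else x

    def logfeed(x):
        return logpi(x) / temp

    xf, x = 0.0, 0.0                                  # feeding / main states
    rings = [[] for _ in range(len(energy_levels) + 1)]
    samples = np.empty(n_iter)
    for t in range(n_iter):
        xf = mh_step(xf, logfeed, scale=2.0)          # advance feeding chain
        rings[energy_ring(xf)].append(xf)             # store its past state
        bucket = rings[energy_ring(x)]
        if bucket and rng.uniform() < p_ee:
            y = bucket[rng.integers(len(bucket))]     # equi-energy jump
            # acceptance ratio pi(y) feed(x) / (pi(x) feed(y)), simplified
            # using logfeed = logpi / temp
            log_acc = (1.0 - 1.0 / temp) * (logpi(y) - logpi(x))
            if np.log(rng.uniform()) < log_acc:
                x = y
        else:
            x = mh_step(x, logpi, scale=0.5)          # local move
        samples[t] = x
    return samples
```

For a well-separated two-mode Gaussian mixture, the equi-energy jumps let the main chain move between modes that a scale-0.5 random walk alone would rarely cross.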
On nonlinear Markov chain Monte Carlo
Let P(E) be the space of probability measures on a measurable
space (E, ℰ). In this paper we introduce a class of nonlinear Markov
chain Monte Carlo (MCMC) methods for simulating from a probability measure
π ∈ P(E). Nonlinear Markov kernels (see [Feynman-Kac Formulae:
Genealogical and Interacting Particle Systems with Applications (2004)
Springer]) can be
constructed to, in some sense, improve over MCMC methods. However, such
nonlinear kernels cannot be simulated exactly, so approximations of the
nonlinear kernels are constructed using auxiliary or potentially
self-interacting chains. Several nonlinear kernels are presented and it is
demonstrated that, under some conditions, the associated approximations exhibit
a strong law of large numbers; our proof technique is via the Poisson equation
and Foster-Lyapunov conditions. We investigate the performance of our
approximations with some simulations.
Comment: Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm) at http://dx.doi.org/10.3150/10-BEJ307.
A Lognormal Central Limit Theorem for Particle Approximations of Normalizing Constants
This paper deals with the numerical approximation of normalizing constants produced by particle methods, in the general framework of Feynman-Kac sequences of measures. It is well known that the corresponding estimates satisfy a central limit theorem for a fixed time horizon n as the number of particles N goes to infinity. Here, we study the situation where both n and N go to infinity in such a way that the ratio n/N converges to a finite constant. In this context, Pitt et al. (2012) recently conjectured that a lognormal central limit theorem should hold. We formally establish this result here, under general regularity assumptions on the model. We also discuss special classes of models (time-homogeneous environment and ergodic random environment) for which more explicit descriptions of the limiting bias and variance can be obtained.
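The particle estimate in question is the product over time of the average potential values, one factor per selection step. A minimal sketch under an assumed toy model with a closed-form Z_n — the i.i.d. standard-normal chain and Gaussian potential are illustrative, not the paper's general Feynman-Kac setting:

```python
import numpy as np

def particle_log_normalizing_constant(n, n_particles, rng=None):
    """Product-of-average-potentials estimate of log Z_n, where
    Z_n = E[ prod_{p=0}^{n} G_p(X_p) ].

    Assumed toy model: the X_p are i.i.d. N(0, 1) and G_p(x) = exp(-x^2 / 2),
    for which Z_n = 2^{-(n + 1) / 2} exactly."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=n_particles)
    log_z = 0.0
    for p in range(n + 1):
        g = np.exp(-0.5 * x ** 2)          # potential G_p at each particle
        log_z += np.log(np.mean(g))        # accumulate log of average weight
        # selection step (kept to mirror the general scheme; with the i.i.d.
        # mutation kernel below the resampled positions are refreshed anyway)
        x = x[rng.choice(n_particles, size=n_particles, p=g / g.sum())]
        x = rng.normal(size=n_particles)   # mutation kernel M(x, .) = N(0, 1)
    return log_z
```

For fixed n this estimate is unbiased for Z_n but not for log Z_n, and it is the fluctuation of log Z_n in the joint regime where n and N grow together that the lognormal central limit theorem describes.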
Particle approximation of the intensity measures of a spatial branching point process arising in multi-target tracking
The aim of this paper is twofold. First, we analyze the sequence of intensity
measures of a spatial branching point process arising in a multiple target
tracking context. We study its stability properties, characterize its long time
behavior and provide a series of weak Lipschitz type functional contraction
inequalities. Second, we design and analyze an original particle scheme to
approximate these intensity measures numerically. Under appropriate regularity
conditions, we obtain uniform, non-asymptotic estimates and a functional
central limit theorem. To the best of our knowledge, these are the first sharp
theoretical results available for this class of spatial branching point
processes.
Comment: Revised version of the INRIA technical report HAL-INRIA RR-723.