We present a new class of interacting Markov chain Monte Carlo algorithms for
numerically solving discrete-time measure-valued equations. The associated
stochastic processes belong to the class of self-interacting Markov chains. In
contrast to traditional Markov chains, their time evolutions depend on the
occupation measure of their past values. This general methodology provides a
natural way to sample from a sequence of target probability measures
of increasing complexity. We develop an original theoretical analysis of the
behavior of these iterative algorithms that relies on measure-valued processes
and semigroup techniques. We establish a variety of
convergence results including exponential estimates and a uniform convergence
theorem with respect to the number of target distributions. We also illustrate
these algorithms in the context of Feynman-Kac distribution flows.

Comment: Published at http://dx.doi.org/10.1214/09-AAP628 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
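To make the dependence on the occupation measure concrete, the following is a minimal Python sketch, not the paper's scheme: the chain at level n targets a distribution pi_n by proposing from the occupation measure of the level n-1 chain and accepting with the importance ratio pi_n/pi_{n-1}, which is the usual independence Metropolis-Hastings ratio under the assumption that this occupation measure approximates pi_{n-1}. The Gaussian target sequence, step size, and all names are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x, n):
    # Illustrative sequence of 1-d Gaussian targets with shrinking variance.
    sigma = 2.0 / (1.0 + n)
    return -0.5 * (x / sigma) ** 2 - np.log(sigma)

def interacting_mcmc(n_levels=5, n_steps=5000, step=1.0):
    # Level 0: plain random-walk Metropolis targeting pi_0.
    x, samples = 0.0, []
    for _ in range(n_steps):
        y = x + step * rng.normal()
        if np.log(rng.random()) < log_pi(y, 0) - log_pi(x, 0):
            x = y
        samples.append(x)
    history = [np.array(samples)]

    # Levels n >= 1: propose from the occupation measure of the previous
    # level and accept with the ratio pi_n/pi_{n-1} (independence MH under
    # the assumption that this occupation measure approximates pi_{n-1}).
    for n in range(1, n_levels):
        past = history[n - 1]
        x, samples = past[-1], []

        def log_w(z):
            # Importance weight pi_n / pi_{n-1} at the current level n.
            return log_pi(z, n) - log_pi(z, n - 1)

        for _ in range(n_steps):
            y = rng.choice(past)              # draw from the occupation measure
            if np.log(rng.random()) < log_w(y) - log_w(x):
                x = y
            samples.append(x)
        history.append(np.array(samples))
    return history

hist = interacting_mcmc()
print([f"{h.std():.3f}" for h in hist])       # empirical spread per level
```

Running the sketch, the per-level empirical standard deviations should roughly track the illustrative target scales 2/(1+n), showing how each chain is steered by the occupation measure of its predecessor toward a more concentrated target.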