Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have
recently been introduced in the literature. These simulation algorithms are
designed to improve sampling efficiency for complex target distributions.
Motivated by some recently introduced algorithms (such as the adaptive
Metropolis algorithm and the interacting tempering algorithm), we develop a
general methodological and theoretical framework to establish both the
convergence of the marginal distribution and a strong law of large numbers.
This framework weakens the conditions introduced in the pioneering paper by
Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458--475]. It also covers the
case when the target distribution π is sampled by using Markov transition
kernels with a stationary distribution that differs from π.

Comment: Published at http://dx.doi.org/10.1214/11-AOS938 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
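As a concrete illustration of one of the motivating algorithms, below is a minimal sketch of an adaptive Metropolis sampler, in which the Gaussian proposal covariance is re-estimated from the chain's own history. The function name, the tuning constants (the 2.38^2/d scaling, the eps regularization, the burn-in length), and the example target are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=5000, eps=1e-6, seed=0):
    """Minimal adaptive Metropolis sketch: the Gaussian random-walk proposal
    covariance is adapted from the chain's own past samples. All tuning
    constants here are illustrative, not the paper's choices."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    scale = 2.38**2 / d                       # classical scaling heuristic
    chain = np.empty((n_iter, d))
    logp_x = log_target(x)
    for t in range(n_iter):
        if t < 2 * d:
            cov = np.eye(d)                   # fixed covariance during burn-in
        else:
            cov = scale * np.cov(chain[:t].T) + eps * np.eye(d)
        y = rng.multivariate_normal(x, cov)   # Gaussian random-walk proposal
        logp_y = log_target(y)
        if np.log(rng.uniform()) < logp_y - logp_x:   # Metropolis accept/reject
            x, logp_x = y, logp_y
        chain[t] = x
    return chain

# Usage example: sample a 2-dimensional standard Gaussian target.
samples = adaptive_metropolis(lambda z: -0.5 * np.dot(z, z), x0=np.zeros(2))
```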