    Nonasymptotic bounds on the estimation error of MCMC algorithms

    We address the problem of upper bounding the mean square error of MCMC estimators. Our analysis is nonasymptotic. We first establish a general result valid for essentially all ergodic Markov chains encountered in Bayesian computation and a possibly unbounded target function $f$. The bound is sharp in the sense that the leading term is exactly $\sigma_{\mathrm{as}}^2(P,f)/n$, where $\sigma_{\mathrm{as}}^2(P,f)$ is the CLT asymptotic variance. Next, we proceed to specific additional assumptions and give explicit computable bounds for geometrically and polynomially ergodic Markov chains under quantitative drift conditions. As a corollary, we provide results on confidence estimation.
    Comment: Published at http://dx.doi.org/10.3150/12-BEJ442 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm). arXiv admin note: text overlap with arXiv:0907.491
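
    To make the leading term concrete: the MSE of an ergodic-average estimator behaves like $\sigma_{\mathrm{as}}^2(P,f)/n$, and the asymptotic variance can be estimated empirically, for instance by batch means. Below is a minimal Python sketch; the sampler, the batch_means_asvar helper, and the Gaussian toy target are illustrative assumptions, not the paper's construction.

        import numpy as np

        def metropolis_chain(log_target, x0, n, step=1.0, rng=None):
            """Random-walk Metropolis sampler for a scalar target density."""
            rng = np.random.default_rng() if rng is None else rng
            x = np.empty(n)
            x[0] = x0
            for i in range(1, n):
                prop = x[i - 1] + step * rng.standard_normal()
                # Accept with probability min(1, pi(prop) / pi(current)).
                if np.log(rng.uniform()) < log_target(prop) - log_target(x[i - 1]):
                    x[i] = prop
                else:
                    x[i] = x[i - 1]
            return x

        def batch_means_asvar(fx, n_batches=50):
            """Batch-means estimate of the CLT asymptotic variance sigma_as^2(P, f)."""
            n = len(fx) // n_batches * n_batches
            batch_len = n // n_batches
            batch_means = fx[:n].reshape(n_batches, batch_len).mean(axis=1)
            return batch_len * batch_means.var(ddof=1)

        # Toy run: estimate E[f(X)] for f(x) = x^2 under a standard normal target.
        chain = metropolis_chain(lambda x: -0.5 * x**2, x0=0.0, n=100_000)
        fx = chain**2
        print("estimate:", fx.mean())
        print("leading MSE term sigma_as^2/n ~", batch_means_asvar(fx) / len(fx))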

    Convergence of adaptive and interacting Markov chain Monte Carlo algorithms

    Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have recently been introduced in the literature. These novel simulation algorithms are designed to increase sampling efficiency for complex distributions. Motivated by some recently introduced algorithms (such as the adaptive Metropolis algorithm and the interacting tempering algorithm), we develop a general methodological and theoretical framework to establish both the convergence of the marginal distribution and a strong law of large numbers. This framework weakens the conditions introduced in the pioneering paper by Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458--475]. It also covers the case when the target distribution $\pi$ is sampled by using Markov transition kernels with a stationary distribution that differs from $\pi$.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS938 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org/)
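
    Since the abstract cites the adaptive Metropolis algorithm as motivation, here is a minimal sketch of that scheme (in the spirit of Haario et al.), in which the Gaussian proposal covariance is adapted from the chain's running history. The function names and the toy bivariate target are illustrative assumptions, not code from the paper.

        import numpy as np

        def adaptive_metropolis(log_target, x0, n, eps=1e-6, rng=None):
            """Adaptive Metropolis: the Gaussian proposal covariance is adapted
            from the chain's empirical covariance (Haario et al. style sketch)."""
            rng = np.random.default_rng() if rng is None else rng
            d = len(x0)
            sd = 2.38**2 / d  # classical dimension-dependent scaling factor
            chain = np.empty((n, d))
            chain[0] = x0
            mean = np.array(x0, dtype=float)
            cov = np.eye(d)
            for i in range(1, n):
                prop = rng.multivariate_normal(chain[i - 1], sd * cov + eps * np.eye(d))
                if np.log(rng.uniform()) < log_target(prop) - log_target(chain[i - 1]):
                    chain[i] = prop
                else:
                    chain[i] = chain[i - 1]
                # Recursive (Welford-style) update of the running mean and covariance.
                delta = chain[i] - mean
                mean += delta / (i + 1)
                cov += (np.outer(delta, chain[i] - mean) - cov) / (i + 1)
            return chain

        # Toy run: a strongly correlated bivariate Gaussian target.
        prec = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
        samples = adaptive_metropolis(lambda x: -0.5 * x @ prec @ x,
                                      x0=np.zeros(2), n=50_000)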

    Variance bounding and geometric ergodicity of Markov chain Monte Carlo kernels for approximate Bayesian computation

    Approximate Bayesian computation has emerged as a standard computational tool when dealing with the increasingly common scenario of completely intractable likelihood functions in Bayesian inference. We show that many common Markov chain Monte Carlo kernels used to facilitate inference in this setting can fail to be variance bounding, and hence geometrically ergodic, which can have consequences for the reliability of estimates in practice. This phenomenon is typically independent of the choice of tolerance in the approximation. We then prove that a recently introduced Markov kernel in this setting can inherit variance bounding and geometric ergodicity from its intractable Metropolis--Hastings counterpart, under reasonably weak and manageable conditions. We show that the computational cost of this alternative kernel is bounded whenever the prior is proper, and present indicative results on an example where spectral gaps and asymptotic variances can be computed, as well as an example involving inference for a partially and discretely observed, time-homogeneous, pure jump Markov process. We also supply two general theorems, one of which provides a simple sufficient condition for lack of variance bounding for reversible kernels and the other provides a positive result concerning inheritance of variance bounding and geometric ergodicity for mixtures of reversible kernels.
    Comment: 26 pages, 10 figures
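
    For context, the basic ABC-MCMC kernel discussed in this setting simulates one pseudo-dataset per proposed parameter and accepts only if it lands within a tolerance of the observation. The sketch below is the standard Marjoram-style kernel whose variance-bounding failures the paper analyzes, not the alternative kernel it proposes; all names (abc_mcmc, simulate, tol, step) and the toy model are illustrative assumptions.

        import numpy as np

        def abc_mcmc(simulate, log_prior, y_obs, theta0, n, tol=0.5, step=0.2, rng=None):
            """Basic ABC-MCMC kernel: a random-walk proposal on theta is accepted
            only if one freshly simulated pseudo-dataset falls within `tol` of
            the observed data (Marjoram-style; all defaults are illustrative)."""
            rng = np.random.default_rng() if rng is None else rng
            thetas = np.empty(n)
            thetas[0] = theta0
            for i in range(1, n):
                prop = thetas[i - 1] + step * rng.standard_normal()
                z = simulate(prop, rng)  # one pseudo-dataset per proposal
                # Prior ratio (symmetric proposal), gated by the ABC tolerance.
                log_a = log_prior(prop) - log_prior(thetas[i - 1])
                if abs(z - y_obs) < tol and np.log(rng.uniform()) < log_a:
                    thetas[i] = prop
                else:
                    thetas[i] = thetas[i - 1]
            return thetas

        # Toy run: infer the mean of a N(theta, 1) model from a single observation.
        draws = abc_mcmc(simulate=lambda th, rng: th + rng.standard_normal(),
                         log_prior=lambda th: -0.5 * th**2,  # N(0, 1) prior
                         y_obs=1.3, theta0=0.0, n=50_000)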