235 research outputs found

    Explicit error bounds for lazy reversible Markov Chain Monte Carlo

    We prove explicit, i.e., non-asymptotic, error bounds for Markov chain Monte Carlo methods such as the Metropolis algorithm. The problem is to compute the expectation (or integral) of a function f with respect to a measure that is given by a density with respect to another measure. Direct simulation from the desired distribution by a random number generator is in general not possible, so it is reasonable to use Markov chain sampling with a burn-in. We study such an algorithm and extend the analysis of Lovász and Simonovits (1993) to obtain an explicit error bound.
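    The scheme described in the abstract (Markov chain sampling with a burn-in to approximate E[f]) can be illustrated with a random-walk Metropolis sampler; the target, step size, and chain lengths below are illustrative assumptions, not the paper's setting:

```python
import math
import random

def metropolis_mean(log_density, f, x0=0.0, step=1.0,
                    burn_in=1000, n=10000, seed=0):
    """Random-walk Metropolis estimate of the expectation of f under the
    (unnormalized) target density, discarding a burn-in period before
    averaging, as in the error bounds discussed above."""
    rng = random.Random(seed)
    x, lp = x0, log_density(x0)
    total = 0.0
    for i in range(burn_in + n):
        y = x + rng.gauss(0.0, step)          # symmetric proposal
        ly = log_density(y)
        if math.log(rng.random()) < ly - lp:  # Metropolis accept/reject
            x, lp = y, ly
        if i >= burn_in:
            total += f(x)                     # average only after burn-in
    return total / n

# Illustrative target: standard normal, f(x) = x^2, so E[f] = 1.
est = metropolis_mean(lambda x: -0.5 * x * x, lambda x: x * x)
```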

    Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors

    The computational complexity of MCMC methods for the exploration of complex probability measures is a challenging and important problem. A challenge of particular importance arises in Bayesian inverse problems, where the target distribution may be supported on an infinite-dimensional space. In practice this involves the approximation of measures defined on sequences of spaces of increasing dimension. Motivated by an elliptic inverse problem with non-Gaussian prior, we study the design of proposal chains for the Metropolis-Hastings algorithm with dimension-independent performance. Dimension-independent bounds on the Monte Carlo error of MCMC sampling for Gaussian prior measures have already been established. In this paper we provide a simple recipe to obtain these bounds for non-Gaussian prior measures. To illustrate the theory we consider an elliptic inverse problem arising in groundwater flow. We explicitly construct an efficient Metropolis-Hastings proposal based on local proposals, and we provide numerical evidence which supports the theory. (26 pages, 7 figures)
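    For Gaussian priors, the established dimension-independent bounds mentioned above apply to proposals such as preconditioned Crank-Nicolson (pCN). A minimal sketch of one pCN step, assuming a standard Gaussian prior N(0, I) and a user-supplied negative log-likelihood phi (illustrative, not the paper's groundwater construction):

```python
import math
import random

def pcn_step(x, phi, beta=0.5, rng=random):
    """One preconditioned Crank-Nicolson (pCN) step.  The target is a
    posterior with standard Gaussian prior N(0, I) and negative
    log-likelihood phi; the acceptance ratio involves only phi, which is
    what makes the move well defined uniformly in the dimension of x."""
    xi = [rng.gauss(0.0, 1.0) for _ in x]                    # prior draw
    y = [math.sqrt(1.0 - beta ** 2) * xc + beta * e
         for xc, e in zip(x, xi)]                            # AR(1) proposal
    if math.log(rng.random()) < phi(x) - phi(y):             # accept
        return y
    return x                                                 # reject: stay
```

    With phi identically zero the chain samples the prior itself, which is a convenient sanity check.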

    Error bounds of MCMC for functions with unbounded stationary variance

    We prove explicit error bounds for Markov chain Monte Carlo (MCMC) methods to compute expectations of functions with unbounded stationary variance. We assume that there is a p ∈ (1,2) such that the functions have finite L_p-norm. For uniformly ergodic Markov chains we obtain error bounds with the optimal order of convergence n^{1/p-1}, and if there exists a spectral gap we almost attain the optimal order. Further, a burn-in period is taken into account and a recipe for choosing the burn-in is provided. (13 pages)

    Nonasymptotic bounds on the mean square error for MCMC estimates via renewal techniques

    Nummelin's split-chain construction allows one to decompose a Markov chain Monte Carlo (MCMC) trajectory into i.i.d. "excursions". Regenerative MCMC algorithms based on this technique use a random number of samples and have been proposed as a promising alternative to the usual fixed-length simulation [25, 33, 14]. In this note we derive nonasymptotic bounds on the mean square error (MSE) of regenerative MCMC estimates via techniques of renewal theory and sequential statistics. These results are applied to construct confidence intervals. We then focus on two cases of particular interest: chains satisfying the Doeblin condition and chains satisfying a geometric drift condition. Available explicit nonasymptotic results are compared for different schemes of MCMC simulation.
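    A toy illustration of the regenerative idea: a chain with an explicit atom at 0, so that returns to 0 play the role of the split-chain regeneration times. The chain and the function f below are invented for illustration, not taken from the paper:

```python
import random

def regenerative_estimate(n_tours=2000, p=0.5, seed=1):
    """Ratio estimator of the stationary mean E[X] for the toy chain that
    moves k -> k+1 with probability p and k -> 0 otherwise.  Visits to 0
    regenerate the chain, so the excursions between them are i.i.d., and
    the estimator uses a random total number of samples, as in the
    split-chain setting.  The stationary law is geometric, so the true
    value is E[X] = p / (1 - p)."""
    rng = random.Random(seed)
    total_f = total_len = 0.0
    for _ in range(n_tours):
        x = 0
        while True:
            total_f += x          # f(x) = x, accumulated over the excursion
            total_len += 1
            if rng.random() < p:
                x += 1            # climb one step
            else:
                break             # jump back to the atom 0: tour ends
    return total_f / total_len    # ratio of i.i.d. tour sums

# With p = 0.5 the true stationary mean is 1.
```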

    Nonasymptotic bounds on the estimation error of MCMC algorithms

    We address the problem of upper bounding the mean square error of MCMC estimators. Our analysis is nonasymptotic. We first establish a general result valid for essentially all ergodic Markov chains encountered in Bayesian computation and a possibly unbounded target function f. The bound is sharp in the sense that the leading term is exactly σ_as²(P,f)/n, where σ_as²(P,f) is the CLT asymptotic variance. Next, we proceed to specific additional assumptions and give explicit computable bounds for geometrically and polynomially ergodic Markov chains under quantitative drift conditions. As a corollary, we provide results on confidence estimation. (Published in Bernoulli by the International Statistical Institute/Bernoulli Society; http://dx.doi.org/10.3150/12-BEJ442.)
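    The leading term σ_as²(P,f)/n can be estimated in practice by batch means, a standard estimator of the CLT asymptotic variance (a practical stand-in here, not the paper's method):

```python
def batch_means_mse(samples, n_batches=20):
    """Estimate sigma_as^2(P, f) / n, the leading term of the MSE of an
    MCMC average, using the batch-means estimator of the CLT asymptotic
    variance."""
    b = len(samples) // n_batches          # batch length
    used = samples[:n_batches * b]         # drop the ragged tail, if any
    overall = sum(used) / len(used)
    batch_avgs = [sum(used[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    # Batch averages are approximately i.i.d. with variance sigma_as^2 / b.
    sigma2 = b * sum((a - overall) ** 2 for a in batch_avgs) / (n_batches - 1)
    return sigma2 / len(used)              # estimated leading MSE term
```

    For i.i.d. standard normal input of length n, the estimate should be close to 1/n.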

    Error bounds for computing the expectation by Markov chain Monte Carlo

    We study the error of reversible Markov chain Monte Carlo methods for approximating the expectation of a function. Explicit error bounds with respect to different norms of the function are proven. The bounds attain the well-known asymptotic limit of the error, i.e., there is no gap between the estimate and the asymptotic behavior. We discuss the dependence of the error on the burn-in of the Markov chain. Furthermore, we suggest and justify a specific burn-in for optimizing the algorithm.
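    As a rough illustration of how a burn-in can be chosen from a spectral gap: the initial bias decays like (1 - gap)^n ≤ exp(-n·gap), so discarding about log(bound)/gap steps suppresses an initial-density bound of size exp(log bound). This is a rule of thumb in the spirit of the paper, not its exact optimized choice:

```python
import math

def suggested_burn_in(spectral_gap, log_bias_bound):
    """Rule-of-thumb burn-in: with spectral gap g the influence of the
    starting distribution decays like (1 - g)^n <= exp(-n * g), so about
    log_bias_bound / g steps reduce an initial bias of size
    exp(log_bias_bound) to order one.  Illustrative heuristic only."""
    return math.ceil(log_bias_bound / spectral_gap)

# e.g. gap 0.1 and an initial-density bound of 100:
# suggested_burn_in(0.1, math.log(100.0))
```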