
    Error bounds of MCMC for functions with unbounded stationary variance

    We prove explicit error bounds for Markov chain Monte Carlo (MCMC) methods to compute expectations of functions with unbounded stationary variance. We assume that there is a p ∈ (1,2) such that the functions have finite L_p-norm. For uniformly ergodic Markov chains we obtain error bounds with the optimal order of convergence n^{1/p-1}, and if there exists a spectral gap we almost attain the optimal order. Further, a burn-in period is taken into account and a recipe for choosing the burn-in is provided. Comment: 13 pages
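A minimal sketch of the estimator analysed above: an ergodic average computed after discarding a burn-in period. The toy chain (`ar1_step`), the function `f`, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def mcmc_mean(step, x0, f, n, burn_in):
    """Ergodic average (1/n) * sum of f(X_i) over n steps after the burn-in."""
    x = x0
    for _ in range(burn_in):      # discard the burn-in period
        x = step(x)
    total = 0.0
    for _ in range(n):
        x = step(x)
        total += f(x)
    return total / n

def ar1_step(x, rho=0.5):
    """Toy chain: Gaussian autoregression with stationary law N(0, 1)."""
    return rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0)

random.seed(0)
est = mcmc_mean(ar1_step, x0=10.0, f=lambda x: x, n=20000, burn_in=500)
```

For heavy-tailed f with only a finite L_p-norm, p ∈ (1,2), the paper shows the error of such an average decays at the slower rate n^{1/p-1} rather than the familiar n^{-1/2}.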

    Time series prediction via aggregation: an oracle bound including numerical cost

    We address the problem of forecasting a time series satisfying the Causal Bernoulli Shift model, using a parametric set of predictors. The aggregation technique provides a predictor with well-established and quite satisfying theoretical properties, expressed by an oracle inequality for the prediction risk. The numerical computation of the aggregated predictor usually relies on a Markov chain Monte Carlo method whose convergence should be evaluated. In particular, it is crucial to bound the number of simulations needed to achieve a numerical precision of the same order as the prediction risk. In this direction we present a fairly general result which can be seen as an oracle inequality including the numerical cost of the predictor computation. The numerical cost appears by letting the oracle inequality depend on the number of simulations required in the Monte Carlo approximation. Some numerical experiments are then carried out to support our findings.
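The aggregation step can be illustrated with exponential weighting over a finite set of predictors. The function name and the temperature parameter are hypothetical; in the setting of the paper the weights are typically computed by MCMC rather than exactly, which is what makes the numerical cost relevant.

```python
import math

def aggregate(predictions, risks, temperature=1.0):
    """Exponentially weighted aggregation: weight w_i proportional to
    exp(-risk_i / temperature), so low-risk predictors dominate."""
    weights = [math.exp(-r / temperature) for r in risks]
    z = sum(weights)
    weights = [w / z for w in weights]
    # aggregated prediction = convex combination of individual predictions
    return sum(w * p for w, p in zip(weights, predictions))

# three predictors with empirical risks 0.1, 0.2 and 5.0
pred = aggregate(predictions=[1.0, 2.0, 10.0], risks=[0.1, 0.2, 5.0])
```

The high-risk third predictor receives weight exp(-5)/Z ≈ 0.004, so the aggregate stays close to the two low-risk predictions.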

    Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors

    The computational complexity of MCMC methods for the exploration of complex probability measures is a challenging and important problem. A challenge of particular importance arises in Bayesian inverse problems where the target distribution may be supported on an infinite dimensional space. In practice this involves the approximation of measures defined on sequences of spaces of increasing dimension. Motivated by an elliptic inverse problem with non-Gaussian prior, we study the design of proposal chains for the Metropolis-Hastings algorithm with dimension independent performance. Dimension-independent bounds on the Monte Carlo error of MCMC sampling for Gaussian prior measures have already been established. In this paper we provide a simple recipe to obtain these bounds for non-Gaussian prior measures. To illustrate the theory we consider an elliptic inverse problem arising in groundwater flow. We explicitly construct an efficient Metropolis-Hastings proposal based on local proposals, and we provide numerical evidence which supports the theory. Comment: 26 pages, 7 figures
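The accept-reject mechanism underlying any Metropolis-Hastings proposal chain can be sketched as follows. The Gaussian target and symmetric random-walk proposal are toy assumptions for a one-dimensional illustration, not the dimension-independent proposal constructed in the paper.

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps):
    """Metropolis-Hastings with a symmetric proposal kernel."""
    x, log_px = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        y = proposal(x)
        log_py = log_target(y)
        # accept with probability min(1, pi(y)/pi(x))
        if math.log(random.random()) < log_py - log_px:
            x, log_px = y, log_py
        samples.append(x)
    return samples

random.seed(1)
# toy target: standard Gaussian; proposal: random walk with unit step size
chain = metropolis_hastings(lambda x: -0.5 * x * x,
                            lambda x: x + random.gauss(0.0, 1.0),
                            x0=0.0, n_steps=50000)
mean = sum(chain) / len(chain)
var = sum(x * x for x in chain) / len(chain)
```

The point of the paper is that for a naive proposal the acceptance rate (and hence mixing) degrades as the dimension grows, while a well-designed proposal keeps performance dimension independent.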

    Nonasymptotic bounds on the estimation error of MCMC algorithms

    We address the problem of upper bounding the mean square error of MCMC estimators. Our analysis is nonasymptotic. We first establish a general result valid for essentially all ergodic Markov chains encountered in Bayesian computation and a possibly unbounded target function f. The bound is sharp in the sense that the leading term is exactly σ_as²(P,f)/n, where σ_as²(P,f) is the CLT asymptotic variance. Next, we proceed to specific additional assumptions and give explicit computable bounds for geometrically and polynomially ergodic Markov chains under quantitative drift conditions. As a corollary, we provide results on confidence estimation. Comment: Published at http://dx.doi.org/10.3150/12-BEJ442 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm). arXiv admin note: text overlap with arXiv:0907.491
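In practice the leading term σ_as²(P,f)/n is often estimated from the chain itself, for example by the batch-means method; the following is a minimal sketch, with an i.i.d. sequence standing in for chain output (for which σ_as² equals the stationary variance).

```python
import random

def batch_means_variance(samples, n_batches=30):
    """Estimate the CLT asymptotic variance sigma_as^2(P, f) by batch means:
    split the trajectory into batches and rescale the variance of batch means."""
    n = len(samples)
    b = n // n_batches                       # batch length
    means = [sum(samples[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    overall = sum(means) / n_batches
    # b times the sample variance of the batch means estimates sigma_as^2
    return b * sum((m - overall) ** 2 for m in means) / (n_batches - 1)

random.seed(2)
iid = [random.gauss(0.0, 1.0) for _ in range(30000)]
var_est = batch_means_variance(iid)   # for i.i.d. N(0,1), sigma_as^2 = 1
```

For a correlated chain the batch length must grow with the autocorrelation time; the nonasymptotic bounds in the paper sidestep this by working directly with drift conditions.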

    Information Geometry Approach to Parameter Estimation in Markov Chains

    We consider parameter estimation for a Markov chain when the unknown transition matrix belongs to an exponential family of transition matrices. We then show that the sample mean of the generator of the exponential family is an asymptotically efficient estimator. Further, we also define a curved exponential family of transition matrices. Using a transition-matrix version of the Pythagorean theorem, we give an asymptotically efficient estimator for a curved exponential family. Comment: Appendix D is added

    Spectral gaps for a Metropolis-Hastings algorithm in infinite dimensions

    We study the problem of sampling high and infinite dimensional target measures arising in applications such as conditioned diffusions and inverse problems. We focus on those that arise from approximating measures on Hilbert spaces defined via a density with respect to a Gaussian reference measure. We consider the Metropolis-Hastings algorithm that adds an accept-reject mechanism to a Markov chain proposal in order to make the chain reversible with respect to the target measure. We focus on cases where the proposal is either a Gaussian random walk (RWM) with covariance equal to that of the reference measure or an Ornstein-Uhlenbeck proposal (pCN) for which the reference measure is invariant. Previous results in terms of scaling and diffusion limits suggested that the pCN has a convergence rate that is independent of the dimension while the RWM method has undesirable dimension-dependent behaviour. We confirm this claim by exhibiting a dimension-independent Wasserstein spectral gap for the pCN algorithm for a large class of target measures. In our setting this Wasserstein spectral gap implies an L²-spectral gap. We use both spectral gaps to show that the ergodic average satisfies a strong law of large numbers, the central limit theorem and nonasymptotic bounds on the mean square error, all dimension independent. In contrast we show that the spectral gap of the RWM algorithm applied to the reference measures degenerates as the dimension tends to infinity. Comment: Published at http://dx.doi.org/10.1214/13-AAP982 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
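A scalar sketch of one pCN step, assuming a target defined by a density proportional to exp(−φ) with respect to a standard Gaussian reference measure. In the paper the state lives in a Hilbert space and the innovation ξ is drawn with the reference covariance, but the structure of the proposal and the accept ratio is the same: because the proposal leaves the reference measure invariant, the ratio involves only φ, not the Gaussian densities, which is what makes the algorithm dimension independent.

```python
import math
import random

def pcn_step(x, phi, beta=0.2):
    """One pCN (preconditioned Crank-Nicolson) step.

    Proposal y = sqrt(1 - beta^2) * x + beta * xi with xi drawn from the
    reference measure; accept with probability min(1, exp(phi(x) - phi(y)))."""
    xi = random.gauss(0.0, 1.0)
    y = math.sqrt(1.0 - beta * beta) * x + beta * xi
    if math.log(random.random()) < phi(x) - phi(y):
        return y
    return x

def phi(t):
    # toy potential: target density exp(-t^2) * N(0,1), i.e. N(0, 1/3)
    return t * t

random.seed(4)
x, chain = 0.0, []
for _ in range(50000):
    x = pcn_step(x, phi, beta=0.3)
    chain.append(x)
mean = sum(chain) / len(chain)
var = sum(t * t for t in chain) / len(chain)
```

The tuning parameter β trades acceptance rate against step size; its admissible range does not shrink with the dimension, unlike the RWM step size.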

    Rigorous confidence bounds for MCMC under a geometric drift condition

    We assume a drift condition towards a small set and bound the mean square error of estimators obtained by taking averages along a single trajectory of a Markov chain Monte Carlo algorithm. We use these bounds to construct fixed-width nonasymptotic confidence intervals. For a possibly unbounded function f: X → R, let I = ∫ f dπ be the value of interest and Î_{t,n} its MCMC estimate computed from a trajectory of length n after a burn-in time t. Precisely, we derive lower bounds for the length of the trajectory n and the burn-in time t which ensure that P(|Î_{t,n} − I| ≤ ε) ≥ 1 − α. The bounds depend only and explicitly on drift parameters, on the V-norm of f, where V is the drift function, and on the precision parameter ε and confidence parameter α. Next we analyze an MCMC estimator based on the median of multiple shorter runs that allows for sharper bounds on the required total simulation cost. In particular the methodology can be applied for computing posterior quantities in practically relevant models. We illustrate our bounds numerically in a simple example
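The median-of-runs estimator mentioned above can be sketched as follows: average each of several independent short runs, then take the median of the run averages. The toy chain `ar1_run`, the run length, and the number of runs are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def median_of_runs(run_chain, f, n_runs, run_length):
    """Median of the ergodic averages from independent short runs; the median
    sharpens confidence bounds relative to one long-run average."""
    means = []
    for _ in range(n_runs):
        chain = run_chain(run_length)
        means.append(sum(f(x) for x in chain) / run_length)
    return statistics.median(means)

def ar1_run(n, rho=0.5):
    """Toy chain: Gaussian autoregression with stationary law N(0, 1)."""
    x, out = random.gauss(0.0, 1.0), []
    for _ in range(n):
        x = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0)
        out.append(x)
    return out

random.seed(3)
est = median_of_runs(ar1_run, f=lambda x: x, n_runs=15, run_length=2000)
```

Because the median is insensitive to a few bad runs, the confidence level improves exponentially in the number of runs, which is what yields the sharper total simulation cost.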