
    Estimating the spectral gap of a trace-class Markov operator

    The utility of a Markov chain Monte Carlo algorithm is, in large part, determined by the size of the spectral gap of the corresponding Markov operator. However, calculating (and even approximating) the spectral gaps of practical Monte Carlo Markov chains in statistics has proven to be an extremely difficult and often insurmountable task, especially when these chains move on continuous state spaces. In this paper, a method for accurate estimation of the spectral gap is developed for general state space Markov chains whose operators are non-negative and trace-class. The method is based on the fact that the second largest eigenvalue (and hence the spectral gap) of such operators can be bounded above and below by simple functions of the power sums of the eigenvalues. These power sums often have nice integral representations. A classical Monte Carlo method is proposed to estimate these integrals, and a simple sufficient condition for finite variance is provided. This leads to asymptotically valid confidence intervals for the second largest eigenvalue (and the spectral gap) of the Markov operator. In contrast with previously existing techniques, our method is not based on a near-stationary version of the Markov chain, which, paradoxically, cannot be obtained in a principled manner without bounds on the spectral gap. On the other hand, it can be quite expensive from a computational standpoint. The efficiency of the method is studied both theoretically and empirically.
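    As a toy illustration of the power-sum bounding idea described in the abstract, the sketch below uses a finite-state reversible chain whose operator has a non-negative spectrum, so the power sums are exact traces rather than Monte Carlo estimates. The transition matrix and the particular bounds used are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy finite-state illustration: for a non-negative, reversible Markov
# operator with eigenvalues 1 = lam_0 >= lam_1 >= ... >= 0, the power sums
# s_k = sum_i lam_i^k bound the second-largest eigenvalue.  The paper's
# method estimates the s_k by Monte Carlo on general state spaces; here the
# chain is a small hypothetical matrix, so the traces are exact.

rng = np.random.default_rng(0)
n = 8

# Lazy random walk on a weighted graph: reversible, spectrum in [0, 1].
W = rng.random((n, n))
W = (W + W.T) / 2                      # symmetric edge weights
P = W / W.sum(axis=1, keepdims=True)   # random-walk transition matrix
P = 0.5 * (np.eye(n) + P)              # laziness forces a non-negative spectrum

# Exact eigenvalues (real because the chain is reversible).
eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]
lam1_true = eigvals[1]

# Power sums s_k = trace(P^k) = sum_i lam_i^k.
def power_sum(P, k):
    return float(np.trace(np.linalg.matrix_power(P, k)))

k = 12
s_k, s_k1 = power_sum(P, k), power_sum(P, k + 1)

# Simple bounds implied by non-negative eigenvalues:
#   lam_1^k <= s_k - 1                =>  lam_1 <= (s_k - 1)^(1/k)
#   s_{k+1} - 1 <= lam_1 * (s_k - 1)  =>  lam_1 >= (s_{k+1} - 1) / (s_k - 1)
upper = (s_k - 1.0) ** (1.0 / k)
lower = (s_k1 - 1.0) / (s_k - 1.0)

print(f"lambda_1 = {lam1_true:.4f}, bounds [{lower:.4f}, {upper:.4f}]")
print(f"spectral gap 1 - lambda_1 in [{1.0 - upper:.4f}, {1.0 - lower:.4f}]")
```

    On a general state space, the exact traces above are replaced by Monte Carlo estimates of integral representations of the power sums, which is where the finite-variance condition and the confidence intervals described in the abstract come in.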

    Geometric ergodicity of trans-dimensional Markov chain Monte Carlo algorithms

    This article studies the convergence properties of trans-dimensional MCMC algorithms when the total number of models is finite. It is shown that, for reversible and some non-reversible trans-dimensional Markov chains, under mild conditions, geometric convergence is guaranteed if the Markov chains associated with the within-model moves are geometrically ergodic. This result is proved in an L^2 framework using the technique of Markov chain decomposition. While the technique was previously developed for reversible chains, this work extends it to the point that it can be applied to some commonly used non-reversible chains. Under geometric convergence, a central limit theorem holds for ergodic averages, even in the absence of Harris ergodicity. This allows for the construction of simultaneous confidence intervals for features of the target distribution. This procedure is rigorously examined in a trans-dimensional setting, and special attention is given to the case where the asymptotic covariance matrix in the central limit theorem is singular. The theory and methodology herein are applied to reversible jump algorithms for two Bayesian models: an autoregression with Laplace errors and unknown model order, and a probit regression with variable selection.
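    The simultaneous confidence intervals mentioned in the abstract can be illustrated with a standard batch-means construction. The sketch below is a minimal, hypothetical version (Bonferroni-adjusted marginal intervals computed from an assumed (n, p) array of MCMC output); it is not the article's procedure and does not handle the singular-covariance case discussed there.

```python
import numpy as np
from scipy import stats

# Minimal sketch of simultaneous confidence intervals for a vector of
# ergodic averages: a batch-means estimate of the asymptotic variance per
# coordinate, combined with a Bonferroni correction.  `chain` is assumed to
# be an (n, p) array of MCMC output summarizing the sampled states.

def simultaneous_cis(chain, level=0.95, n_batches=30):
    n, p = chain.shape
    est = chain.mean(axis=0)

    # Batch means: split the run into contiguous batches and use the
    # variability of the batch averages to estimate the asymptotic variance.
    b = n // n_batches
    batches = chain[: b * n_batches].reshape(n_batches, b, p).mean(axis=1)
    var_hat = b * batches.var(axis=0, ddof=1)   # per-coordinate asymptotic variance

    # Bonferroni adjustment gives simultaneous coverage of at least `level`.
    alpha = (1.0 - level) / p
    t = stats.t.ppf(1.0 - alpha / 2.0, df=n_batches - 1)
    half = t * np.sqrt(var_hat / n)
    return est - half, est + half

# Hypothetical usage with a placeholder AR(1)-like chain.
rng = np.random.default_rng(1)
x = np.empty((20_000, 2))
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.7 * x[i - 1] + rng.normal(size=2)
lo, hi = simultaneous_cis(x)
print(np.round(lo, 3), np.round(hi, 3))
```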

    Analysis of two-component Gibbs samplers using the theory of two projections

    The theory of two projections is utilized to study two-component Gibbs samplers. Through this theory, previously intractable problems regarding the asymptotic variances of two-component Gibbs samplers are reduced to elementary matrix algebra exercises. It is found that, in terms of asymptotic variance, the two-component random-scan Gibbs sampler is never much worse than, and can be considerably better than, its deterministic-scan counterpart, provided that the selection probability is appropriately chosen. This is especially the case when there is a large discrepancy in computation cost between the two components. The result contrasts with the known fact that the deterministic-scan version has a faster convergence rate. A modified version of the deterministic-scan sampler that accounts for computation cost behaves similarly to the random-scan version. As a by-product, some general formulas for characterizing the convergence rate of a possibly non-reversible or time-inhomogeneous Markov chain in an operator-theoretic framework are developed.
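    To make the two scan orders being compared concrete, the sketch below draws from a hypothetical bivariate normal target with both a deterministic-scan and a random-scan two-component Gibbs sampler. The target, the selection probability q, and the variable names are illustrative assumptions; the sketch does not reproduce the asymptotic-variance analysis from the paper.

```python
import numpy as np

# Two-component Gibbs samplers on a hypothetical bivariate normal target
# with correlation rho, whose full conditionals are
#   X | Y=y ~ N(rho*y, 1 - rho^2)   and symmetrically for Y | X=x.
# `q` is the probability that the random-scan sampler updates component 1.

rng = np.random.default_rng(2)
rho = 0.9
n_iter = 50_000

def deterministic_scan(n_iter):
    x = y = 0.0
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rho * y + np.sqrt(1 - rho**2) * rng.normal()   # update component 1
        y = rho * x + np.sqrt(1 - rho**2) * rng.normal()   # then component 2
        out[t] = x, y
    return out

def random_scan(n_iter, q=0.5):
    x = y = 0.0
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        if rng.random() < q:                                # update exactly one
            x = rho * y + np.sqrt(1 - rho**2) * rng.normal()
        else:                                               # component per step
            y = rho * x + np.sqrt(1 - rho**2) * rng.normal()
        out[t] = x, y
    return out

for name, draws in [("deterministic-scan", deterministic_scan(n_iter)),
                    ("random-scan (q=0.5)", random_scan(n_iter))]:
    print(name, "mean of x:", round(draws[:, 0].mean(), 3))
```

    Note that one random-scan iteration performs a single conditional update while a deterministic-scan iteration performs two, which is why per-update computation cost enters the comparison made in the abstract.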