
    Convergence analysis of block Gibbs samplers for Bayesian linear mixed models with p > N

    Exploration of the intractable posterior distributions associated with Bayesian versions of the general linear mixed model is often performed using Markov chain Monte Carlo. In particular, if a conditionally conjugate prior is used, then there is a simple two-block Gibbs sampler available. Román and Hobert [Linear Algebra Appl. 473 (2015) 54-77] showed that, when the priors are proper and the X matrix has full column rank, the Markov chains underlying these Gibbs samplers are nearly always geometrically ergodic. In this paper, Román and Hobert's (2015) result is extended by allowing improper priors on the variance components and, more importantly, by removing all assumptions on the X matrix. So, not only is X allowed to be (column) rank deficient, which provides additional flexibility in parameterizing the fixed effects, it is also allowed to have more columns than rows, which is necessary in the increasingly important situation where p > N. The full rank assumption on X is at the heart of Román and Hobert's (2015) proof. Consequently, the extension to unrestricted X requires a substantially different analysis. Comment: Published at http://dx.doi.org/10.3150/15-BEJ749 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
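    To illustrate the two-block structure the abstract refers to, here is a minimal sketch of a two-block Gibbs sampler for a plain conjugate linear model (normal likelihood, normal prior on the coefficients, inverse-gamma prior on the error variance), not the full mixed model analyzed in the paper. The proper prior on beta keeps the conditional covariance invertible even when X is column-rank deficient or has p > N, which is the situation the paper targets; all model constants here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data with p > N and a deliberately rank-deficient X (duplicated column).
    N, p = 10, 15
    X = rng.standard_normal((N, p))
    X[:, 1] = X[:, 0]                       # column-rank deficiency
    beta_true = np.zeros(p)
    beta_true[0] = 2.0
    y = X @ beta_true + rng.standard_normal(N)

    tau2 = 10.0                             # prior variance: beta ~ N(0, tau2 I)
    a0, b0 = 2.0, 2.0                       # prior: sigma2 ~ IG(a0, b0)

    def two_block_gibbs(n_iter=2000, sigma2=1.0):
        """Alternate between the two conditionally conjugate blocks."""
        draws = np.empty((n_iter, p + 1))
        for t in range(n_iter):
            # Block 1: beta | sigma2, y ~ N(m, V). V is invertible even for
            # rank-deficient X because of the proper prior term I / tau2.
            V = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
            m = V @ X.T @ y / sigma2
            beta = rng.multivariate_normal(m, V)
            # Block 2: sigma2 | beta, y ~ IG(a0 + N/2, b0 + ||y - X beta||^2 / 2),
            # sampled as the reciprocal of a gamma draw.
            resid = y - X @ beta
            sigma2 = 1.0 / rng.gamma(a0 + N / 2, 1.0 / (b0 + resid @ resid / 2))
            draws[t] = np.append(beta, sigma2)
        return draws

    draws = two_block_gibbs()
    ```

    The paper's question is whether the Markov chain produced by exactly this kind of alternation is geometrically ergodic; the sampler itself is routine to run.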

    Sufficient burn-in for Gibbs samplers for a hierarchical random effects model

    We consider Gibbs and block Gibbs samplers for a Bayesian hierarchical version of the one-way random effects model. Drift and minorization conditions are established for the underlying Markov chains. The drift and minorization are used in conjunction with results from J. S. Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566] and G. O. Roberts and R. L. Tweedie [Stochastic Process. Appl. 80 (1999) 211-229] to construct analytical upper bounds on the distance to stationarity. These lead to upper bounds on the amount of burn-in that is required to get the chain within a prespecified (total variation) distance of the stationary distribution. The results are illustrated with a numerical example.
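    To show how drift and minorization constants translate into a concrete burn-in number, here is a sketch using one commonly stated form of Rosenthal's (1995) theorem; the exact constants in the theorem vary slightly between statements, so treat this as an assumed form, and the drift/minorization constants below are illustrative rather than taken from the paper's model.

    ```python
    def rosenthal_bound(k, lam, b, d, eps, V0, r):
        """
        Total-variation bound after k steps under (one common statement of)
        Rosenthal's drift-and-minorization theorem:
          drift:        P V <= lam * V + b          (0 < lam < 1)
          minorization: P(x, .) >= eps * Q(.)  on  {V <= d},  d > 2b / (1 - lam)
        r in (0, 1) must be small enough that the geometric factor is < 1.
        """
        alpha_inv = (1 + 2 * b + lam * d) / (1 + d)
        A = 1 + 2 * (lam * d + b)
        return (1 - eps) ** (r * k) + (alpha_inv ** (1 - r) * A ** r) ** k * (
            1 + b / (1 - lam) + V0
        )

    # Sufficient burn-in: smallest k with bound below 0.01, for made-up constants.
    lam, b, eps, V0 = 0.5, 1.0, 0.1, 5.0
    d = 2 * b / (1 - lam) + 1           # must exceed 2b / (1 - lam)
    k = 1
    while rosenthal_bound(k, lam, b, d, eps, V0, r=0.02) > 0.01:
        k += 1
    ```

    The bound is honest but typically conservative; the paper's contribution is establishing drift and minorization conditions for the actual Gibbs samplers so that such a computation can be carried out.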

    Estimating the spectral gap of a trace-class Markov operator

    The utility of a Markov chain Monte Carlo algorithm is, in large part, determined by the size of the spectral gap of the corresponding Markov operator. However, calculating (and even approximating) the spectral gaps of practical Monte Carlo Markov chains in statistics has proven to be an extremely difficult and often insurmountable task, especially when these chains move on continuous state spaces. In this paper, a method for accurate estimation of the spectral gap is developed for general state space Markov chains whose operators are non-negative and trace-class. The method is based on the fact that the second largest eigenvalue (and hence the spectral gap) of such operators can be bounded above and below by simple functions of the power sums of the eigenvalues. These power sums often have nice integral representations. A classical Monte Carlo method is proposed to estimate these integrals, and a simple sufficient condition for finite variance is provided. This leads to asymptotically valid confidence intervals for the second largest eigenvalue (and the spectral gap) of the Markov operator. In contrast with previously existing techniques, our method is not based on a near-stationary version of the Markov chain, which, paradoxically, cannot be obtained in a principled manner without bounds on the spectral gap. On the other hand, it can be quite expensive from a computational standpoint. The efficiency of the method is studied both theoretically and empirically.
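    The power-sum bounds mentioned in the abstract are easy to see on a finite state space, where the power sums s_k are just traces of matrix powers. For a self-adjoint positive semidefinite transition matrix with eigenvalues 1 = λ₀ > λ₁ ≥ …, one has λ₁ ≤ (s_k − 1)^{1/k} and λ₁ ≥ (s_{k+1} − 1)/(s_k − 1). The sketch below checks this sandwich on a toy symmetric chain (the paper's setting is continuous state spaces, where s_k would instead be estimated by Monte Carlo from its integral representation).

    ```python
    import numpy as np

    # A small reversible chain: symmetric stochastic matrix, so the operator is
    # self-adjoint; its eigenvalues here are 1, 0.6, 0.4 (all non-negative).
    P = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])

    eig = np.sort(np.linalg.eigvalsh(P))[::-1]
    lam1 = eig[1]                                # second-largest eigenvalue

    def power_sum(k):
        # s_k = trace(P^k) = sum of the k-th powers of the eigenvalues.
        return np.trace(np.linalg.matrix_power(P, k))

    k = 5
    upper = (power_sum(k) - 1) ** (1 / k)                # lam1 <= (s_k - 1)^(1/k)
    lower = (power_sum(k + 1) - 1) / (power_sum(k) - 1)  # lam1 >= (s_{k+1}-1)/(s_k-1)
    ```

    Both bounds tighten as k grows, since the contribution of the smaller eigenvalues to s_k − 1 dies off geometrically relative to λ₁^k.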

    When is Eaton's Markov chain irreducible?

    Consider a parametric statistical model P(dx∣θ) and an improper prior distribution ν(dθ) that together yield a (proper) formal posterior distribution Q(dθ∣x). The prior is called strongly admissible if the generalized Bayes estimator of every bounded function of θ is admissible under squared error loss. Eaton [Ann. Statist. 20 (1992) 1147-1179] has shown that a sufficient condition for strong admissibility of ν is the local recurrence of the Markov chain whose transition function is R(θ,dη) = ∫ Q(dη∣x) P(dx∣θ). Applications of this result and its extensions are often greatly simplified when the Markov chain associated with R is irreducible. However, establishing irreducibility can be difficult. In this paper, we provide a characterization of irreducibility for general state space Markov chains and use this characterization to develop an easily checked, necessary and sufficient condition for irreducibility of Eaton's Markov chain. All that is required to check this condition is a simple examination of P and ν. Application of the main result is illustrated using two examples. Comment: Published at http://dx.doi.org/10.3150/07-BEJ6191 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
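    The transition R(θ,dη) = ∫ Q(dη∣x) P(dx∣θ) can be simulated in two steps: draw a data point from the model at the current parameter, then draw the next parameter from the formal posterior given that data point. The sketch below does this for an assumed toy instance (not one of the paper's examples): x ∣ θ ~ N(θ, 1) with the flat prior ν(dθ) = dθ, for which Q(dη∣x) = N(x, 1); the composed transition is then a Gaussian random walk with variance 2, whose everywhere-positive transition density makes irreducibility immediate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def eaton_step(theta):
        """One transition of Eaton's chain for the toy normal-location model."""
        x = rng.normal(theta, 1.0)    # data step:   x ~ P(. | theta) = N(theta, 1)
        eta = rng.normal(x, 1.0)      # posterior step: eta ~ Q(. | x) = N(x, 1)
        return eta

    # Simulate a path; each increment eta - theta is N(0, 2).
    theta = 0.0
    path = [theta]
    for _ in range(1000):
        theta = eaton_step(theta)
        path.append(theta)
    ```

    In less tidy models the chain need not be irreducible, which is exactly why the paper's checkable condition on P and ν is useful.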