
    Adaptive Gibbs samplers

    We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, learning as they go in an attempt to optimise the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
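    As a rough illustration of the kind of algorithm being described, the sketch below implements a generic adaptive random-scan Metropolis-within-Gibbs sampler in Python, assuming only a user-supplied log-density log_pi. The adaptation rule, step-size schedule and all names are illustrative stand-ins, not the specific schemes analysed in the paper.

        import numpy as np

        def adaptive_rs_mwg(log_pi, x0, n_iter=10_000, target_acc=0.44, seed=0):
            """Illustrative adaptive random-scan Metropolis-within-Gibbs sampler.

            Coordinate selection probabilities and per-coordinate proposal scales
            are adapted on the fly with diminishing step sizes; this is a generic
            sketch, not the specific schemes analysed in the paper.
            """
            rng = np.random.default_rng(seed)
            x = np.array(x0, dtype=float)
            d = x.size
            alpha = np.full(d, 1.0 / d)      # coordinate selection probabilities
            log_scale = np.zeros(d)          # log of per-coordinate proposal std devs
            chain = np.empty((n_iter, d))

            for n in range(n_iter):
                i = rng.choice(d, p=alpha)   # pick which coordinate to update
                prop = x.copy()
                prop[i] += np.exp(log_scale[i]) * rng.standard_normal()
                accepted = np.log(rng.uniform()) < log_pi(prop) - log_pi(x)
                if accepted:
                    x = prop
                # Diminishing adaptation: the step size gamma shrinks to zero,
                # so the adaptation eventually "dies out".
                gamma = (n + 1) ** -0.6
                log_scale[i] += gamma * (float(accepted) - target_acc)
                alpha[i] += gamma * float(accepted)
                alpha = np.clip(alpha, 0.05 / d, None)  # keep probabilities bounded away from zero
                alpha = alpha / alpha.sum()
                chain[n] = x
            return chain

    For example, adaptive_rs_mwg(lambda x: -0.5 * float(x @ x), np.zeros(2)) samples a two-dimensional standard normal. Keeping the selection probabilities bounded away from zero and letting the adaptation step size shrink to zero are the kinds of conditions under which positive convergence results for adaptive samplers are typically stated.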

    Complexity Results for MCMC derived from Quantitative Bounds

    This paper considers how to obtain MCMC quantitative convergence bounds which can be translated into tight complexity bounds in high-dimensional settings. We propose a modified drift-and-minorization approach, which establishes a generalized drift condition defined on a subset of the state space. This subset is called the "large set" and is chosen to rule out some "bad" states which have poor drift properties when the dimension gets large. Using the "large set" together with a "centered" drift function, a quantitative bound can be obtained which can be translated into a tight complexity bound. As a demonstration, we analyze a realistic Gibbs sampler algorithm and obtain a complexity upper bound for the mixing time, showing that the number of iterations required for the Gibbs sampler to converge is constant under certain conditions on the observed data and the initial state. It is our hope that this modified drift-and-minorization approach can be employed in many other specific examples to obtain complexity bounds for high-dimensional Markov chains.
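    Schematically, and in our own notation rather than the paper's, the classical and modified drift conditions can be contrasted as follows:

        % Classical drift-and-minorization: for some V >= 1, lambda < 1, b < infinity
        % and a small set C, the drift inequality holds on the whole state space:
        \mathbb{E}\left[ V(X_{n+1}) \mid X_n = x \right] \le \lambda V(x) + b\,\mathbf{1}_C(x),
        \qquad x \in \mathcal{X}.
        % Generalized variant (schematic): the same type of inequality is imposed only
        % on a "large set" R, chosen to exclude states with poor drift behaviour:
        \mathbb{E}\left[ V(X_{n+1}) \mid X_n = x \right] \le \lambda V(x) + b\,\mathbf{1}_C(x),
        \qquad x \in R \subseteq \mathcal{X}.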

    Detecting multiple authorship of United States Supreme Court legal decisions using function words

    This paper uses statistical analysis of the function words used in legal judgments written by United States Supreme Court justices to determine which justices have the most variable writing style (which may indicate greater reliance on their law clerks when writing opinions), and also the extent to which different justices' writing styles are distinguishable from each other. Comment: Published at http://dx.doi.org/10.1214/10-AOAS378 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
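    A minimal Python sketch of the kind of function-word frequency analysis described; the word list, tokenisation and variability score below are illustrative stand-ins, not the paper's actual methodology.

        from collections import Counter
        import numpy as np

        # Illustrative list only; the paper uses its own fixed set of English function words.
        FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "a", "is", "with", "as"]

        def function_word_profile(text):
            """Relative frequency of each chosen function word in one opinion."""
            tokens = text.lower().split()
            counts = Counter(tokens)
            total = max(1, len(tokens))
            return np.array([counts[w] / total for w in FUNCTION_WORDS])

        def style_variability(opinions):
            """Crude style-variability score for one author: the average distance
            of each opinion's profile from that author's mean profile."""
            profiles = np.stack([function_word_profile(t) for t in opinions])
            return np.linalg.norm(profiles - profiles.mean(axis=0), axis=1).mean()

    Comparing style_variability across justices, and comparing mean profiles between justices, corresponds loosely to the two questions raised in the abstract.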

    Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains

    A φ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace "almost all" by "all," which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis-Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counter-example showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms. Comment: Published at http://dx.doi.org/10.1214/105051606000000510 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
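    For orientation, a standard schematic statement of the property (our notation, not the paper's):

        % Harris recurrence: for every measurable set A with \pi(A) > 0 and hitting
        % time \tau_A = \inf\{ n \ge 1 : X_n \in A \},
        \Pr_x\!\left( \tau_A < \infty \right) = 1 \quad \text{for every starting point } x \in \mathcal{X}.
        % Combined with \varphi-irreducibility and aperiodicity, this upgrades
        % total-variation convergence from \pi-almost-every to every starting point:
        \lim_{n \to \infty} \left\| P^n(x, \cdot) - \pi \right\|_{\mathrm{TV}} = 0
        \qquad \text{for all } x \in \mathcal{X}.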

    Minimising MCMC variance via diffusion limits, with an application to simulated tempering

    We derive new results comparing the asymptotic variance of diffusions by writing them as appropriate limits of discrete-time birth-death chains which themselves satisfy Peskun orderings. We then apply our results to simulated tempering algorithms to establish which choice of inverse temperatures minimises the asymptotic variance of all functionals and thus leads to the most efficient MCMC algorithm. Comment: Published at http://dx.doi.org/10.1214/12-AAP918 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
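    For reference, the quantity being compared and the ordering used to compare it can be written schematically as follows (standard definitions in our notation, not results from the paper):

        % Asymptotic variance of the ergodic average of f under a chain with
        % Markov kernel P, started in stationarity:
        \operatorname{var}(f, P) = \lim_{n \to \infty} \frac{1}{n}
        \operatorname{Var}\!\left( \sum_{i=1}^{n} f(X_i) \right).
        % Peskun ordering (for two \pi-reversible kernels): if P_1 dominates P_2
        % off the diagonal, i.e. P_1(x, A \setminus \{x\}) \ge P_2(x, A \setminus \{x\})
        % for all x and measurable A, then
        \operatorname{var}(f, P_1) \le \operatorname{var}(f, P_2)
        \qquad \text{for all } f \in L^2(\pi).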