
    Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains

    A φ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace "almost all" by "all," which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis-Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counter-example showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms.
    Comment: Published at http://dx.doi.org/10.1214/105051606000000510 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
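
    As a concrete picture of the sampler class discussed above (illustration only, not the paper's construction or analysis), here is a minimal systematic-scan Metropolis-within-Gibbs sampler in Python; the bivariate Gaussian target, the proposal scale, and all function names are assumptions made for the example.

```python
import numpy as np

def metropolis_within_gibbs(log_target, x0, n_iter=10_000, prop_sd=1.0, rng=None):
    """Metropolis-within-Gibbs: update each coordinate in turn with a
    one-dimensional random-walk Metropolis step on its full conditional."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    chain = np.empty((n_iter, d))
    for t in range(n_iter):
        for i in range(d):  # systematic (deterministic) scan over coordinates
            prop = x.copy()
            prop[i] += prop_sd * rng.normal()
            # Accept/reject with the joint log-density; the other coordinates
            # cancel, so this is a Metropolis step on the i-th full conditional.
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
        chain[t] = x
    return chain

# Example target (assumed for illustration): a correlated bivariate Gaussian.
cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
log_target = lambda x: -0.5 * x @ cov_inv @ x
samples = metropolis_within_gibbs(log_target, x0=np.zeros(2))
```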

    Minimising MCMC variance via diffusion limits, with an application to simulated tempering

    We derive new results comparing the asymptotic variance of diffusions by writing them as appropriate limits of discrete-time birth-death chains which themselves satisfy Peskun orderings. We then apply our results to simulated tempering algorithms to establish which choice of inverse temperatures minimises the asymptotic variance of all functionals and thus leads to the most efficient MCMC algorithm.
    Comment: Published at http://dx.doi.org/10.1214/12-AAP918 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
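
    For context on the algorithm whose temperature ladder is being optimised, the sketch below is a generic simulated tempering sampler in Python (a textbook-style version, not the paper's diffusion-limit setup); the ladder `betas`, the pseudo-prior `log_weights`, and the example target are assumptions for illustration.

```python
import numpy as np

def simulated_tempering(log_target, betas, log_weights, x0, n_iter=50_000,
                        prop_sd=1.0, rng=None):
    """Basic simulated tempering on pairs (x, k): random-walk Metropolis on
    pi(x)**betas[k] alternated with proposals to move k to a neighbouring
    inverse temperature, accepted using the pseudo-prior log_weights."""
    rng = np.random.default_rng() if rng is None else rng
    x, k = float(x0), 0
    xs, ks = np.empty(n_iter), np.empty(n_iter, dtype=int)
    for t in range(n_iter):
        # Within-temperature random-walk Metropolis step on pi**betas[k].
        prop = x + prop_sd * rng.normal()
        if np.log(rng.uniform()) < betas[k] * (log_target(prop) - log_target(x)):
            x = prop
        # Temperature move: propose a neighbouring inverse temperature.
        k_new = k + rng.choice([-1, 1])
        if 0 <= k_new < len(betas):
            log_ratio = (betas[k_new] - betas[k]) * log_target(x) \
                        + log_weights[k_new] - log_weights[k]
            if np.log(rng.uniform()) < log_ratio:
                k = k_new
        xs[t], ks[t] = x, k
    return xs, ks

# Example (assumed for illustration): a bimodal target with a geometric ladder.
log_target = lambda x: np.logaddexp(-0.5 * (x - 10) ** 2, -0.5 * (x + 10) ** 2)
betas = np.array([1.0, 0.3, 0.1, 0.03])
xs, ks = simulated_tempering(log_target, betas, np.zeros(len(betas)), x0=10.0)
```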

    How Much Help Is Exchanged in Families? Towards an Understanding of Discrepant Research Findings

    Responding to claims that contemporary families had abandoned their elderly members, gerontologists over the past 30 years have provided extensive documentation of intergenerational familial support. These studies have been lodged within conceptual frameworks of the modified extended family, intergenerational solidarity, and, more recently, intergenerational equity. By and large, studies claim to have found extensive levels of support. Closer examination of findings from various studies, however, reveals widely discrepant findings in terms of amounts of help given to and received by older family members. This paper examines the findings from four representative Canadian and American studies spanning four decades. Factors contributing to discrepant findings are identified at both methodological and conceptual levels, and implications for future research are discussed.
    Keywords: intergenerational support

    Adaptive Gibbs samplers and related MCMC methods

    We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run by learning as they go in an attempt to optimize the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
    Comment: Published at http://dx.doi.org/10.1214/11-AAP806 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1001.279
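
    The following Python sketch shows the general shape of an adaptive random-scan Metropolis-within-Gibbs sampler of the kind discussed, with coordinate selection probabilities and per-coordinate proposal scales held in adaptable state. The particular adaptation rule (Robbins-Monro tuning toward a 0.44 acceptance rate, with diminishing adaptation) is a common generic choice assumed here, not one of the paper's schemes, and carries no convergence guarantee by itself.

```python
import numpy as np

def adaptive_random_scan_mwg(log_target, x0, n_iter=20_000, rng=None):
    """Illustrative adaptive random-scan Metropolis-within-Gibbs: one coordinate
    i is chosen with probability alpha[i] and updated by random-walk Metropolis;
    its log proposal scale ls[i] is adapted toward ~0.44 acceptance with a
    diminishing adaptation step (a generic rule, not the paper's)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    alpha = np.full(d, 1.0 / d)   # coordinate selection probabilities
    ls = np.zeros(d)              # log proposal standard deviations
    chain = np.empty((n_iter, d))
    for t in range(n_iter):
        i = rng.choice(d, p=alpha)
        prop = x.copy()
        prop[i] += np.exp(ls[i]) * rng.normal()
        accept = np.log(rng.uniform()) < log_target(prop) - log_target(x)
        if accept:
            x = prop
        # Diminishing adaptation of the i-th proposal scale.
        gamma = min(0.1, 1.0 / np.sqrt(t + 1))
        ls[i] += gamma * ((1.0 if accept else 0.0) - 0.44)
        chain[t] = x
    return chain
```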

    Weight-Preserving Simulated Tempering

    Simulated tempering is a popular method of allowing MCMC algorithms to move between modes of a multimodal target density π. One problem with simulated tempering for multimodal targets is that the weights of the various modes change for different inverse-temperature values, sometimes dramatically so. In this paper, we provide a fix to overcome this problem, by adjusting the mode weights to be preserved (i.e., constant) over different inverse-temperature settings. We then apply simulated tempering algorithms to multimodal targets using our mode weight correction. We present simulations in which our weight-preserving algorithm mixes between modes much more successfully than traditional tempering algorithms. We also prove a diffusion limit for a version of our algorithm, which shows that under appropriate assumptions, our algorithm mixes in time O(d [log d]^2).
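
    A quick numerical illustration of the problem being fixed (the example target is chosen here, not taken from the paper): for a two-mode density with equal weights but unequal widths, the renormalised tempered density π^β assigns very different mass to the two modes as β decreases.

```python
import numpy as np

# Two well-separated modes with equal weight but unequal width.  Under the
# tempered (renormalised) density pi(x)**beta, the relative mass of each mode
# depends on beta -- the issue a weight-preserving correction targets.
def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

x = np.linspace(-40, 40, 400_001)
dx = x[1] - x[0]
pi_x = 0.5 * gauss(x, -20, 0.1) + 0.5 * gauss(x, 20, 3.0)

for beta in (1.0, 0.5, 0.1):
    tempered = pi_x ** beta
    left = tempered[x < 0].sum() * dx
    right = tempered[x >= 0].sum() * dx
    print(f"beta={beta:3.1f}: left-mode mass {left / (left + right):.3f}, "
          f"right-mode mass {right / (left + right):.3f}")
```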

    MEXIT: Maximal un-coupling times for stochastic processes

    Classical coupling constructions arrange for copies of the same Markov process started at two different initial states to become equal as soon as possible. In this paper, we consider an alternative coupling framework in which one seeks to arrange for two different Markov (or other stochastic) processes to remain equal for as long as possible, when started in the same state. We refer to this "un-coupling" or "maximal agreement" construction as MEXIT, standing for "maximal exit". After highlighting the importance of un-coupling arguments in a few key statistical and probabilistic settings, we develop an explicit MEXIT construction for stochastic processes in discrete time with countable state-space. This construction is generalized to random processes on general state-space running in continuous time, and then exemplified by discussion of MEXIT for Brownian motions with two different constant drifts.
    Comment: 28 pages
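
    As a rough illustration of the "maximal agreement" idea in the discrete-time, countable state-space setting, the Python sketch below performs one greedy step that keeps two processes equal with the largest possible probability given their next-step distributions. This is a generic overlap construction written for illustration and is not claimed to reproduce the paper's MEXIT construction or its optimality argument.

```python
import numpy as np

def maximal_agreement_step(p, q, rng):
    """One greedy 'maximal agreement' step for two next-step distributions
    p and q (probability vectors over the same finite state space).

    With probability sum_x min(p[x], q[x]) both processes move to the same
    state drawn from the normalised overlap; otherwise they un-couple, each
    moving according to its normalised residual distribution."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    overlap = np.minimum(p, q)
    stay_prob = overlap.sum()
    if rng.uniform() < stay_prob:
        state = rng.choice(len(p), p=overlap / stay_prob)
        return state, state, True            # still coupled
    rp = (p - overlap) / (1.0 - stay_prob)   # residual for the first process
    rq = (q - overlap) / (1.0 - stay_prob)   # residual for the second process
    return rng.choice(len(p), p=rp), rng.choice(len(q), p=rq), False

# Example with two hypothetical transition distributions on three states.
rng = np.random.default_rng(0)
x, y, coupled = maximal_agreement_step([0.7, 0.2, 0.1], [0.5, 0.3, 0.2], rng)
```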

    Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms

    We consider whether ergodic Markov chains with bounded step size remain bounded in probability when their transitions are modified by an adversary on a bounded subset. We provide counterexamples to show that the answer is no in general, and prove theorems to show that the answer is yes under various additional assumptions. We then use our results to prove convergence of various adaptive Markov chain Monte Carlo algorithms.
    Comment: Published at http://dx.doi.org/10.1214/14-AAP1083 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Variance bounding Markov chains

    We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L^2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis-Hastings algorithms.
    Comment: Published at http://dx.doi.org/10.1214/07-AAP486 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
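
    For reference, the property described above can be written out as follows (a paraphrase in assumed notation, with P the Markov kernel and π its stationary distribution; see the paper for the precise statement): the asymptotic variances of all L^2 functionals are uniformly controlled by their stationary variances.

```latex
% Paraphrased statement in assumed notation: P is variance bounding if
\[
  \exists\, K < \infty:\quad
  \operatorname{Var}(f, P) \;\le\; K \,\operatorname{Var}_\pi(f)
  \quad \text{for all } f \in L^2(\pi),
  \qquad \text{where }\;
  \operatorname{Var}(f, P) \;=\; \limsup_{n \to \infty} \frac{1}{n}\,
  \operatorname{Var}\!\Bigl(\sum_{i=1}^{n} f(X_i)\Bigr).
\]
```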