10 research outputs found

    Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms

    We consider whether ergodic Markov chains with bounded step size remain bounded in probability when their transitions are modified by an adversary on a bounded subset. We provide counterexamples to show that the answer is no in general, and prove theorems to show that the answer is yes under various additional assumptions. We then use our results to prove convergence of various adaptive Markov chain Monte Carlo algorithms.
    Comment: Published at http://dx.doi.org/10.1214/14-AAP1083 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
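The setting above can be illustrated with a toy sampler: a random-walk Metropolis chain whose kernel is "modified" (here, by adapting the proposal scale) only while the chain sits in a bounded set, and left fixed outside it. This is a minimal sketch under assumed details (target, adaptation rule, and region are all illustrative choices, not taken from the paper):

```python
import math
import random

def adaptive_rwm(logpi, x0, n, adapt_region=5.0, seed=0):
    """Random-walk Metropolis targeting exp(logpi).

    The proposal scale is tuned (a toy Robbins-Monro step toward a
    0.44 acceptance rate) only while the chain lies in the bounded
    set [-adapt_region, adapt_region]; outside that set a fixed
    kernel is used, mirroring the bounded-subset modification.
    """
    rng = random.Random(seed)
    x, scale, out = x0, 1.0, []
    for t in range(1, n + 1):
        y = x + scale * rng.gauss(0.0, 1.0)
        if math.log(rng.random() + 1e-300) < logpi(y) - logpi(x):
            x, accepted = y, 1.0
        else:
            accepted = 0.0
        if abs(x) <= adapt_region:  # adapt only on the bounded subset
            scale *= math.exp((accepted - 0.44) / math.sqrt(t))
        out.append(x)
    return out

# standard normal target
samples = adaptive_rwm(lambda z: -0.5 * z * z, 0.0, 5000)
```

Because the adaptation step vanishes outside the bounded region, the chain there evolves under an ordinary fixed Metropolis kernel, which is the structure the stability question concerns.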

    Rate-optimal refinement strategies for local approximation MCMC

    Many Bayesian inference problems involve target distributions whose density functions are computationally expensive to evaluate. Replacing the target density with a local approximation based on a small number of carefully chosen density evaluations can significantly reduce the computational expense of Markov chain Monte Carlo (MCMC) sampling. Moreover, continual refinement of the local approximation can guarantee asymptotically exact sampling. We devise a new strategy for balancing the decay rate of the bias due to the approximation with that of the MCMC variance. We prove that the error of the resulting local approximation MCMC (LA-MCMC) algorithm decays at roughly the expected 1/√T rate, and we demonstrate this rate numerically. We also introduce an algorithmic parameter that guarantees convergence given very weak tail bounds, significantly strengthening previous convergence results. Finally, we apply LA-MCMC to a computationally intensive Bayesian inverse problem arising in groundwater hydrology.
    Comment: 32 pages, 17 figures
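The core idea of sampling with a continually refined surrogate can be sketched in a few lines. This is an illustrative toy, not the paper's LA-MCMC algorithm: the "local approximation" here is a crude nearest-neighbour lookup over a cache of true density evaluations, and the 1/√t refinement schedule is an assumed stand-in for the paper's carefully balanced strategy:

```python
import math
import random

def surrogate_mh(expensive_logpi, x0, n, seed=1):
    """Metropolis-Hastings driven by a cached surrogate of the target.

    A growing cache maps points to true log-density values; the chain
    accepts/rejects using the nearest cached value, and new expensive
    evaluations are added at a decaying (1/sqrt(t)) rate so the
    approximation is refined ever more rarely as the chain runs.
    """
    rng = random.Random(seed)
    cache = {x0: expensive_logpi(x0)}

    def surrogate(z):
        nearest = min(cache, key=lambda p: abs(p - z))
        return cache[nearest]

    x, out, true_evals = x0, [], 1
    for t in range(1, n + 1):
        y = x + rng.gauss(0.0, 1.0)
        if rng.random() < 1.0 / math.sqrt(t):  # refine increasingly rarely
            cache[y] = expensive_logpi(y)
            true_evals += 1
        if math.log(rng.random() + 1e-300) < surrogate(y) - surrogate(x):
            x = y
        out.append(x)
    return out, true_evals

draws, true_evals = surrogate_mh(lambda z: -0.5 * z * z, 0.0, 2000)
```

Note that the number of expensive evaluations grows far more slowly than the chain length, which is the source of the computational savings the abstract describes.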

    Bayesian computation: a summary of the current state, and samples backwards and forwards


    Adapting the Gibbs sampler

    In this thesis, we close a methodological gap in optimising basic Markov chain Monte Carlo algorithms. By analogy with the straightforward and computationally efficient optimisation criteria for the Metropolis algorithm acceptance rate (and, equivalently, proposal scale), we develop criteria for optimising the selection probabilities of the Random Scan Gibbs Sampler. We develop a general-purpose Adaptive Random Scan Gibbs Sampler that adapts the selection probabilities gradually, as further information is accrued by the sampler. We argue that Adaptive Random Scan Gibbs Samplers can be routinely implemented and that substantial computational gains will be observed across many typical Gibbs sampling problems.
    Additionally, motivated to develop theory for analysing the convergence properties of the Adaptive Gibbs Sampler, we introduce a class of Adapted Increasingly Rarely Markov Chain Monte Carlo (AirMCMC) algorithms, in which the underlying Markov kernel may be changed based on the whole available chain output, but only at specific time points separated by an increasing number of iterations. The main motivation is the ease of analysis of such algorithms. Under regularity assumptions, we prove Mean Square Error convergence, Weak and Strong Laws of Large Numbers, and the Central Limit Theorem, and we discuss how our approach extends existing results. We argue that many of the known Adaptive MCMC algorithms may be transformed into corresponding Air versions, and we provide empirical evidence that the performance of the Air version remains virtually the same.
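The two ingredients described above, adapting coordinate-selection probabilities and adapting only at increasingly rare times, can be combined in a small sketch. Everything concrete here is an assumption for illustration (the bivariate normal target, the squared-jump heuristic for re-tuning probabilities, and the 1, 4, 9, 16, ... adaptation schedule), not the thesis's actual criteria:

```python
import math
import random

def air_random_scan_gibbs(n, rho=0.9, seed=2):
    """Air-style random-scan Gibbs sampler on a bivariate normal.

    Coordinate-selection probabilities start uniform and are re-tuned
    only at iterations 1, 4, 9, 16, ... so the gaps between adaptation
    times grow, in the spirit of Adapted Increasingly Rarely MCMC.
    The tuning heuristic favours the coordinate that has moved more,
    with probabilities kept bounded away from 0 and 1.
    """
    rng = random.Random(seed)
    x = [0.0, 0.0]
    probs = [0.5, 0.5]
    moves = [0.0, 0.0]        # accumulated squared jumps per coordinate
    out, k = [], 1
    sd = math.sqrt(1.0 - rho * rho)
    for t in range(1, n + 1):
        i = 0 if rng.random() < probs[0] else 1
        # exact Gibbs update: x_i | x_{1-i} ~ N(rho * x_{1-i}, 1 - rho^2)
        new = rho * x[1 - i] + sd * rng.gauss(0.0, 1.0)
        moves[i] += (new - x[i]) ** 2
        x[i] = new
        if t == k * k:        # adapt increasingly rarely
            total = moves[0] + moves[1]
            if total > 0:
                p = 0.1 + 0.8 * moves[0] / total
                probs = [p, 1.0 - p]
            k += 1
        out.append(tuple(x))
    return out, probs

chain, final_probs = air_random_scan_gibbs(3000)
```

Between adaptation times the kernel is fixed, which is exactly what makes chains of this form easier to analyse than continuously adapting ones.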

    The containment condition and AdapFail algorithms

    This short note investigates convergence of adaptive MCMC algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the Containment condition introduced by Roberts and Rosenthal (2007). We show that if the Containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
