
    On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inferences

    Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, a behaviour referred to as being dimension dependent. To address this, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample the infinite dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, in which the covariance operator of the proposal distribution is adjusted based on the sampling history to improve simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally, we provide numerical examples to demonstrate the performance of the proposed method.
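The covariance adaptation described in this abstract can be illustrated in finite dimensions with a recursive empirical-covariance update, as used in adaptive-Metropolis-style schemes. This is a generic sketch of the "adjust the covariance from sampling history" idea, not the authors' operator-valued construction; all names are illustrative.

```python
import numpy as np

def update_covariance(mean, cov, x_new, n):
    """Recursively update the empirical mean and (population) covariance
    with a new sample x_new, after having already seen n samples."""
    delta = x_new - mean
    new_mean = mean + delta / (n + 1)
    # Online covariance recursion (Welford-style rank-one update).
    new_cov = (n * cov + np.outer(delta, x_new - new_mean)) / (n + 1)
    return new_mean, new_cov

# Toy usage: the running estimate matches the sample covariance exactly.
rng = np.random.default_rng(0)
samples = rng.standard_normal((5000, 3)) @ np.diag([1.0, 2.0, 0.5])
mean, cov = samples[0].copy(), np.zeros((3, 3))
for n, x in enumerate(samples[1:], start=1):
    mean, cov = update_covariance(mean, cov, x, n)
```

In an adaptive sampler, the running `cov` (suitably regularized) would feed into the proposal distribution as the chain progresses.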

    MCMC methods for functions: modifying old algorithms to make them faster

    Many problems arising in applications result in the need to probe a probability distribution for functions. Examples include Bayesian nonparametric statistics and conditioned diffusion processes. Standard MCMC algorithms typically become arbitrarily slow under the mesh refinement dictated by nonparametric description of the unknown function. We describe an approach to modifying a whole range of MCMC methods which ensures that their speed of convergence is robust under mesh refinement. In the applications of interest the data is often sparse and the prior specification is an essential part of the overall modeling strategy. The algorithmic approach that we describe is applicable whenever the desired probability measure has density with respect to a Gaussian process or Gaussian random field prior, and to some useful non-Gaussian priors constructed through random truncation. Applications are shown in density estimation, data assimilation in fluid mechanics, subsurface geophysics and image registration. The key design principle is to formulate the MCMC method for functions. This leads to algorithms which can be implemented via minor modification of existing algorithms, yet which show enormous speed-up on a wide range of applied problems.
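The canonical function-space method in this line of work is the pCN proposal, whose acceptance ratio involves only the likelihood potential, not the Gaussian prior. A minimal finite-dimensional sketch (the potential `phi`, step size `beta`, and prior Cholesky factor are illustrative):

```python
import numpy as np

def pcn_step(x, phi, chol_C, beta, rng):
    """One preconditioned Crank-Nicolson step.

    Proposal: x' = sqrt(1 - beta^2) * x + beta * xi, with xi ~ N(0, C).
    This proposal is reversible w.r.t. the Gaussian prior N(0, C), so the
    prior terms cancel and only the likelihood potential phi remains in
    the acceptance ratio -- the key to robustness under mesh refinement.
    """
    xi = chol_C @ rng.standard_normal(x.shape)
    prop = np.sqrt(1.0 - beta**2) * x + beta * xi
    log_alpha = phi(x) - phi(prop)
    if np.log(rng.uniform()) < log_alpha:
        return prop, True
    return x, False

# Toy usage: prior N(0, I) in 10 dimensions, quadratic potential.
rng = np.random.default_rng(1)
d = 10
chol_C = np.eye(d)
phi = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
x = np.zeros(d)
accepts = 0
for _ in range(2000):
    x, ok = pcn_step(x, phi, chol_C, 0.3, rng)
    accepts += ok
```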

    Information-geometric Markov Chain Monte Carlo methods using Diffusions

    Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond Statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for Statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.
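As a minimal illustration of the diffusion-based methods surveyed, here is a plain (Euclidean, non-Riemannian) Metropolis-adjusted Langevin step; the Riemannian variants replace the identity preconditioner with a position-dependent metric. Target and step size are illustrative.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, eps, rng):
    """Metropolis-adjusted Langevin: an Euler step of the diffusion
    dX = 0.5 * grad log pi(X) dt + dW, corrected by accept/reject."""
    mean_fwd = x + 0.5 * eps * grad_log_pi(x)
    prop = mean_fwd + np.sqrt(eps) * rng.standard_normal(x.shape)
    mean_bwd = prop + 0.5 * eps * grad_log_pi(prop)
    # Gaussian proposal densities enter the Metropolis-Hastings ratio.
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * eps)
    log_alpha = log_pi(prop) - log_pi(x) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return prop
    return x

# Toy usage: sample a standard 2-d Gaussian starting far from the mode.
rng = np.random.default_rng(2)
log_pi = lambda x: -0.5 * np.sum(x**2)
grad = lambda x: -x
x = np.array([3.0, -3.0])
draws = []
for _ in range(3000):
    x = mala_step(x, log_pi, grad, 0.5, rng)
    draws.append(x)
draws = np.array(draws)
```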

    Markov Chain Monte Carlo confidence intervals

    For a reversible and ergodic Markov chain $\{X_n, n \geq 0\}$ with invariant distribution $\pi$, we show that a valid confidence interval for $\pi(h)$ can be constructed whenever the asymptotic variance $\sigma^2_P(h)$ is finite and positive. We do not impose any additional condition on the convergence rate of the Markov chain. The confidence interval is derived using the so-called fixed-b lag-window estimator of $\sigma^2_P(h)$. We also derive a result that suggests that the proposed confidence interval procedure converges faster than classical confidence interval procedures based on the Gaussian distribution and standard central limit theorems for Markov chains. Published at http://dx.doi.org/10.3150/15-BEJ712 in Bernoulli by the International Statistical Institute/Bernoulli Society.
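For context, a lag-window confidence interval can be sketched as follows. Note this is the classical small-bandwidth construction with a Gaussian critical value; the fixed-b procedure analyzed in the paper keeps the bandwidth proportional to the chain length and uses nonstandard critical values. All parameter choices are illustrative.

```python
import numpy as np

def lag_window_ci(h_vals, b_frac=0.1, z=1.96):
    """Confidence interval for the stationary mean of a chain functional,
    using a Bartlett lag-window estimate of the asymptotic variance."""
    n = len(h_vals)
    centered = h_vals - h_vals.mean()
    b = max(1, int(b_frac * n))                      # truncation lag
    sigma2 = np.dot(centered, centered) / n          # lag-0 autocovariance
    for k in range(1, b + 1):
        gamma_k = np.dot(centered[:-k], centered[k:]) / n
        sigma2 += 2.0 * (1.0 - k / (b + 1)) * gamma_k  # Bartlett weights
    half_width = z * np.sqrt(max(sigma2, 0.0) / n)
    return h_vals.mean() - half_width, h_vals.mean() + half_width

# Toy usage: an AR(1) chain whose stationary mean is 0.
rng = np.random.default_rng(3)
x, chain = 0.0, []
for _ in range(20000):
    x = 0.5 * x + rng.standard_normal()
    chain.append(x)
lo, hi = lag_window_ci(np.array(chain))
```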

    Explicit convergence bounds for Metropolis Markov chains: isoperimetry, spectral gaps and profiles

    We derive the first explicit bounds for the spectral gap of a random walk Metropolis algorithm on $\mathbb{R}^d$ for any value of the proposal variance, which when scaled appropriately recovers the correct $d^{-1}$ dependence on dimension for suitably regular invariant distributions. We also obtain explicit bounds on the ${\rm L}^2$-mixing time for a broad class of models. In obtaining these results, we refine the use of isoperimetric profile inequalities to obtain conductance profile bounds, which also enable the derivation of explicit bounds in a much broader class of models. We also obtain similar results for the preconditioned Crank--Nicolson Markov chain, obtaining dimension-independent bounds under suitable assumptions.
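The random walk Metropolis chain studied here can be sketched as follows; the proposal scaling of order $d^{-1/2}$ is what yields the $d^{-1}$ behaviour of the spectral gap. Target and constants are illustrative.

```python
import numpy as np

def rwm(log_pi, x0, step, n_iter, rng):
    """Random walk Metropolis with isotropic Gaussian proposals
    of standard deviation `step` per coordinate."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    accepts = 0
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(x.shape)
        if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
            x, accepts = prop, accepts + 1
        chain[i] = x
    return chain, accepts / n_iter

# Toy usage with the classical 2.38 / sqrt(d) proposal scaling.
rng = np.random.default_rng(4)
d = 20
log_pi = lambda x: -0.5 * np.sum(x**2)
chain, acc_rate = rwm(log_pi, np.zeros(d), 2.38 / np.sqrt(d), 5000, rng)
```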

    A hybrid adaptive MCMC algorithm in function spaces

    The preconditioned Crank-Nicolson (pCN) method is a Markov Chain Monte Carlo (MCMC) scheme, specifically designed to perform Bayesian inferences in function spaces. Unlike many standard MCMC algorithms, the pCN method preserves sampling efficiency under mesh refinement, a property referred to as being dimension independent. In this work we consider an adaptive strategy to further improve the efficiency of pCN. In particular we develop a hybrid adaptive MCMC method: the algorithm performs an adaptive Metropolis scheme in a chosen finite dimensional subspace, and a standard pCN algorithm in the complement space of the chosen subspace. We show that the proposed algorithm satisfies certain important ergodicity conditions. Finally, with numerical examples, we demonstrate that the proposed method has competitive performance with existing adaptive algorithms.
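A toy finite-dimensional caricature of this subspace splitting: a random-walk move on a low-dimensional block (standing in for the adaptive Metropolis subspace, adaptivity omitted for brevity), followed by a pCN move on the complement. This is an illustrative sketch under a standard Gaussian prior, not the authors' construction.

```python
import numpy as np

def hybrid_step(x, phi, k, step_low, beta, rng):
    """One hybrid move on a target prop. to exp(-phi(x)) * N(x; 0, I).

    Coordinates [:k] are updated by a random-walk Metropolis move (the
    prior must appear in its acceptance ratio); coordinates [k:] are
    updated by pCN, where the prior terms cancel and only phi remains.
    """
    log_post_low = lambda y: -phi(y) - 0.5 * np.sum(y[:k] ** 2)
    # Subspace random-walk move.
    prop = x.copy()
    prop[:k] += step_low * rng.standard_normal(k)
    if np.log(rng.uniform()) < log_post_low(prop) - log_post_low(x):
        x = prop
    # pCN move on the complement.
    prop = x.copy()
    prop[k:] = np.sqrt(1 - beta**2) * x[k:] + beta * rng.standard_normal(x.size - k)
    if np.log(rng.uniform()) < phi(x) - phi(prop):
        x = prop
    return x

# Toy usage with a quadratic likelihood potential.
rng = np.random.default_rng(5)
phi = lambda y: 0.5 * np.sum((y - 0.5) ** 2)
x = np.zeros(8)
for _ in range(1000):
    x = hybrid_step(x, phi, 3, 0.5, 0.3, rng)
```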