
    Pathwise Accuracy and Ergodicity of Metropolized Integrators for SDEs

    Metropolized integrators for ergodic stochastic differential equations (SDEs) are proposed which (i) are ergodic with respect to the (known) equilibrium distribution of the SDE and (ii) approximate pathwise the solutions of the SDE on finite time intervals. Both of these properties are demonstrated in the paper and precise strong error estimates are obtained. It is also shown that the Metropolized integrator retains these properties even in situations where the drift of the SDE is not globally Lipschitz and vanilla explicit integrators for SDEs typically become unstable and fail to be ergodic.
    Comment: 46 pages, 5 figures

    Discussions on "Riemann manifold Langevin and Hamiltonian Monte Carlo methods"

    This is a collection of discussions of "Riemann manifold Langevin and Hamiltonian Monte Carlo methods" by Girolami and Calderhead, to appear in the Journal of the Royal Statistical Society, Series B.
    Comment: 6 pages, one figure

    Langevin and Hamiltonian based Sequential MCMC for Efficient Bayesian Filtering in High-dimensional Spaces

    Nonlinear non-Gaussian state-space models arise in numerous applications in statistics and signal processing. In this context, one of the most successful and popular approximation techniques is the Sequential Monte Carlo (SMC) algorithm, also known as particle filtering. Nevertheless, this method tends to be inefficient when applied to high-dimensional problems. In this paper, we focus on another class of sequential inference methods, namely the Sequential Markov Chain Monte Carlo (SMCMC) techniques, which represent a promising alternative to SMC methods. After providing a unifying framework for the class of SMCMC approaches, we propose novel efficient strategies based on the principle of Langevin diffusion and Hamiltonian dynamics in order to cope with the increasing number of high-dimensional applications. Simulation results show that the proposed algorithms achieve significantly better performance than existing algorithms.
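    A minimal sketch of the kind of Hamiltonian move such SMCMC strategies build on is given below: a leapfrog proposal wrapped in a Metropolis-Hastings correction. The names log_target and grad_log_target stand in for the filtering posterior at the current time step and are assumptions of this sketch, not the interface used in the paper.

    import numpy as np

    def hmc_move(x, log_target, grad_log_target, step_size, n_leapfrog, rng):
        """One Hamiltonian Monte Carlo move for a flat parameter vector x."""
        p = rng.standard_normal(x.shape)              # auxiliary momentum
        x_new, p_new = x.copy(), p.copy()

        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step_size * grad_log_target(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_target(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_target(x_new)

        # accept/reject on the joint (position, momentum) energy
        h_old = -log_target(x) + 0.5 * np.dot(p, p)
        h_new = -log_target(x_new) + 0.5 * np.dot(p_new, p_new)
        if np.log(rng.uniform()) < h_old - h_new:
            return x_new                              # accepted proposal
        return x                                      # rejected: keep current state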

    A patch that imparts unconditional stability to certain explicit integrators for SDEs

    This paper proposes a simple strategy to simulate stochastic differential equations (SDEs) arising in constant-temperature molecular dynamics. The main idea is to patch an explicit integrator with Metropolis accept or reject steps. The resulting `Metropolized integrator' preserves the SDE's equilibrium distribution and is pathwise accurate on finite time intervals. As a corollary, the integrator can be used to estimate finite-time dynamical properties along an infinitely long solution. The paper explains how to implement the patch (even in the presence of multiple time-step sizes and holonomic constraints), how it scales with system size, and how much overhead it requires. We test the integrator on a Lennard-Jones cluster of particles and `dumbbells' at constant temperature.
    Comment: 29 pages, 5 figures
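    The `patch' can be illustrated with a Metropolis-adjusted Euler-Maruyama step for overdamped Langevin dynamics dX = -grad U(X) dt + sqrt(2/beta) dW. The sketch below is only an illustration of the idea under that assumption, not the integrators tested in the paper.

    import numpy as np

    def metropolized_em_step(x, U, grad_U, dt, beta, rng):
        """Explicit Euler-Maruyama proposal patched with a Metropolis accept/reject
        step, so the equilibrium density proportional to exp(-beta * U) is preserved."""
        noise = rng.standard_normal(x.shape)
        y = x - grad_U(x) * dt + np.sqrt(2.0 * dt / beta) * noise  # explicit proposal

        def log_q(b, a):
            # log density (up to a constant) of the Gaussian proposal kernel q(b | a)
            mean = a - grad_U(a) * dt
            return -beta * np.sum((b - mean) ** 2) / (4.0 * dt)

        # Metropolis ratio targeting exp(-beta * U)
        log_alpha = (-beta * U(y) + log_q(x, y)) - (-beta * U(x) + log_q(y, x))
        if np.log(rng.uniform()) < log_alpha:
            return y    # accept the proposed move
        return x        # reject: remain at x, which keeps the equilibrium law exact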

    Hamiltonian ABC

    Approximate Bayesian computation (ABC) is a powerful and elegant framework for performing inference in simulation-based models. However, due to the difficulty in scaling likelihood estimates, ABC remains useful only for relatively low-dimensional problems. We introduce Hamiltonian ABC (HABC), a set of likelihood-free algorithms that apply recent advances in scaling Bayesian learning using Hamiltonian Monte Carlo (HMC) and stochastic gradients. We find that a small number of forward simulations can effectively approximate the ABC gradient, allowing Hamiltonian dynamics to efficiently traverse parameter spaces. We also describe a simple yet general approach to incorporating random seeds into the state of the Markov chain, further reducing the random-walk behavior of HABC. We demonstrate HABC on several typical ABC problems, and show that HABC samples comparably to regular Bayesian inference using true gradients on a high-dimensional problem from machine learning.
    Comment: Submission to UAI 201
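    The two ingredients described above can be sketched as follows: the simulator's random seed is treated as part of the chain state so that a forward simulation is deterministic given (theta, seed), and an ABC log-likelihood gradient is estimated from a few seeded simulations by finite differences. The names simulate, y_obs, epsilon and the Gaussian ABC kernel are assumptions of this sketch, not the construction used in the paper.

    import numpy as np

    def abc_log_kernel(theta, seed, simulate, y_obs, epsilon):
        """Gaussian ABC kernel comparing simulated and observed summary statistics."""
        y_sim = simulate(theta, seed)                 # deterministic given the seed
        return -np.sum((y_sim - y_obs) ** 2) / (2.0 * epsilon ** 2)

    def abc_grad_estimate(theta, seeds, simulate, y_obs, epsilon, h=1e-4):
        """Central-difference gradient of the ABC log-likelihood, averaged over a
        small set of fixed seeds (common random numbers keep the variance down)."""
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = h
            diffs = [(abc_log_kernel(theta + e, s, simulate, y_obs, epsilon)
                      - abc_log_kernel(theta - e, s, simulate, y_obs, epsilon)) / (2.0 * h)
                     for s in seeds]
            grad[i] = np.mean(diffs)
        return grad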