4,098 research outputs found

    Metropolis Sampling

    Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation, and optimization problems. Markov chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of MCMC techniques, introducing the basic notions and its main properties. We describe in detail all the elements involved in the MH algorithm and its most relevant variants. Several improvements and recent extensions proposed in the literature are also briefly discussed, providing a quick but broad overview of the current landscape of Metropolis-based sampling.
    Comment: Wiley StatsRef-Statistics Reference Online, 201
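    The MH mechanics summarized above can be illustrated with a minimal random-walk sketch (not the article's own code; the target density and step size below are our choices for illustration):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampler (a minimal sketch).

    log_target: log of the unnormalized target density.
    The symmetric Gaussian proposal makes the Hastings correction
    cancel, so the acceptance ratio reduces to target(x')/target(x).
    """
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step)           # propose a move
        log_alpha = log_target(x_prop) - log_target(x)
        if math.log(rng.random()) < log_alpha:      # accept/reject
            x = x_prop
        chain.append(x)
    return chain

# Example target: standard normal, via its unnormalized log-density.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(chain) / len(chain)
```

    Working on the log scale, as above, avoids numerical underflow when target densities are very small.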

    A multiple-try Metropolis-Hastings algorithm with tailored proposals

    We present a new multiple-try Metropolis-Hastings algorithm designed to be especially beneficial when a tailored proposal distribution is available. The algorithm is based on a given acyclic graph G, where one of the nodes in G, say k, contains the current state of the Markov chain and the remaining nodes contain proposed states generated by applying the tailored proposal distribution. The algorithm alternates between two types of updates. The first type uses the tailored proposal distribution to generate new states in all nodes of G except node k. The second type generates a new value for k, thereby changing the current state. We evaluate the effectiveness of the proposed scheme in an example with previously defined target and proposal distributions.
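    For context, the classical multiple-try Metropolis step that such schemes generalize can be sketched as follows (one common formulation with a symmetric Gaussian proposal and weights w(y, x) = pi(y), not this paper's tailored-proposal variant):

```python
import math
import random

def mtm_step(log_target, x, k=5, step=1.0, rng=random):
    """One multiple-try Metropolis update (a sketch; the weight
    choice w(y, x) = pi(y) is valid for a symmetric proposal)."""
    # 1. Draw k trial proposals around the current state.
    ys = [x + rng.gauss(0.0, step) for _ in range(k)]
    wy = [math.exp(log_target(y)) for y in ys]
    # 2. Select one trial with probability proportional to its weight.
    y = rng.choices(ys, weights=wy)[0]
    # 3. Draw k-1 reference points around y; the k-th is the current x.
    xs = [y + rng.gauss(0.0, step) for _ in range(k - 1)] + [x]
    wx = [math.exp(log_target(xi)) for xi in xs]
    # 4. Generalized Metropolis ratio over the two weight sums.
    if rng.random() < min(1.0, sum(wy) / sum(wx)):
        return y
    return x

# Example run on a standard-normal target (our choice for illustration).
rng = random.Random(1)
x, chain = 0.0, []
for _ in range(10000):
    x = mtm_step(lambda t: -0.5 * t * t, x, rng=rng)
    chain.append(x)
```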

    Interacting Multiple Try Algorithms with Different Proposal Distributions

    We propose a new class of interacting Markov chain Monte Carlo (MCMC) algorithms designed to increase the efficiency of a modified multiple-try Metropolis (MTM) algorithm. The extension with respect to the existing MCMC literature is twofold. The proposed sampler extends the basic MTM algorithm by allowing different proposal distributions in the multiple-try generation step. We exploit the structure of the MTM algorithm with different proposal distributions to naturally introduce an interacting MTM mechanism (IMTM) that expands the class of population Monte Carlo methods. We show the validity of the algorithm and discuss the choice of the selection weights and of the different proposals. We provide numerical studies which show that the new algorithm can perform better than the basic MTM algorithm and that the interaction mechanism allows the IMTM to efficiently explore the state space.
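    The heterogeneous-proposal generation step described above can be sketched as follows (a simplified, non-interacting variant with our own choice of scales; each trial slot uses its own symmetric Gaussian proposal, and the reference point in each slot is redrawn from the matching proposal):

```python
import math
import random

def mtm_multi_step(log_target, x, steps=(0.2, 1.0, 5.0), rng=random):
    """One multiple-try update with a different proposal scale per
    trial (a sketch; weights w_j = pi(y_j) are one valid choice
    when every proposal is symmetric)."""
    pi = lambda t: math.exp(log_target(t))
    # One trial per proposal distribution.
    ys = [x + rng.gauss(0.0, s) for s in steps]
    wy = [pi(y) for y in ys]
    # Select a trial, remembering which proposal produced it.
    j = rng.choices(range(len(steps)), weights=wy)[0]
    y = ys[j]
    # Reference points: redraw from each proposal around y,
    # except slot j, which holds the current state x.
    xs = [x if i == j else y + rng.gauss(0.0, s)
          for i, s in enumerate(steps)]
    wx = [pi(xi) for xi in xs]
    return y if rng.random() < min(1.0, sum(wy) / sum(wx)) else x

# Example run on a standard-normal target.
rng = random.Random(2)
x, chain = 0.0, []
for _ in range(10000):
    x = mtm_multi_step(lambda t: -0.5 * t * t, x, rng=rng)
    chain.append(x)
```

    Mixing small and large scales in one update is what lets a single step both refine locally and attempt long jumps.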

    On the flexibility of the design of Multiple Try Metropolis schemes

    The Multiple Try Metropolis (MTM) method is a generalization of the classical Metropolis-Hastings algorithm in which the next state of the chain is chosen among a set of samples, according to normalized weights. Several extensions have been proposed in the literature. In this work, we highlight the flexibility available in designing MTM-type methods that fulfill the detailed balance condition. We discuss several possibilities and present different numerical results.
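    The weighted selection and the constraint mentioned above can be written compactly. In one standard MTM formulation (a common form, not necessarily the exact variant studied here), with trials y_1, ..., y_k and reference points x*_1, ..., x*_k, the move from x to the selected y is accepted with probability

```latex
\alpha(x,y) \;=\; \min\!\left\{ 1,\;
  \frac{\sum_{j=1}^{k} w(y_j, x)}{\sum_{j=1}^{k} w(x_j^{*}, y)} \right\},
\qquad
\pi(x)\,K(x,y) \;=\; \pi(y)\,K(y,x),
```

    where the second identity is the detailed balance condition that any admissible choice of weight functions w must preserve for the overall transition kernel K.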

    Order of magnitude time-reversible Markov chains and characterization of clustering processes

    We introduce the notion of order of magnitude reversibility (OM-reversibility) in Markov chains that are parametrized by a positive parameter ε. OM-reversibility is a weaker condition than reversibility, and requires only knowledge of the order of magnitude of the transition probabilities. For an irreducible, OM-reversible Markov chain on a finite state space, we prove that the stationary distribution satisfies order of magnitude detailed balance (the analog of detailed balance in reversible Markov chains). The result characterizes the states with positive probability in the limit of the stationary distribution as ε → 0, which finds an important application in the case of singularly perturbed Markov chains that are reducible for ε = 0. We show that OM-reversibility occurs naturally in macroscopic systems involving many interacting particles. Clustering is a common phenomenon in biological systems, in which particles or molecules aggregate at one location. We give a simple condition on the transition probabilities in an interacting-particle Markov chain that characterizes clustering. We show that such clustering processes are OM-reversible, and we find explicitly the order of magnitude of the stationary distribution. Further, we show that the single-pole states, in which all particles are at a single vertex, are the only states with positive probability in the limit of the stationary distribution as the rate of diffusion goes to zero.
    Comment: 22 pages, 3 figures
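    A plausible reading of "order of magnitude detailed balance" (our notation, chosen here for illustration; the paper's exact statement may differ): if the transition probabilities scale as P_ε(x,y) ≍ ε^Γ(x,y) and the stationary distribution as π_ε(x) ≍ ε^μ(x), then the product identity of ordinary detailed balance becomes a sum identity on the exponent scale,

```latex
\pi_\varepsilon(x)\,P_\varepsilon(x,y) \;\asymp\; \varepsilon^{\,\mu(x) + \Gamma(x,y)},
\qquad
\mu(x) + \Gamma(x,y) \;=\; \mu(y) + \Gamma(y,x),
```

    mirroring π(x)P(x,y) = π(y)P(y,x) while using only the orders of magnitude, not the probabilities themselves.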

    Computational Statistics
