
    Efficient Gibbs Sampling for Markov Switching GARCH Models

    We develop efficient simulation techniques for Bayesian inference on Markov Switching (MS) GARCH models. Our contribution to the existing literature is manifold. First, we discuss different multi-move sampling techniques for MS state space models, with particular attention to MS-GARCH models. Our multi-move sampling strategy is based on Forward Filtering Backward Sampling (FFBS) applied to an approximation of the MS-GARCH model. Another important contribution is the use of multi-point samplers, such as the Multiple-Try Metropolis (MTM) and the Multiple-Trial Metropolized Independence Sampler, in combination with FFBS for the MS-GARCH process. In this sense we extend to MS state space models the work of So [2006] on efficient MTM samplers for continuous state space models. Finally, we suggest further improving sampler efficiency by introducing the antithetic sampling of Craiu and Meng [2005] and Craiu and Lemieux [2007] within the FFBS. Our simulation experiments on the MS-GARCH model show that our multi-point and multi-move strategies allow the sampler to gain efficiency compared with single-move Gibbs sampling. (38 pages, 7 figures.)
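    The multi-move idea above, drawing the whole hidden state path in one block, can be sketched for a generic discrete-state hidden Markov model with Gaussian emissions. This is only an illustrative FFBS sketch, not the paper's MS-GARCH approximation; the model, parameters, and function names are assumptions.

```python
import numpy as np

def ffbs(y, P, means, sigmas, pi0, rng):
    """Forward Filtering Backward Sampling for a discrete-state HMM.

    y             : observations, shape (T,)
    P             : transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
    means, sigmas : per-state Gaussian emission parameters
    pi0           : initial state distribution
    Returns one joint draw of the hidden state path s_0 .. s_{T-1}.
    """
    T, K = len(y), len(pi0)
    # Gaussian likelihoods per (time, state); the 1/sqrt(2*pi) constant cancels
    lik = np.exp(-0.5 * ((y[:, None] - means) / sigmas) ** 2) / sigmas
    # Forward filtering: filt[t, k] = Pr(s_t = k | y_0 .. y_t)
    filt = np.zeros((T, K))
    pred = pi0 * lik[0]
    filt[0] = pred / pred.sum()
    for t in range(1, T):
        pred = (filt[t - 1] @ P) * lik[t]
        filt[t] = pred / pred.sum()
    # Backward sampling: draw the last state from the filter, then go backwards
    s = np.zeros(T, dtype=int)
    s[T - 1] = rng.choice(K, p=filt[T - 1])
    for t in range(T - 2, -1, -1):
        w = filt[t] * P[:, s[t + 1]]
        s[t] = rng.choice(K, p=w / w.sum())
    return s
```

    One call produces a joint draw of the full state path, which is what makes the strategy "multi-move" compared with updating one state at a time.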

    Interacting Multiple Try Algorithms with Different Proposal Distributions

    We propose a new class of interacting Markov chain Monte Carlo (MCMC) algorithms designed to increase the efficiency of a modified multiple-try Metropolis (MTM) algorithm. The extension with respect to the existing MCMC literature is twofold. First, the proposed sampler extends the basic MTM algorithm by allowing different proposal distributions in the multiple-try generation step. Second, we exploit the structure of the MTM algorithm with different proposal distributions to naturally introduce an interacting MTM mechanism (IMTM) that expands the class of population Monte Carlo methods. We show the validity of the algorithm and discuss the choice of the selection weights and of the different proposals. We provide numerical studies showing that the new algorithm can perform better than the basic MTM algorithm and that the interaction mechanism allows the IMTM to explore the state space efficiently.
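    A minimal sketch of one MTM transition in which each trial uses its own proposal distribution, assuming symmetric Gaussian random walks at distinct scales and the common weight choice w_j(y, x) = pi(y) q_j(x | y). This is a generic illustration of the mechanism, not the paper's IMTM implementation; all names are hypothetical.

```python
import numpy as np

def mtm_step_diff_proposals(x, log_pi, scales, rng):
    """One multiple-try Metropolis step where trial j uses a Gaussian
    random-walk proposal q_j with its own scale scales[j].

    Log weights: log w_j(y, x) = log pi(y) + log q_j(x | y), with the
    shared Gaussian normalizing constant dropped (it cancels in the ratio)."""
    m = len(scales)
    ys = x + scales * rng.standard_normal(m)          # y_j ~ q_j(. | x)

    def logw(y, z, j):
        return log_pi(y) - 0.5 * ((z - y) / scales[j]) ** 2 - np.log(scales[j])

    lw_y = np.array([logw(ys[j], x, j) for j in range(m)])
    p = np.exp(lw_y - lw_y.max())
    p /= p.sum()
    J = rng.choice(m, p=p)                            # select one trial
    y = ys[J]
    xs = y + scales * rng.standard_normal(m)          # reference points from y
    xs[J] = x                                         # x*_J is the current state
    lw_x = np.array([logw(xs[j], y, j) for j in range(m)])
    # generalized MTM acceptance ratio: sum_j w_j(y_j, x) / sum_j w_j(x*_j, y)
    if np.log(rng.uniform()) < np.logaddexp.reduce(lw_y) - np.logaddexp.reduce(lw_x):
        return y
    return x
```

    Running several such chains and letting them exchange trial points is, roughly, the interaction mechanism the abstract describes.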

    Construction of weakly CUD sequences for MCMC sampling

    In Markov chain Monte Carlo (MCMC) sampling, considerable thought goes into constructing random transitions, but those transitions are almost always driven by a simulated IID sequence. Recently it has been shown that replacing the IID sequence with a weakly completely uniformly distributed (WCUD) sequence leads to consistent estimation in finite state spaces. Unfortunately, few WCUD sequences are known. This paper gives general methods for proving that a sequence is WCUD, shows that some specific sequences are WCUD, and shows that certain operations on WCUD sequences yield new WCUD sequences. A numerical example on a 42-dimensional continuous Gibbs sampler found that some WCUD input sequences produced variance reductions ranging from tens to hundreds for posterior means of the parameters, compared with IID inputs. Published in the Electronic Journal of Statistics (http://dx.doi.org/10.1214/07-EJS162) by the Institute of Mathematical Statistics (http://www.i-journals.org/ejs/).

    Metropolis Sampling

    Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation, and optimization problems. Markov Chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of MCMC techniques, introducing the basic notions and its main properties. We describe in detail all the elements involved in the MH algorithm and the most relevant variants. Several improvements and recent extensions proposed in the literature are also briefly discussed, providing a quick but thorough overview of the current world of Metropolis-based sampling. Published in Wiley StatsRef: Statistics Reference Online.
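    For concreteness, a minimal random-walk Metropolis-Hastings sketch for a one-dimensional target, using a symmetric Gaussian proposal so the acceptance probability reduces to min(1, pi(x')/pi(x)). The function and parameter names are illustrative; the document itself covers many more variants.

```python
import numpy as np

def metropolis_hastings(log_pi, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings.

    Propose x' = x + step * N(0, 1) and accept with probability
    min(1, pi(x') / pi(x)); the symmetric proposal density cancels."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_steps)
    for t in range(n_steps):
        xp = x + step * rng.standard_normal()
        # accept/reject in log space for numerical stability
        if np.log(rng.uniform()) < log_pi(xp) - log_pi(x):
            x = xp
        out[t] = x                  # on rejection the chain repeats x
    return out
```

    After discarding an initial burn-in, the retained draws approximate samples from the invariant distribution pi.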

    On the flexibility of the design of Multiple Try Metropolis schemes

    The Multiple Try Metropolis (MTM) method is a generalization of the classical Metropolis-Hastings algorithm in which the next state of the chain is chosen from a set of candidate samples according to normalized weights. Several extensions have been proposed in the literature. In this work, we show and remark upon the flexibility of the design of MTM-type methods that fulfil the detailed balance condition. We discuss several possibilities and present different numerical results.
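    One concrete instance of this design freedom is the simplified weight choice w(y, x) = pi(y), which is valid when the proposal is symmetric. A hedged sketch of one such MTM step follows (generic illustration with illustrative names, not any specific scheme from the paper):

```python
import numpy as np

def mtm_step_symmetric(x, log_pi, m, step, rng):
    """One MTM step with a symmetric Gaussian random-walk proposal and the
    simplified weights w(y, x) = pi(y)."""
    ys = x + step * rng.standard_normal(m)            # m candidate trials
    lw_y = np.array([log_pi(y) for y in ys])
    p = np.exp(lw_y - lw_y.max())
    p /= p.sum()
    y = ys[rng.choice(m, p=p)]                        # select a candidate
    # reference set: m-1 fresh points drawn from y, plus the current state x
    xs = y + step * rng.standard_normal(m - 1)
    lw_x = np.append(np.array([log_pi(z) for z in xs]), log_pi(x))
    # accept with min(1, sum_j pi(y_j) / sum_j pi(x*_j))
    if np.log(rng.uniform()) < np.logaddexp.reduce(lw_y) - np.logaddexp.reduce(lw_x):
        return y
    return x
```

    Changing the weight function (subject to the detailed balance condition) yields other members of the MTM family without altering this overall structure.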

    A multiple-try Metropolis-Hastings algorithm with tailored proposals

    We present a new multiple-try Metropolis-Hastings algorithm designed to be especially beneficial when a tailored proposal distribution is available. The algorithm is based on a given acyclic graph G, where one of the nodes in G, say k, contains the current state of the Markov chain and the remaining nodes contain proposed states generated by applying the tailored proposal distribution. The Metropolis-Hastings algorithm alternates between two types of updates. The first update type uses the tailored proposal distribution to generate new states in all nodes of G except node k. The second update type generates a new value for k, thereby changing the current state. We evaluate the effectiveness of the proposed scheme in an example with previously defined target and proposal distributions.