
    Particle Gibbs with Ancestor Sampling

    Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a novel PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.
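    The core of a PGAS sweep is a conditional SMC pass in which one particle is pinned to the reference trajectory and, at each time step, the reference particle's ancestor is redrawn with weights proportional to how well each particle explains the next reference state. The sketch below illustrates this with a bootstrap proposal; the model interface (mu_sample, f_sample, f_logpdf, g_logpdf) is an assumed, vectorized signature chosen for the illustration, not code from the paper.

```python
import numpy as np

def pgas_sweep(y, x_ref, N, mu_sample, f_sample, f_logpdf, g_logpdf, rng=None):
    """One sweep of a PGAS-style kernel (minimal sketch, bootstrap proposal).

    y[t]            : observation at time t, t = 0..T-1
    x_ref[t]        : reference trajectory from the previous iteration
    mu_sample(n)    : n draws from the initial distribution   (assumed vectorized)
    f_sample(xp)    : one transition draw per parent state in the array xp
    f_logpdf(x, xp) : log f(x | x_{t-1}) against an array of parent states
    g_logpdf(y, x)  : log g(y | x) against an array of states
    """
    rng = rng or np.random.default_rng()
    T = len(y)
    X = np.empty((T, N))             # particle states
    A = np.zeros((T, N), dtype=int)  # ancestor indices

    X[0] = mu_sample(N)
    X[0, -1] = x_ref[0]              # particle N-1 carries the reference
    logw = g_logpdf(y[0], X[0])

    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        A[t, :-1] = rng.choice(N, size=N - 1, p=w)   # resample the free particles
        # Ancestor sampling: reweight each time-(t-1) particle by how well it
        # explains the reference state x_ref[t], then redraw the ancestor.
        la = np.log(w) + f_logpdf(x_ref[t], X[t - 1])
        wa = np.exp(la - la.max()); wa /= wa.sum()
        A[t, -1] = rng.choice(N, p=wa)
        X[t] = f_sample(X[t - 1, A[t]])
        X[t, -1] = x_ref[t]                          # pin the reference state
        logw = g_logpdf(y[t], X[t])

    # Draw one output trajectory by tracing an ancestral lineage backwards.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    out = np.empty(T)
    for t in range(T - 1, -1, -1):
        out[t] = X[t, k]
        if t > 0:
            k = A[t, k]
    return out
```

    Iterating the sweep, with each returned trajectory used as the next reference, yields a Markov chain on state trajectories; model parameters can then be updated by Gibbs steps conditional on the sampled trajectory.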

    An extended space approach for particle Markov chain Monte Carlo methods

    In this paper we consider fully Bayesian inference in general state space models. Existing particle Markov chain Monte Carlo (MCMC) algorithms use an augmented model that takes into account all the variables sampled in a sequential Monte Carlo algorithm. This paper describes an approach that also uses sequential Monte Carlo to construct an approximation to the state space, but generates extra states using MCMC runs at each time point. We construct an augmented model for our extended space with the marginal distribution of the sampled states matching the posterior distribution of the state vector. We show how our method may be combined with particle independent Metropolis-Hastings or particle Gibbs steps to obtain a smoothing algorithm. All the Metropolis acceptance probabilities are identical to those obtained in existing approaches, so there is no extra cost in terms of Metropolis-Hastings rejections when using our approach. The number of MCMC iterates at each time point is chosen by the user, and our augmented model collapses back to the model in Olsson and Ryden (2011) as the number of MCMC iterations is reduced. We show empirically that our approach works well on applied examples and can outperform existing methods.
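    To make the idea concrete, the sketch below shows one ingredient the abstract describes: at a given time point, each particle is extended by a short MCMC run and all visited states are retained as the extra states of the extended space. The random-walk Metropolis kernel, its step size, and the vectorized log_target interface are assumptions made for the illustration; the paper's augmented-model weighting over these extra states is not reproduced here.

```python
import numpy as np

def mcmc_extend(x, log_target, n_mcmc, step=0.5, rng=None):
    """Apply n_mcmc random-walk Metropolis steps independently to each of the
    N particles in x, targeting log_target (e.g. log p(x_t | x_{t-1}, y_t) up
    to a constant, assumed vectorized over particles).  All visited states are
    returned, since they enlarge the space at this time point.
    """
    rng = rng or np.random.default_rng()
    states = [x.copy()]
    lp = log_target(x)
    for _ in range(n_mcmc):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_target(prop)
        # accept/reject each particle's move independently
        accept = np.log(rng.random(x.shape[0])) < lp_prop - lp
        x = np.where(accept, prop, x)
        lp = np.where(accept, lp_prop, lp)
        states.append(x.copy())
    return np.stack(states)   # shape (n_mcmc + 1, N): states on the extended space
```

    With n_mcmc = 0 the returned array contains only the original particles, which mirrors the abstract's remark that the augmented model collapses back to the unextended construction when no MCMC iterations are performed.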

    On Scalable Particle Markov Chain Monte Carlo

    Particle Markov Chain Monte Carlo (PMCMC) is a general approach to carrying out Bayesian inference in non-linear and non-Gaussian state space models. Our article shows how to scale up PMCMC in terms of the number of observations and parameters by expressing the target density of the PMCMC in terms of the basic uniform or standard normal random numbers used in the sequential Monte Carlo algorithm, instead of the particles. Parameters that can be drawn efficiently conditional on the particles are generated by particle Gibbs. All the other parameters are drawn by conditioning on the basic uniform or standard normal random variables; e.g., parameters that are highly correlated with the states, or parameters whose generation is expensive when conditioning on the states. The performance of this hybrid sampler is investigated empirically by applying it to univariate and multivariate stochastic volatility models having both a large number of parameters and a large number of latent states; the results show that it is much more efficient than competing PMCMC methods. We also show that the proposed hybrid sampler is ergodic.
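    The key device is that a particle-filter likelihood estimate becomes a deterministic function of the basic random numbers once those numbers are made explicit, so MCMC can operate on them directly. Below is a minimal sketch for a toy AR(1)-plus-noise model; the model, the parameterization theta = (phi, sx, sy), and the Crank-Nicolson refreshment of the normals are illustrative assumptions, not the article's actual sampler.

```python
import numpy as np

def loglik_hat(theta, U, V, y):
    """Bootstrap particle-filter log-likelihood estimate (up to an additive
    constant), written as a deterministic function of the basic random numbers:
      U : (T, N) standard normals driving state propagation
      V : (T,)   uniforms driving systematic resampling (V[0] is unused)
    Toy AR(1)-plus-noise model with theta = (phi, sx, sy), assumed for this sketch.
    """
    phi, sx, sy = theta
    T, N = U.shape
    x = sx / np.sqrt(1.0 - phi**2) * U[0]     # x_0 from its stationary law
    ll = 0.0
    for t in range(T):
        if t > 0:
            # systematic resampling, driven entirely by the single uniform V[t]
            pos = (V[t] + np.arange(N)) / N
            idx = np.minimum(np.searchsorted(np.cumsum(w), pos), N - 1)
            x = phi * x[idx] + sx * U[t]      # propagate with the fixed normals
        logw = -0.5 * ((y[t] - x) / sy) ** 2 - np.log(sy)
        m = logw.max()
        w = np.exp(logw - m)
        s = w.sum()
        ll += m + np.log(s / N)               # log of the mean incremental weight
        w /= s
    return ll

# Refresh the basic normals with a prior-reversible Crank-Nicolson proposal
# while holding theta and V fixed (illustrative values throughout).
rng = np.random.default_rng(0)
T, N = 100, 64
theta = (0.9, 0.5, 1.0)
U, V = rng.standard_normal((T, N)), rng.random(T)
y = rng.standard_normal(T)                    # placeholder data
rho = 0.99
U_prop = rho * U + np.sqrt(1.0 - rho**2) * rng.standard_normal((T, N))
if np.log(rng.random()) < loglik_hat(theta, U_prop, V, y) - loglik_hat(theta, U, V, y):
    U = U_prop
```

    Because the prior on U is standard normal and the Crank-Nicolson proposal is reversible with respect to it, the acceptance ratio reduces to the difference of the two log-likelihood estimates; taking rho close to one correlates successive estimates and keeps the acceptance rate usable even with moderate N.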