
    Controlled Sequential Monte Carlo

    Sequential Monte Carlo methods, also known as particle methods, are a popular set of techniques for approximating high-dimensional probability distributions and their normalizing constants. These methods have found numerous applications in statistics and related fields, e.g. for inference in non-linear non-Gaussian state space models and in complex static models. Like many Monte Carlo sampling schemes, they rely on proposal distributions which crucially impact their performance. We introduce here a class of controlled sequential Monte Carlo algorithms, where the proposal distributions are determined by approximating the solution to an associated optimal control problem using an iterative scheme. This method builds upon a number of existing algorithms in econometrics, physics, and statistics for inference in state space models, and generalizes them so as to accommodate complex static models. We provide a theoretical analysis of the fluctuation and stability of this methodology, which also yields insight into the properties of related algorithms. We demonstrate significant gains over state-of-the-art methods at a fixed computational complexity on a variety of applications.
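    As a point of reference for the role of the proposal distributions described above, the following is a minimal Python sketch of a plain bootstrap particle filter for a hypothetical one-dimensional linear-Gaussian state space model; the model settings (autoregression coefficient 0.9, observation noise 0.5), the function names and the choice of multinomial resampling are illustrative assumptions rather than the paper's setup. Controlled sequential Monte Carlo would replace the bootstrap (prior) proposal below with a twisted proposal obtained by iteratively approximating an associated optimal control problem, a step not reproduced here.

    # Hypothetical model: x_t = 0.9 x_{t-1} + N(0, 1),  y_t = x_t + N(0, 0.5^2).
    import numpy as np

    rng = np.random.default_rng(0)

    def log_obs_density(y, x, sigma=0.5):
        # Gaussian observation log-density log N(y; x, sigma^2)
        return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (y - x) ** 2 / sigma**2

    def bootstrap_smc(ys, n_particles=500, phi=0.9):
        # Plain (uncontrolled) SMC: propagate with the prior, weight by the likelihood.
        x = np.zeros(n_particles)                        # x_0 = 0, matching the simulation below
        log_z = 0.0                                      # running log normalizing-constant estimate
        for y in ys:
            x = phi * x + rng.normal(size=n_particles)   # bootstrap proposal = prior transition
            log_w = log_obs_density(y, x)                # incremental importance weights
            m = log_w.max()
            log_z += m + np.log(np.mean(np.exp(log_w - m)))
            w = np.exp(log_w - m)
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
            x = x[idx]
        return log_z

    # Simulate data from the same hypothetical model and estimate its log-likelihood.
    T, phi, sigma = 50, 0.9, 0.5
    x_true, ys = np.zeros(T), np.zeros(T)
    for t in range(T):
        x_true[t] = phi * (x_true[t - 1] if t > 0 else 0.0) + rng.normal()
        ys[t] = x_true[t] + sigma * rng.normal()
    print("SMC log-likelihood estimate:", bootstrap_smc(ys))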

    Multilevel linear models, Gibbs samplers and multigrid decompositions (with Discussion)

    We study the convergence properties of the Gibbs Sampler in the context of posterior distributions arising from Bayesian analysis of conditionally Gaussian hierarchical models. We develop a multigrid approach to derive analytic expressions for the convergence rates of the algorithm for various widely used model structures, including nested and crossed random effects. Our results apply to multilevel models with an arbitrary number of layers in the hierarchy, while most previous work was limited to the two-level nested case. The theoretical results provide explicit and easy-to-implement guidelines for optimizing practical implementations of the Gibbs Sampler, such as which parametrization to choose (e.g. centred or non-centred), which constraint to impose to guarantee statistical identifiability, and which parameters to monitor in the diagnostic process. Simulations suggest that the results are also informative in the context of non-Gaussian distributions and of more general MCMC schemes, such as gradient-based ones.
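    To make the parametrization question above concrete, here is a minimal Python sketch of a Gibbs sampler for a hypothetical two-level Gaussian model with known variances, y_ij ~ N(theta_j, sigma^2) and theta_j ~ N(mu, tau^2) with a flat prior on mu, written in the centred parametrization; the model, the known-variance simplification and all names are illustrative assumptions rather than the paper's general setting. The non-centred variant would instead update eta_j = theta_j - mu together with mu.

    import numpy as np

    rng = np.random.default_rng(1)

    def gibbs_centred(y, sigma2=1.0, tau2=1.0, n_iter=2000):
        # y is a list of 1-D arrays, one per group (centred parametrization).
        J = len(y)
        n = np.array([len(yj) for yj in y])
        ybar = np.array([yj.mean() for yj in y])
        mu = 0.0
        mu_draws = np.empty(n_iter)
        for it in range(n_iter):
            # theta_j | mu, y  ~  N(precision-weighted mean, 1 / precision)
            prec = n / sigma2 + 1.0 / tau2
            mean = (n * ybar / sigma2 + mu / tau2) / prec
            theta = mean + rng.normal(size=J) / np.sqrt(prec)
            # mu | theta  ~  N(mean(theta), tau2 / J)  under a flat prior
            mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
            mu_draws[it] = mu
        return mu_draws

    # Toy data: 5 groups of 20 observations each, with group means 0, 1, ..., 4.
    y = [rng.normal(loc=g, scale=1.0, size=20) for g in range(5)]
    print("posterior mean of mu:", gibbs_centred(y).mean())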

    Perfect and imperfect simulations in stochastic geometry

    This thesis presents new developments and applications of simulation methods in stochastic geometry. Simulation is a useful tool for the statistical analysis of spatial point patterns. We use simulation to investigate the power of tests based on the J-function, a new measure of spatial interaction in point patterns, and compare it with the power of tests based on alternative measures of spatial interaction. Many models in stochastic geometry can only be sampled using Markov chain Monte Carlo methods. We present and extend a new generation of Markov chain Monte Carlo methods, the perfect simulation algorithms. In contrast to conventional Markov chain Monte Carlo methods, perfect simulation methods are able to check whether the sampled Markov chain has reached equilibrium, thus ensuring that the exact equilibrium distribution is sampled. There are two types of perfect simulation algorithms: Coupling from the Past and Fill’s interruptible algorithm. We present Coupling from the Past in its most general available form and provide a classification of Coupling from the Past algorithms. Coupling from the Past is then extended to produce exact samples from a Boolean model conditioned to cover a set of locations with grains. Finally, we discuss Fill’s interruptible algorithm and show how to extend the original algorithm to continuous distributions by applying it to a point process example.
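    The core mechanism of Coupling from the Past can be illustrated on a much simpler state space than the point process models treated in the thesis. The Python sketch below uses a hypothetical monotone chain, a reflecting random walk on {0, ..., 10} with up-probability 0.3: coupled minimal and maximal chains are run from ever further in the past with shared randomness, and once they have coalesced their common value at time 0 is an exact draw from the stationary distribution. The chain, the constants and the function names are assumptions chosen for illustration only.

    import numpy as np

    rng = np.random.default_rng(2)
    K, p = 10, 0.3

    def update(x, u):
        # Monotone update function: the same uniform u drives every coupled chain.
        return min(x + 1, K) if u < p else max(x - 1, 0)

    def cftp():
        T = 1
        us = []                                    # shared randomness; seeds near time 0 are reused
        while True:
            while len(us) < T:                     # extend the seeds further into the past
                us.append(rng.uniform())
            lo, hi = 0, K                          # minimal and maximal starting states at time -T
            for t in range(T - 1, -1, -1):         # run forward from time -T up to time 0
                lo, hi = update(lo, us[t]), update(hi, us[t])
            if lo == hi:                           # coalescence: every chain agrees at time 0
                return lo
            T *= 2                                 # otherwise restart from further in the past

    samples = [cftp() for _ in range(1000)]
    print("estimated stationary mean:", np.mean(samples))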