
    Delayed acceptance ABC-SMC

    Approximate Bayesian computation (ABC) is now an established technique for statistical inference used in cases where the likelihood function is computationally expensive or not available. It relies on the use of a model that is specified in the form of a simulator, and approximates the likelihood at a parameter value θ by simulating auxiliary data sets x and evaluating the distance of x from the true data y. However, ABC is not computationally feasible in cases where using the simulator for each θ is very expensive. This paper investigates this situation in cases where a cheap, but approximate, simulator is available. The approach is to employ delayed acceptance Markov chain Monte Carlo (MCMC) within an ABC sequential Monte Carlo (SMC) sampler in order to, in a first stage of the kernel, use the cheap simulator to rule out parts of the parameter space that are not worth exploring, so that the "true" simulator is only run (in the second stage of the kernel) where there is a reasonable chance of accepting proposed values of θ. We show that this approach can be used quite automatically, with few tuning parameters. Applications to stochastic differential equation models and latent doubly intractable distributions are presented.
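
    A minimal sketch of the two-stage screening idea behind such a kernel, assuming hypothetical user-supplied cheap_simulator, true_simulator, distance, log_prior and propose functions; the paper embeds a kernel of this kind, with the full delayed-acceptance correction, inside an ABC-SMC sampler.

    import numpy as np

    def da_abc_mcmc_step(theta, y_obs, log_prior, propose,
                         cheap_simulator, true_simulator,
                         distance, eps_cheap, eps_true, rng):
        """One two-stage ABC-MCMC move (illustrative sketch, not the paper's exact kernel).

        Stage 1 screens the proposal with the cheap simulator; the expensive
        simulator only runs when the cheap distance is within eps_cheap.
        A symmetric proposal is assumed, so only the prior ratio appears.
        """
        theta_prop = propose(theta, rng)

        # Metropolis-Hastings prior ratio (symmetric proposal assumed).
        if np.log(rng.uniform()) > log_prior(theta_prop) - log_prior(theta):
            return theta                      # rejected before any simulation

        # Stage 1: the cheap simulator rules out unpromising proposals.
        x_cheap = cheap_simulator(theta_prop, rng)
        if distance(x_cheap, y_obs) > eps_cheap:
            return theta                      # rejected at the cheap stage

        # Stage 2: the expensive simulator runs only for surviving proposals.
        x_true = true_simulator(theta_prop, rng)
        if distance(x_true, y_obs) <= eps_true:
            return theta_prop                 # accepted
        return theta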

    Approximate Bayesian Computations to fit and compare insurance loss models

    Approximate Bayesian Computation (ABC) is a statistical learning technique to calibrate and select models by comparing observed data to simulated data. This technique bypasses the use of the likelihood and requires only the ability to generate synthetic data from the models of interest. We apply ABC to fit and compare insurance loss models using aggregated data, and show along the way how to use ABC for the more common claim count and claim size data. A state-of-the-art ABC implementation in Python is proposed. It uses sequential Monte Carlo to sample from the posterior distribution and the Wasserstein distance to compare the observed and synthetic data.
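
    A minimal rejection-ABC sketch combining the same ingredients, simulated aggregate losses compared to observed losses via the Wasserstein distance (the paper uses a full SMC sampler). The compound Poisson-lognormal loss model and the uniform priors below are illustrative assumptions, not the models from the paper.

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(1)

    def simulate_aggregate_losses(lam, mu, sigma, n_periods, rng):
        """Aggregate loss per period: Poisson(lam) claim counts, lognormal(mu, sigma) claim sizes."""
        counts = rng.poisson(lam, size=n_periods)
        return np.array([rng.lognormal(mu, sigma, size=c).sum() for c in counts])

    # Stand-in "observed" data generated from known parameters.
    y_obs = simulate_aggregate_losses(5.0, 1.0, 0.5, 200, rng)

    n_draws, keep = 2000, 100
    draws, dists = [], []
    for _ in range(n_draws):
        lam = rng.uniform(1.0, 10.0)      # priors are illustrative
        mu = rng.uniform(0.0, 2.0)
        sigma = rng.uniform(0.1, 1.0)
        x_sim = simulate_aggregate_losses(lam, mu, sigma, len(y_obs), rng)
        draws.append((lam, mu, sigma))
        dists.append(wasserstein_distance(y_obs, x_sim))

    # Keep the parameter draws giving the smallest Wasserstein distances.
    order = np.argsort(dists)[:keep]
    posterior_sample = np.array(draws)[order]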

    Ensemble MCMC: Accelerating Pseudo-Marginal MCMC for State Space Models using the Ensemble Kalman Filter

    Particle Markov chain Monte Carlo (pMCMC) is now a popular method for performing Bayesian statistical inference on challenging state space models (SSMs) with unknown static parameters. It uses a particle filter (PF) at each iteration of an MCMC algorithm to unbiasedly estimate the likelihood for a given static parameter value. However, pMCMC can be computationally intensive when a large number of particles in the PF is required, such as when the data are highly informative, the model is misspecified and/or the time series is long. In this paper we exploit the ensemble Kalman filter (EnKF) developed in the data assimilation literature to speed up pMCMC. We replace the unbiased PF likelihood with the biased EnKF likelihood estimate within MCMC to sample over the space of the static parameter. On a wide class of non-linear SSMs, we demonstrate that our extended ensemble MCMC (eMCMC) methods can significantly reduce the computational cost whilst maintaining reasonable accuracy. We also propose several extensions of the vanilla eMCMC algorithm to further improve computational efficiency. Computer code to implement our methods on all the examples can be downloaded from https://github.com/cdrovandi/Ensemble-MCMC
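
    A minimal sketch of an EnKF log-likelihood estimate for a one-dimensional SSM with additive Gaussian observations, which could be plugged into a standard Metropolis-Hastings loop in place of the particle-filter estimate. The transition function, observation model and initial ensemble here are illustrative assumptions, not the models from the paper.

    import numpy as np
    from scipy.stats import norm

    def enkf_loglik(theta, y, n_ens, transition, obs_sd, rng):
        """Biased EnKF log-likelihood estimate for a 1-D state space model (sketch).

        `transition(x, theta, rng)` propagates an ensemble of states one step;
        observations are assumed to satisfy y_t = x_t + N(0, obs_sd^2).
        """
        x = rng.normal(0.0, 1.0, size=n_ens)   # initial ensemble (assumed prior)
        loglik = 0.0
        for yt in y:
            # Forecast step: push the ensemble through the state transition.
            x = transition(x, theta, rng)
            # Gaussian approximation to the one-step-ahead predictive of y_t.
            mean, var = x.mean(), x.var() + obs_sd**2
            loglik += norm.logpdf(yt, loc=mean, scale=np.sqrt(var))
            # Stochastic EnKF update with perturbed observations.
            gain = x.var() / var
            y_pert = yt + rng.normal(0.0, obs_sd, size=n_ens)
            x = x + gain * (y_pert - x)
        return loglik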

    ABC Samplers

    This Chapter, "ABC Samplers", is to appear in the forthcoming Handbook of Approximate Bayesian Computation (2018). It details the main ideas and algorithms used to sample from the ABC approximation to the posterior distribution, including methods based on rejection/importance sampling, MCMC and sequential Monte Carlo