    Estimating DSGE Models using Multilevel Sequential Monte Carlo in Approximate Bayesian Computation

    Get PDF
    Dynamic Stochastic General Equilibrium (DSGE) models allow for probabilistic estimation aimed at formulating and monitoring macroeconomic policies. In this study, we propose applying the Multilevel Sequential Monte Carlo algorithm within Approximate Bayesian Computation (MLSMC-ABC) to increase the robustness of DSGE models built for small samples and irregular data. Our results indicate that MLSMC-ABC improves the estimation of these models in two respects: first, the accuracy of the existing models is increased, and second, computational cost is reduced through shorter execution times.
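
    The sequential core of such a sampler can be sketched as follows: particles drawn from the prior are resampled, perturbed, and re-accepted under a decreasing tolerance schedule. This is a minimal illustration assuming a toy one-parameter Gaussian model in place of a DSGE model; the simulator, prior, tolerance schedule, and move kernel are all placeholder assumptions, and the importance weights of a full SMC-ABC sampler are omitted for brevity.

```python
# Minimal SMC-ABC sketch with a decreasing tolerance schedule (toy model,
# not the authors' implementation; weights omitted for brevity).
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(loc=1.0, scale=0.5, size=50)   # stand-in "observed" data

def simulate(theta, n=50):
    """Toy simulator: Gaussian observations with unknown mean theta."""
    return rng.normal(loc=theta, scale=0.5, size=n)

def distance(y_sim, y_ref):
    """Discrepancy between simulated and reference summary statistics."""
    return abs(y_sim.mean() - y_ref.mean())

def smc_abc(n_particles=500, tolerances=(1.0, 0.5, 0.25, 0.1)):
    # Stage 0: particles drawn from a Uniform(-5, 5) prior.
    particles = rng.uniform(-5, 5, size=n_particles)
    for eps in tolerances:
        accepted = []
        while len(accepted) < n_particles:
            # Resample a particle from the previous population and perturb it.
            theta = rng.choice(particles) + rng.normal(scale=0.2)
            # Accept only if the simulated data are close enough to y_obs.
            if distance(simulate(theta), y_obs) <= eps:
                accepted.append(theta)
        particles = np.array(accepted)
    return particles

posterior = smc_abc()
print("approximate posterior mean:", posterior.mean())
```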

    ABC Samplers

    Full text link
    This chapter, "ABC Samplers", is to appear in the forthcoming Handbook of Approximate Bayesian Computation (2018). It details the main ideas and algorithms used to sample from the ABC approximation to the posterior distribution, including methods based on rejection/importance sampling, MCMC, and sequential Monte Carlo.
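
    The simplest of the samplers the chapter covers, rejection ABC, can be written in a few lines: draw a parameter from the prior, simulate data, and accept the draw if a summary of the simulated data falls within a tolerance of the observed summary. The Gaussian toy model, summary statistic, and tolerance below are illustrative assumptions, not the chapter's code.

```python
# Minimal rejection-ABC sketch (toy model; all settings are illustrative).
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=100)            # stand-in observed data

def rejection_abc(n_samples=200, eps=0.05):
    samples = []
    while len(samples) < n_samples:
        theta = rng.uniform(-10, 10)              # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=100)  # simulate from the model
        # Accept theta if simulated and observed summaries are close.
        if abs(y_sim.mean() - y_obs.mean()) <= eps:
            samples.append(theta)
    return np.array(samples)

print("approximate posterior mean:", rejection_abc().mean())
```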

    Rapid Bayesian inference for expensive stochastic models

    Full text link
    Almost all fields of science rely upon statistical inference to estimate unknown parameters in theoretical and computational models. While the performance of modern computer hardware continues to grow, the computational requirements for the simulation of models are growing even faster. This is largely due to the increase in model complexity, often including stochastic dynamics, that is necessary to describe and characterize phenomena observed using modern, high-resolution experimental techniques. Such models are rarely analytically tractable, meaning that extremely large numbers of stochastic simulations are required for parameter inference; in such cases, parameter inference can be practically impossible. In this work, we present new computational Bayesian techniques that accelerate inference for expensive stochastic models by using computationally inexpensive approximations to inform feasible regions in parameter space, and through learning transforms that adjust the biased approximate inferences to more closely represent the correct inferences under the expensive stochastic model. Using topical examples from ecology and cell biology, we demonstrate a speed improvement of an order of magnitude without any loss in accuracy. This represents a substantial improvement over current state-of-the-art methods for Bayesian computation when appropriate model approximations are available.
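
    The first of the two ideas, screening parameter draws with a cheap approximation before running the expensive simulator, can be sketched as a two-stage rejection scheme. Both simulators, the screening rule, and the tolerances below are placeholder assumptions, and the paper's learned bias-correcting transforms are not shown.

```python
# Two-stage ABC sketch: cheap pre-screening, then the expensive simulator
# (placeholder models; not the authors' code).
import numpy as np

rng = np.random.default_rng(2)
y_obs = rng.normal(3.0, 1.0, size=200)            # stand-in observed data

def cheap_model(theta):
    """Fast deterministic approximation (e.g. a mean-field/ODE surrogate)."""
    return theta  # here the surrogate predicts the data mean directly

def expensive_model(theta, n=200):
    """Expensive stochastic simulator (stands in for many SDE/SSA runs)."""
    return rng.normal(theta, 1.0, size=n)

def two_stage_abc(n_samples=200, eps_cheap=1.0, eps=0.05):
    samples = []
    while len(samples) < n_samples:
        theta = rng.uniform(-10, 10)              # draw from the prior
        # Stage 1: discard quickly using the inexpensive approximation.
        if abs(cheap_model(theta) - y_obs.mean()) > eps_cheap:
            continue
        # Stage 2: run the expensive simulator only for surviving draws.
        if abs(expensive_model(theta).mean() - y_obs.mean()) <= eps:
            samples.append(theta)
    return np.array(samples)

print("approximate posterior mean:", two_stage_abc().mean())
```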

    Multilevel rejection sampling for approximate Bayesian computation

    No full text
    Likelihood-free methods, such as approximate Bayesian computation, are powerful tools for practical inference problems with intractable likelihood functions. Markov chain Monte Carlo and sequential Monte Carlo variants of approximate Bayesian computation can be effective techniques for sampling posterior distributions in an approximate Bayesian computation setting. However, without careful consideration of convergence criteria and selection of proposal kernels, such methods can lead to very biased inference or computationally inefficient sampling. In contrast, rejection sampling for approximate Bayesian computation, despite being computationally intensive, results in independent, identically distributed samples from the approximated posterior. An alternative method is proposed for the acceleration of likelihood-free Bayesian inference that applies multilevel Monte Carlo variance reduction techniques directly to rejection sampling. The resulting method retains the accuracy advantages of rejection sampling while significantly improving the computational efficiency.
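
    Such a multilevel estimator has a telescoping structure: an expectation under the finest-tolerance ABC posterior is decomposed into a coarse-tolerance estimate plus correction terms across a decreasing tolerance sequence. The sketch below shows that structure for a posterior-mean estimate under a toy Gaussian model; the coupling between levels that drives the method's actual variance reduction is omitted for brevity, so the sample allocation and tolerances here are illustrative assumptions only.

```python
# Telescoping multilevel estimator over a decreasing ABC tolerance sequence
# (structure only; level coupling omitted, so no real variance reduction).
import numpy as np

rng = np.random.default_rng(3)
y_obs = rng.normal(0.5, 1.0, size=100)            # stand-in observed data

def abc_posterior_mean(eps, n):
    """Posterior-mean estimate from n rejection-ABC samples at tolerance eps."""
    samples = []
    while len(samples) < n:
        theta = rng.uniform(-5, 5)                # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=100)  # simulate from the model
        if abs(y_sim.mean() - y_obs.mean()) <= eps:
            samples.append(theta)
    return float(np.mean(samples))

def multilevel_abc_mean(tolerances=(0.8, 0.4, 0.2, 0.1), n0=400):
    # Level 0: many cheap samples at the coarsest (largest) tolerance.
    estimate = abc_posterior_mean(tolerances[0], n0)
    for ell in range(1, len(tolerances)):
        n_ell = max(n0 // 2 ** ell, 20)  # fewer samples at finer, costlier levels
        # Telescoping correction between consecutive tolerance levels.
        estimate += (abc_posterior_mean(tolerances[ell], n_ell)
                     - abc_posterior_mean(tolerances[ell - 1], n_ell))
    return estimate

print("multilevel posterior-mean estimate:", multilevel_abc_mean())
```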