
    Bernstein-von Mises theorem and misspecified models: a review

    This is a review of the asymptotic and non-asymptotic behaviour of Bayesian methods under model misspecification. In particular, we focus on consistency, i.e. convergence of the posterior distribution to the point mass at the best parametric approximation to the true model, and on conditions for the posterior to be locally Gaussian around this point. For well-specified regular models, the variance of the Gaussian approximation coincides with the inverse Fisher information, making Bayesian inference asymptotically efficient. In this review, we discuss how this is affected by model misspecification. We also discuss approaches to adjusting Bayesian inference so that it remains asymptotically efficient under model misspecification.
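
    For readers less familiar with the result, the display below sketches the standard form of the statements summarised above; the notation (theta-hat, I, V, W, theta*) is generic and chosen here for illustration, not taken from the review itself.

        % Well-specified case: the Bernstein-von Mises theorem gives a Gaussian
        % approximation to the posterior whose variance is the inverse Fisher
        % information, hence asymptotic efficiency of Bayesian inference.
        \[
          \Pi(\theta \mid X_{1:n}) \approx \mathcal{N}\bigl(\hat\theta_n,\; n^{-1} I(\theta_0)^{-1}\bigr)
        \]
        % Misspecified case: the posterior concentrates at the best parametric
        % approximation \theta^* and is still approximately Gaussian, but its
        % variance n^{-1} V^{-1} (with V the Hessian of the limiting
        % log-likelihood at \theta^*) no longer matches the sandwich sampling
        % variance of the maximum likelihood estimator, which is what the
        % adjustment methods mentioned above aim to correct.
        \[
          \Pi(\theta \mid X_{1:n}) \approx \mathcal{N}\bigl(\hat\theta_n,\; n^{-1} V^{-1}\bigr),
          \qquad
          \mathrm{Var}(\hat\theta_n) \approx n^{-1} V^{-1} W V^{-1}
        \]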

    Fitting latent non-Gaussian models using variational Bayes and Laplace approximations

    Latent Gaussian models (LGMs) are perhaps the most commonly used class of models in statistical applications. Nevertheless, in areas ranging from longitudinal studies in biostatistics to geostatistics, it is easy to find datasets that contain inherently non-Gaussian features, such as sudden jumps or spikes, that adversely affect the inferences and predictions made from an LGM. These datasets require more general latent non-Gaussian models (LnGMs) that can handle such non-Gaussian features automatically. However, fast implementations and easy-to-use software are lacking, which prevents LnGMs from becoming widely used. In this paper, we derive variational Bayes algorithms for fast and scalable inference of LnGMs. The approximation leads to an LGM that downweights extreme events in the latent process, reducing their impact and leading to more robust inferences. It can be applied to a wide range of models, such as autoregressive processes for time series, simultaneous autoregressive models for areal data, and spatial Matérn models. To facilitate Bayesian inference, we introduce the ngvb package, in which LGMs implemented in R-INLA can be extended to LnGMs by adding a single line of code.
    Comment: 30 pages, 10 figures
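
    The abstract states that an LGM already implemented in R-INLA becomes an LnGM by adding a single line of code. Below is a minimal sketch of how that workflow might look in R; the call to ngvb() and its argument names are assumptions inferred from the abstract and the package name, not a verified excerpt from the package documentation.

        # Hypothetical sketch: fit a latent Gaussian model with R-INLA, then pass the
        # fit to ngvb() to obtain the latent non-Gaussian extension (assumed interface).
        library(INLA)
        library(ngvb)

        # Toy time series with one sudden jump, modelled with an AR(1) latent process.
        n     <- 100
        jumps <- c(rep(0, 40), 5, rep(0, 59))   # jump at t = 41
        dat   <- data.frame(y = cumsum(rnorm(n) + jumps) + rnorm(n, sd = 0.1),
                            idx = 1:n)

        # Standard LGM fit in R-INLA; config = TRUE keeps the full latent
        # configurations in the output.
        lgm <- inla(y ~ -1 + f(idx, model = "ar1"),
                    data = dat, family = "gaussian",
                    control.compute = list(config = TRUE))

        # The single extra line promoting the LGM to an LnGM (assumed signature).
        lngm <- ngvb(fit = lgm)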