
    Approximating Probability Densities by Iterated Laplace Approximations

    The Laplace approximation is an old but frequently used method to approximate integrals for Bayesian calculations. In this paper we develop an extension of the Laplace approximation by applying it iteratively to the residual, i.e., the difference between the current approximation and the true function. The final approximation is thus a linear combination of multivariate normal densities, where the coefficients are chosen to achieve a good fit to the target distribution. We illustrate on real and artificial examples that the proposed procedure is a computationally efficient alternative to current approaches for the approximation of multivariate probability densities. The R package iterLap implementing the methods described in this article is available from the CRAN servers. Comment: to appear in Journal of Computational and Graphical Statistics, http://pubs.amstat.org/loi/jcg
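    The iterLap R package mentioned above implements the authors' procedure; the Python sketch below (not the package's API) only illustrates the basic idea for a one-dimensional target: repeatedly Laplace-fit the residual between the target density and the current mixture, then choose nonnegative mixture weights to match the target. The helper names, the nonnegative-least-squares weight refit, and the fixed number of components are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize, nnls
from scipy.stats import norm

def laplace_component(f, x0):
    """One-dimensional Laplace fit: locate the mode of f numerically and
    use the curvature of log f at the mode as the precision."""
    obj = lambda x: float(-np.log(max(float(f(np.atleast_1d(x))[0]), 1e-300)))
    mode = float(np.atleast_1d(minimize(obj, x0).x)[0])
    h = 1e-4
    lf = lambda x: np.log(max(float(f(np.array([x]))[0]), 1e-300))
    curv = -(lf(mode + h) - 2.0 * lf(mode) + lf(mode - h)) / h**2
    sd = 1.0 / np.sqrt(max(curv, 1e-12))
    return mode, sd

def iterated_laplace(target, grid, n_components=4):
    """Build a normal-mixture approximation by repeatedly Laplace-fitting
    the residual between the target and the current approximation."""
    comps, weights = [], np.array([])
    approx = np.zeros_like(grid)
    for _ in range(n_components):
        resid = lambda x, a=approx: np.maximum(target(x) - np.interp(x, grid, a), 0.0)
        x0 = grid[int(np.argmax(resid(grid)))]
        comps.append(laplace_component(resid, x0))
        # refit nonnegative weights so the mixture matches the target on the grid
        basis = np.column_stack([norm.pdf(grid, m, s) for m, s in comps])
        weights, _ = nnls(basis, target(grid))
        approx = basis @ weights
    return comps, weights

# toy bimodal target density
target = lambda x: 0.6 * norm.pdf(x, -2.0, 0.7) + 0.4 * norm.pdf(x, 2.5, 1.2)
grid = np.linspace(-8.0, 8.0, 400)
components, weights = iterated_laplace(target, grid)
```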

    Conjugate Bayes for probit regression via unified skew-normal distributions

    Regression models for dichotomous data are ubiquitous in statistics. Besides being useful for inference on binary responses, these methods also serve as building blocks in more complex formulations, such as density regression, nonparametric classification and graphical models. Within the Bayesian framework, inference proceeds by updating the priors for the coefficients, typically set to be Gaussians, with the likelihood induced by probit or logit regressions for the responses. In this updating, the apparent absence of a tractable posterior has motivated a variety of computational methods, including Markov chain Monte Carlo routines and algorithms which approximate the posterior. Despite being routinely implemented, Markov chain Monte Carlo strategies face mixing or time-inefficiency issues in large p and small n studies, whereas approximate routines fail to capture the skewness typically observed in the posterior. This article proves that the posterior distribution for the probit coefficients has a unified skew-normal kernel under Gaussian priors. This novel result allows efficient Bayesian inference for a wide class of applications, especially in large p and small-to-moderate n studies where state-of-the-art computational methods face notable issues. These advances are outlined in a genetic study, and further motivate the development of a wider class of conjugate priors for probit models along with methods to obtain independent and identically distributed samples from the unified skew-normal posterior.
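    The object this abstract characterizes is the posterior induced by a Gaussian prior and a probit likelihood. The Python sketch below only evaluates the unnormalized log-posterior kernel of that model; the prior variance, the toy "large p, small n" data, and the function name are illustrative assumptions, and the paper's unified skew-normal sampler is not implemented here.

```python
import numpy as np
from scipy.stats import norm

def probit_log_posterior(beta, X, y, prior_var=10.0):
    """Unnormalized log-posterior for probit coefficients under an
    isotropic Gaussian prior N(0, prior_var * I); the paper shows this
    posterior has a unified skew-normal kernel."""
    eta = X @ beta
    # probit likelihood: y_i | beta ~ Bernoulli(Phi(x_i' beta))
    log_lik = np.sum(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta))
    log_prior = -0.5 * beta @ beta / prior_var
    return log_lik + log_prior

# toy "large p, small n" data: n = 20 observations, p = 50 covariates
rng = np.random.default_rng(0)
n, p = 20, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = 1.5
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)
print(probit_log_posterior(np.zeros(p), X, y))
```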

    The joint projected normal and skew-normal: a distribution for poly-cylindrical data

    The contribution of this work is the introduction of a multivariate circular-linear (or poly-cylindrical) distribution obtained by combining the projected normal and the skew-normal. We show the flexibility of our proposal, its property of closure under marginalization, and how to quantify multivariate dependence. Due to a non-identifiability issue that our proposal inherits from the projected normal, a computational problem arises. We overcome it in a Bayesian framework, adding suitable latent variables and showing that posterior samples can be obtained by post-processing the output of the estimation algorithm. Under specific prior choices, this approach enables us to implement a Markov chain Monte Carlo algorithm relying only on Gibbs steps, where the updates of the parameters are done as if we were working with a multivariate normal likelihood. The proposed approach can also be used with the projected normal. As a proof of concept, we show on simulated examples the ability of our algorithm to recover the parameter values and to solve the identification problem. The proposal is then used in a real data example, where the turning angles (circular variables) and the logarithm of the step lengths (linear variables) of four zebras are jointly modelled.
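    The two ingredients named above are straightforward to simulate generically: a projected-normal angle is the direction of a bivariate normal draw (the discarded radius is the latent quantity behind the non-identifiability mentioned in the abstract), and a skew-normal can supply the linear component. The Python sketch below simulates such circular-linear pairs; it is not the paper's joint model or its Gibbs sampler, and all parameter values and names are illustrative.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(1)

def rprojected_normal(n, mu, sigma, rng):
    """Draws from a projected normal: sample a bivariate normal and keep
    only the angle of each vector (the radius is the latent variable
    responsible for the non-identifiability)."""
    z = rng.multivariate_normal(mu, sigma, size=n)
    return np.arctan2(z[:, 1], z[:, 0])  # angles in (-pi, pi]

# circular component: turning angle; linear component: log step-length
theta = rprojected_normal(500, mu=np.array([1.0, 0.5]),
                          sigma=np.array([[1.0, 0.3], [0.3, 1.0]]), rng=rng)
log_step = skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=500, random_state=rng)

# each row pairs one circular and one linear observation (a "cylinder")
cylindrical_data = np.column_stack([theta, log_step])
```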