10 research outputs found

    Probabilistic models for structured sparsity

    Get PDF

    Structured Sparse Modelling with Hierarchical GP

    Get PDF
    In this paper, a new Bayesian model for sparse linear regression with spatio-temporal structure is proposed. It incorporates structural assumptions through a hierarchical Gaussian process prior on the spike-and-slab coefficients. We design an inference algorithm based on expectation propagation and evaluate the model on real data. Comment: SPARS 201
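
    As a rough illustration of the generative side of this construction, the sketch below draws spike-and-slab coefficients whose inclusion probabilities come from a latent Gaussian process pushed through a probit link. All concrete choices here (squared-exponential kernel, probit link, the length-scale and variance values) are assumptions for the demo, not details taken from the paper, and the expectation propagation inference step is not shown.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical 1-D locations for the regression coefficients.
n = 200
x = np.linspace(0.0, 1.0, n)

# Squared-exponential covariance over coefficient locations
# (illustrative kernel choice; not specified by the abstract).
ell, tau = 0.1, 2.0
K = tau**2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Latent GP draw, pushed through a probit link to get spatially
# correlated spike-and-slab inclusion probabilities.
g = rng.multivariate_normal(np.zeros(n), K + 1e-8 * np.eye(n))
p_active = norm.cdf(g)  # P(coefficient comes from the slab)

# Spike-and-slab draw: exact zero with probability 1 - p_active,
# a standard Gaussian slab otherwise.
active = rng.random(n) < p_active
w = np.where(active, rng.normal(0.0, 1.0, size=n), 0.0)

# The nonzeros of w cluster into contiguous blocks, which is the
# structured sparsity the hierarchical GP prior encodes.
print(f"active coefficients: {active.sum()} / {n}")
```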

    CATVI: conditional and adaptively truncated variational inference for hierarchical Bayesian nonparametric models

    Get PDF
    Current variational inference methods for hierarchical Bayesian nonparametric models can neither characterize the correlation structure among latent variables, due to the mean-field setting, nor infer the true posterior dimension, because of the universal truncation. To overcome these limitations, we propose the conditional and adaptively truncated variational inference method (CATVI) by maximizing the nonparametric evidence lower bound and integrating Monte Carlo into the variational inference framework. CATVI enjoys several advantages over traditional methods, including a smaller divergence between the variational and true posteriors, a reduced risk of underfitting or overfitting, and improved prediction accuracy. Empirical studies on three large datasets reveal that CATVI applied to Bayesian nonparametric topic models substantially outperforms competing models, providing lower perplexity and clearer topic-word clustering.
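
    To make the adaptive-truncation idea concrete, here is a minimal sketch that truncates a Dirichlet-process stick-breaking posterior at the components whose expected weight exceeds a threshold, so the truncation level follows the data rather than a fixed universal bound. The posterior-mean stick update and the threshold eps are illustrative assumptions; CATVI's actual conditional variational updates with Monte Carlo integration are not reproduced here.

```python
import numpy as np

def adaptive_truncation(counts, alpha=1.0, eps=1e-3):
    """Keep only stick-breaking components whose expected weight exceeds
    eps. Sticks are Beta(1, alpha) a priori; given assignment counts
    c_k, the posterior mean of stick k is
    (1 + c_k) / (1 + c_k + alpha + sum_{j>k} c_j)."""
    counts = np.asarray(counts, dtype=float)
    tail = np.concatenate([np.cumsum(counts[::-1])[::-1][1:], [0.0]])  # sum_{j>k} c_j
    v = (1.0 + counts) / (1.0 + counts + alpha + tail)
    sticks = np.cumprod(np.concatenate([[1.0], 1.0 - v[:-1]]))
    weights = v * sticks          # expected component weights
    return weights, weights > eps

# Simulated usage counts, e.g. topic-assignment counts from one
# Monte Carlo sweep over documents (made-up numbers).
counts = [500, 320, 90, 10, 2, 0, 0, 0]
weights, keep = adaptive_truncation(counts)
print("expected weights:", np.round(weights, 3))
print("kept components :", int(keep.sum()), "of", len(counts))
```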

    Horseshoe priors for edge-preserving linear Bayesian inversion

    Full text link
    In many large-scale inverse problems, such as computed tomography and image deblurring, characterization of sharp edges in the solution is desired. Within the Bayesian approach to inverse problems, edge preservation is often achieved using Markov random field priors based on heavy-tailed distributions. Another strategy, popular in statistics, is the application of hierarchical shrinkage priors. An advantage of this formulation lies in expressing the prior as a conditionally Gaussian distribution depending on global and local hyperparameters, which are endowed with heavy-tailed hyperpriors. In this work, we revisit the shrinkage horseshoe prior and introduce its formulation for edge-preserving settings. We discuss a sampling framework based on the Gibbs sampler to solve the resulting hierarchical formulation of the Bayesian inverse problem. In particular, one of the conditional distributions is a high-dimensional Gaussian, and the rest are derived in closed form by using a scale-mixture representation of the heavy-tailed hyperpriors. Applications from imaging science show that our computational procedure is able to compute sharp edge-preserving posterior point estimates with reduced uncertainty.
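
    The scale-mixture trick the abstract mentions can be sketched for a plain horseshoe regression: conditioned on inverse-gamma auxiliary variables, every hyperparameter update is closed form and the coefficient block is Gaussian. The sampler below follows the standard auxiliary-variable decomposition of the half-Cauchy hyperpriors; it is a generic horseshoe sketch with a fixed noise variance, not the paper's edge-preserving variant (which would place the prior on increments of the solution).

```python
import numpy as np

rng = np.random.default_rng(2)

def inv_gamma(shape, rate):
    # InvGamma(shape, rate) draw via 1 / Gamma(shape, scale=1/rate).
    return 1.0 / rng.gamma(shape, 1.0 / rate)

def horseshoe_gibbs(X, y, n_iter=1000, sigma2=1.0):
    """Gibbs sampler for y = X b + noise with b_j ~ N(0, lam2_j * tau2),
    lam_j and tau half-Cauchy via inverse-gamma auxiliaries nu_j, xi."""
    n, p = X.shape
    lam2, nu = np.ones(p), np.ones(p)
    tau2, xi = 1.0, 1.0
    XtX, Xty = X.T @ X, X.T @ y
    draws = []
    for _ in range(n_iter):
        # High-dimensional conditionally Gaussian block for b:
        # b ~ N(Q^{-1} X^T y / sigma2, Q^{-1}) with Q = L L^T.
        Q = XtX / sigma2 + np.diag(1.0 / (lam2 * tau2))
        L = np.linalg.cholesky(Q)
        b = np.linalg.solve(Q, Xty / sigma2) + np.linalg.solve(L.T, rng.standard_normal(p))
        # Closed-form inverse-gamma updates for local and global scales.
        lam2 = inv_gamma(1.0, 1.0 / nu + b**2 / (2.0 * tau2))
        nu = inv_gamma(1.0, 1.0 + 1.0 / lam2)
        tau2 = inv_gamma((p + 1) / 2.0, 1.0 / xi + np.sum(b**2 / lam2) / 2.0)
        xi = inv_gamma(1.0, 1.0 + 1.0 / tau2)
        draws.append(b)
    return np.array(draws)

# Tiny demo: sparse signal, noisy underdetermined measurements.
n, p = 50, 100
X = rng.standard_normal((n, p))
b_true = np.zeros(p)
b_true[:5] = 3.0
y = X @ b_true + 0.5 * rng.standard_normal(n)
draws = horseshoe_gibbs(X, y)
print("posterior mean, first 6 coords:", np.round(draws[500:].mean(axis=0)[:6], 2))
```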

    Practical Hilbert space approximate Bayesian Gaussian processes for probabilistic programming

    Full text link
    Gaussian processes are powerful non-parametric probabilistic models for stochastic functions. However, they entail a computational complexity that is intractable when the number of observations is large, especially when estimated with fully Bayesian methods such as Markov chain Monte Carlo. In this paper, we focus on a novel approach for low-rank approximate Bayesian Gaussian processes, based on a basis function approximation via Laplace eigenfunctions for stationary covariance functions. The main contribution of this paper is a detailed analysis of the performance and practical implementation of the method in relation to key factors such as the number of basis functions, the domain of the prediction space, and the smoothness of the latent function. We provide intuitive visualizations and recommendations for choosing the values of these factors, which make it easier for users to improve approximation accuracy and computational performance. We also propose diagnostics for checking that the number of basis functions and the domain of the prediction space are adequate given the data. The proposed approach is simple and exhibits an attractive computational complexity due to its linear structure, and it is easy to implement in probabilistic programming frameworks. Several illustrative examples of the performance and applicability of the method in the probabilistic programming language Stan are presented, together with the underlying Stan model code. Comment: 33 pages, 19 figures
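
    The core of the approximation can be reproduced in a few lines: project the stationary kernel onto the Laplace eigenfunctions of the domain and weight them by the kernel's spectral density. The sketch below does this in 1-D for a squared-exponential kernel, in plain Python rather than Stan; the specific values m = 30 and L = 2.5 are assumptions for the demo (choosing them well is exactly what the paper analyzes).

```python
import numpy as np

def hsgp_basis(x, m, L):
    """Eigenfunctions of the Dirichlet Laplacian on [-L, L]:
    phi_j(x) = sqrt(1/L) * sin(pi * j * (x + L) / (2L))."""
    j = np.arange(1, m + 1)
    sqrt_lam = np.pi * j / (2.0 * L)  # square roots of the eigenvalues
    phi = np.sqrt(1.0 / L) * np.sin(sqrt_lam * (x[:, None] + L))
    return phi, sqrt_lam

def se_spectral_density(w, sigma, ell):
    # Spectral density of the 1-D squared-exponential kernel.
    return sigma**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * w) ** 2)

# Compare the exact and approximate covariance matrices on a grid.
x = np.linspace(-1.0, 1.0, 100)
sigma, ell = 1.0, 0.3
K_exact = sigma**2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

m, L = 30, 2.5  # number of basis functions, domain half-width
phi, sqrt_lam = hsgp_basis(x, m, L)
K_approx = (phi * se_spectral_density(sqrt_lam, sigma, ell)) @ phi.T
print("max abs covariance error:", np.abs(K_exact - K_approx).max())
```

    Because the basis functions do not depend on the kernel hyperparameters, phi can be precomputed once and reused across sampler iterations, which is what gives the method the linear structure and probabilistic-programming friendliness the abstract highlights.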

    Bayesian inference for spatio-temporal spike-and-slab priors

    Get PDF
    In this work, we address the problem of solving a series of underdetermined linear inverse problems subject to a sparsity constraint. We generalize the spike-and-slab prior distribution to encode a priori correlation of the support of the solution in both space and time by imposing a transformed Gaussian process on the spike-and-slab probabilities. An expectation propagation (EP) algorithm for posterior inference under the proposed model is derived. For large-scale problems, the standard EP algorithm can be prohibitively slow. We therefore introduce three different approximation schemes to reduce the computational complexity. Finally, we demonstrate the proposed model using numerical experiments based on both synthetic and real data sets. Peer reviewed
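
    A minimal generative sketch of the space-time coupling: a separable (Kronecker) covariance over space and time drives a latent GP whose probit transform gives the spike probabilities, so the support evolves smoothly between frames. The kernels, link, and grid sizes below are assumed for illustration, and the EP algorithm and its three approximation schemes are not shown.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Separable space-time covariance for the latent support process
# (an illustrative choice; the model only requires some transformed GP).
ns, nt = 40, 15
xs, xt = np.linspace(0, 1, ns), np.linspace(0, 1, nt)
Ks = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / 0.05**2)
Kt = np.exp(-0.5 * (xt[:, None] - xt[None, :]) ** 2 / 0.3**2)
K = np.kron(Kt, Ks)  # Kronecker structure over all space-time points

# Latent GP -> probit link -> correlated spike-and-slab support.
# The negative mean biases the field toward sparsity.
g = rng.multivariate_normal(np.full(ns * nt, -0.5), K + 1e-8 * np.eye(ns * nt))
support = (rng.random(ns * nt) < norm.cdf(g)).reshape(nt, ns)
print("active fraction per time step:", np.round(support.mean(axis=1), 2))
```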