
    Nearly optimal Bayesian Shrinkage for High Dimensional Regression

    During the past decade, shrinkage priors have received much attention in the Bayesian analysis of high-dimensional data. In this paper, we study this problem for high-dimensional linear regression models. We show that if the shrinkage prior has a heavy, flat tail and allocates a sufficiently large probability mass in a very small neighborhood of zero, then its posterior properties are as good as those of the spike-and-slab prior. While retaining its efficiency in Bayesian computation, the shrinkage prior can achieve a nearly optimal contraction rate and the same selection consistency as the spike-and-slab prior. Our numerical results show that, under posterior consistency, Bayesian methods can yield much better results in variable selection than regularization methods such as the Lasso and SCAD. We also establish a Bernstein–von Mises-type result comparable to that of Castillo et al. (2015); this result leads to a convenient way to quantify the uncertainty of the regression coefficient estimates, which has been beyond the ability of regularization methods.
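    The key property described above — a prior that places most of its mass in a tiny neighborhood of zero while keeping a heavy, flat tail — can be checked numerically. The sketch below is not from the paper: a two-Gaussian mixture (a narrow "spike" component and a wide "slab" component, with hypothetical weights and scales) stands in for an actual continuous shrinkage prior such as the horseshoe.

```python
import math

def normal_cdf(x, sd):
    # Standard normal CDF scaled to standard deviation sd
    return 0.5 * (1.0 + math.erf(x / (sd * math.sqrt(2.0))))

def mass_near_zero(eps, w, sd_spike, sd_slab):
    """P(|beta| < eps) under the mixture w*N(0, sd_spike^2) + (1-w)*N(0, sd_slab^2)."""
    def mass(sd):
        return normal_cdf(eps, sd) - normal_cdf(-eps, sd)
    return w * mass(sd_spike) + (1.0 - w) * mass(sd_slab)

# Illustrative (hypothetical) choices: a very narrow spike, a wide slab.
near = mass_near_zero(0.01, w=0.9, sd_spike=0.001, sd_slab=5.0)
print(round(near, 3))
```

Despite the slab component being nearly flat over a wide range (giving the heavy tail), almost all of the prior mass concentrates within `|beta| < 0.01`, which is the behavior the contraction-rate result requires.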

    Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior

    In recent years, a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields. These methods are mostly based on L1-regularized optimization, which has several disadvantages, such as the inability to assess model uncertainty and the expensive cross-validation needed to find the optimal regularization parameter. Moreover, the model's predictive performance may degrade dramatically with a suboptimal value of the regularization parameter (a suboptimal value is sometimes chosen deliberately to induce sparseness). We propose a fully Bayesian approach based on a "spike and slab" prior (similar to L0 regularization) that does not suffer from these shortcomings. We develop an approximate MCMC method combining Langevin dynamics and reversible jump MCMC to conduct inference in this model. Experiments show that the proposed model learns a good combination of structure and parameter values without the need for separate hyper-parameter tuning. Moreover, the model's predictive performance is much more robust than that of L1-based methods with hyper-parameter settings that induce highly sparse model structures. Comment: Accepted in the Conference on Uncertainty in Artificial Intelligence (UAI), 201

    Asymptotic Properties for Bayesian Neural Network in Besov Space

    Neural networks have shown great predictive power when dealing with various unstructured data such as images and natural language. The Bayesian neural network captures the uncertainty of prediction by placing a prior distribution on the parameters of the model and computing the posterior distribution. In this paper, we show that the Bayesian neural network with a spike-and-slab prior is consistent, with a nearly minimax convergence rate, when the true regression function lies in a Besov space. Even when the smoothness of the regression function is unknown, the same posterior convergence rate holds, and thus the spike-and-slab prior is adaptive to the smoothness of the regression function. We also consider the shrinkage prior, which is more computationally feasible than other priors, and show that it attains the same convergence rate. In other words, we propose a practical Bayesian neural network with guaranteed asymptotic properties.

    Device Detection and Channel Estimation in MTC with Correlated Activity Pattern

    This paper provides a solution to the activity detection and channel estimation problem in grant-free access with correlated device activity patterns. In particular, we consider a machine-type communications (MTC) network operating in event-triggered traffic mode, where the devices are distributed over clusters with an activity behaviour that exhibits both intra-cluster and inter-cluster sparsity patterns. To model this sparsity structure, we propose a structured sparsity-inducing spike-and-slab prior, which provides a flexible way to encode prior information about the correlated sparse activity pattern. Furthermore, we derive a Bayesian inference scheme based on the expectation propagation (EP) framework to solve the joint user identification and channel estimation (JUICE) problem. Numerical results highlight the significant gains obtained by the proposed structured sparsity-inducing spike-and-slab prior in terms of both user identification accuracy and channel estimation performance. Comment: This is the extended abstract for the paper accepted for presentation at Asilomar 202
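    The clustered activity pattern described above can be pictured with a small generative sketch. This is a hypothetical stand-in for the paper's structured prior, not its actual model: a cluster is "triggered" with some probability, and only devices inside a triggered cluster may be active, so activity is sparse across clusters (inter-cluster) and within them (intra-cluster). The function name and parameters are illustrative.

```python
import random

random.seed(1)

def sample_activity(n_clusters, devices_per_cluster, p_cluster, p_active):
    """Draw a clustered sparse activity pattern.

    A cluster is triggered with probability p_cluster; each device in a
    triggered cluster is active with probability p_active, and devices in
    untriggered clusters are always silent.
    """
    pattern = []
    for _ in range(n_clusters):
        triggered = random.random() < p_cluster
        cluster = [int(triggered and random.random() < p_active)
                   for _ in range(devices_per_cluster)]
        pattern.append(cluster)
    return pattern

pattern = sample_activity(4, 5, p_cluster=0.5, p_active=0.8)
```

Setting `p_cluster = 0` yields an all-silent network, while a triggered cluster tends to have many active devices at once — exactly the kind of correlated support that an i.i.d. sparsity prior cannot express.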

    A Contribution to Variable Selection for the Cox Proportional Hazards Model with High-Dimensional Predictors

    The aim of this thesis is to develop a variable selection framework with the spike-and-slab prior distribution via the hazard function of the Cox model. Specifically, we consider the transformation of the score and information functions for the partial likelihood function, evaluated at the given data, from the parameter space into the space generated by the logarithm of the hazard ratio. Thereby, we reduce the nonlinear complexity of the estimation equation for the Cox model and allow the use of a wider variety of stable variable selection methods. We then use a stochastic variable search Gibbs sampling approach via the spike-and-slab prior distribution to obtain the sparsity structure of the covariates associated with the survival outcome. To demonstrate the efficiency and accuracy of the proposed method in both low-dimensional and high-dimensional settings, we conduct numerical simulations to evaluate its finite-sample performance. Finally, we apply this novel framework in biological contexts to real-world data sets, such as primary biliary cirrhosis and lung adenocarcinoma data, to find important variables associated with decreased survival in subjects with the aforementioned diseases.
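    The stochastic search step at the heart of spike-and-slab Gibbs sampling can be sketched in a few lines. This is a generic illustration, not the thesis's algorithm: given a coefficient value, the sampler draws an inclusion indicator from its conditional posterior under a two-Gaussian spike-and-slab prior (large coefficients are attributed to the slab, tiny ones to the spike). All weights and scales here are hypothetical.

```python
import math
import random

random.seed(0)

def gibbs_indicator(beta, w, sd_spike, sd_slab):
    """One Gibbs update for the inclusion indicator gamma_j.

    Prior: beta_j ~ w * N(0, sd_slab^2) + (1 - w) * N(0, sd_spike^2).
    Returns a sampled indicator and the conditional inclusion probability.
    """
    def normal_pdf(x, sd):
        return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

    p_slab = w * normal_pdf(beta, sd_slab)
    p_spike = (1.0 - w) * normal_pdf(beta, sd_spike)
    prob = p_slab / (p_slab + p_spike)
    return (1 if random.random() < prob else 0), prob

# A large coefficient is almost surely assigned to the slab (selected);
# a near-zero one is usually absorbed by the spike (dropped).
_, p_big = gibbs_indicator(2.0, w=0.5, sd_spike=0.05, sd_slab=2.0)
_, p_small = gibbs_indicator(0.01, w=0.5, sd_spike=0.05, sd_slab=2.0)
```

Iterating such updates over all coefficients, alternated with draws of the coefficients themselves, yields posterior inclusion probabilities from which a sparse model can be read off.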