    Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing

    For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing. We are particularly motivated by problems where the number of features greatly exceeds the number of training examples, but where only a few features suffice for accurate classification. We show that sum-product GAMP can be used to (approximately) minimize the classification error rate and that max-sum GAMP can be used to minimize a wide variety of regularized loss functions. Furthermore, we describe an expectation-maximization (EM)-based scheme to learn the associated model parameters online, as an alternative to cross-validation, and we show that GAMP's state-evolution framework can be used to accurately predict the misclassification rate. Finally, we present a detailed numerical study to confirm the accuracy, speed, and flexibility afforded by our GAMP-based approaches to binary linear classification and feature selection.
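
    A minimal sketch of the regularized-loss setup described above, using a generic proximal-gradient (ISTA) solver for an l1-regularized logistic loss rather than the authors' GAMP iterations; the function name and the lam/n_iter parameters are illustrative assumptions, not from the paper.

        import numpy as np

        def sparse_logistic_ista(A, y, lam=0.1, n_iter=500):
            # Minimize sum_i log(1 + exp(-y_i * a_i^T w)) + lam * ||w||_1
            # by proximal gradient. NOT GAMP itself: just a plain solver
            # for the same kind of regularized loss max-sum GAMP targets.
            m, n = A.shape
            step = 4.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L <= ||A||_2^2 / 4
            w = np.zeros(n)
            for _ in range(n_iter):
                z = y * (A @ w)                        # margins, y in {-1, +1}
                sig = 0.5 * (1.0 - np.tanh(z / 2.0))   # stable sigmoid(-z)
                grad = -A.T @ (y * sig)                # logistic-loss gradient
                u = w - step * grad
                w = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)  # soft threshold
            return w

    With many more features than training examples and a suitable lam, the returned w is sparse, mirroring the feature-selection behavior described above.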

    Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

    In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities referred to as the Rectified Gaussian Scale Mixture (R-GSM) to model the sparsity-enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities such as the rectified Laplacian and rectified Student-t distributions with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the Expectation-Maximization (EM) algorithm. Using the EM-based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation. These methods include Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing, and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix. Comment: Under review by IEEE Transactions on Signal Processing.
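
    For orientation, below is the skeleton of a plain (unrectified) sparse-Bayesian-learning EM loop of the kind R-SBL builds on; the rectified-Gaussian E-step moments that distinguish the four R-SBL variants are omitted, and noise_var/n_iter are assumed values.

        import numpy as np

        def sbl_em(Phi, y, noise_var=1e-2, n_iter=100):
            # Evidence maximization for y = Phi x + noise with x_i ~ N(0, gamma_i).
            # E-step: Gaussian posterior of x; M-step: gamma_i = mu_i^2 + Sigma_ii.
            m, n = Phi.shape
            gamma = np.ones(n)                           # prior variances (hyperparameters)
            for _ in range(n_iter):
                PhiG = Phi * gamma[None, :]              # Phi @ diag(gamma)
                Sigma_y = noise_var * np.eye(m) + PhiG @ Phi.T
                K = np.linalg.solve(Sigma_y, PhiG).T     # = diag(gamma) Phi^T Sigma_y^{-1}
                mu = K @ y                               # posterior mean
                Sigma_diag = gamma - np.einsum('ij,ji->i', K, PhiG)
                gamma = mu ** 2 + Sigma_diag             # EM hyperparameter update
            return mu, gamma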

    Adaptive Algorithm for Sparse Signal Recovery

    Spike and slab priors play a key role in inducing sparsity for sparse signal recovery. The use of such priors results in hard non-convex, mixed integer programming problems. Most existing algorithms for these optimization problems rely on simplifying assumptions or relaxations, or incur high computational expense. We propose a new adaptive alternating direction method of multipliers (AADMM) algorithm to solve the presented optimization problem directly. The algorithm is based on the one-to-one mapping between the support and the non-zero elements of the signal. At each step of the algorithm, we update the support by either adding an index to it or removing an index from it, and use the alternating direction method of multipliers to recover the signal corresponding to the updated support. Experiments on synthetic data and real-world images show that the proposed AADMM algorithm provides superior performance and is computationally cheaper than the recently developed iterative convex refinement (ICR) algorithm.
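
    A toy caricature of the add/remove support loop is sketched below: it grows the support by the column most correlated with the residual, re-solves on the active set (plain least squares here, where the paper uses ADMM), and drops near-zero indices. k_max, drop_tol, and the correlation-based selection rule are illustrative assumptions, not the AADMM criteria.

        import numpy as np

        def greedy_support_ls(A, y, k_max=10, drop_tol=1e-3):
            # Alternate: add the most promising index, fit on the support,
            # then prune indices whose coefficients are negligible.
            m, n = A.shape
            support, x = [], np.zeros(n)
            for _ in range(k_max):
                r = y - A @ x
                j = int(np.argmax(np.abs(A.T @ r)))      # add step
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                support = [s for s, c in zip(support, coef) if abs(c) > drop_tol]  # remove step
                x = np.zeros(n)
                if support:
                    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                    x[support] = coef
            return x, support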

    Fast LASSO based DOA tracking

    In this paper, we propose a sequential, fast DOA tracking technique using the measurements of a uniform linear sensor array in the far field of a set of narrowband sources. Our approach is based on the sparse approximation technique LASSO (Least Absolute Shrinkage and Selection Operator), which has recently gained considerable interest for DOA and other estimation problems. Viewing the LASSO optimization as Bayesian estimation, we first define a class of prior distributions suitable for the sparse representation of the model and discuss its relation to the priors over DOAs and waveforms. Inspired by the Kalman filtering method, we introduce a nonlinear sequential filter on this family of distributions. We derive the filter for a simple random-walk motion model of the DOAs. The method consists of consecutive weighted LASSO optimizations, one per new measurement, with the LASSO weights updated for the next step.
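
    A sketch of the sequential weighted-LASSO idea follows: grid the DOAs, solve a complex weighted LASSO per snapshot, and turn each estimate into the next snapshot's weights. The half-wavelength steering model, the ISTA solver, and the 1/(|x|+eps) reweighting rule are generic assumptions; the paper derives its weight update from the Bayesian filter rather than from this rule.

        import numpy as np

        def steering_matrix(n_sensors, grid_deg, spacing=0.5):
            # ULA steering vectors on a DOA grid (element spacing in wavelengths).
            k = np.arange(n_sensors)[:, None]
            theta = np.deg2rad(np.asarray(grid_deg))[None, :]
            return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

        def weighted_lasso(A, y, w, lam=0.1, n_iter=300):
            # min 0.5*||y - A x||^2 + lam * sum_i w_i |x_i| via complex ISTA.
            step = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1], dtype=complex)
            for _ in range(n_iter):
                u = x - step * (A.conj().T @ (A @ x - y))
                mag = np.abs(u)
                x = u * np.maximum(mag - step * lam * w, 0.0) / np.maximum(mag, 1e-12)
            return x

        def track(A, snapshots, lam=0.1, eps=1e-2):
            # Consecutive weighted LASSOs; small |x_i| -> large weight next step.
            w = np.ones(A.shape[1])
            for y in snapshots:
                x = weighted_lasso(A, y, w, lam)
                w = 1.0 / (np.abs(x) + eps)
                yield x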

    Sparse EEG Source Localization Using Bernoulli Laplacian Priors

    Source localization in electroencephalography has received an increasing amount of interest in the last decade. Solving the underlying ill-posed inverse problem usually requires choosing an appropriate regularization. The usual l2 norm has been widely considered and provides solutions with low computational complexity. However, in several situations, realistic brain activity is believed to be focused in a few focal areas. In these cases, the l2 norm is known to overestimate the activated spatial areas. One solution to this problem is to promote sparse solutions, for instance based on the l1 norm, which are easy to handle with optimization techniques. In this paper, we consider the use of an l0 + l1 norm to enforce sparse source activity (by ensuring the solution has few nonzero elements) while regularizing the nonzero amplitudes of the solution. More precisely, the l0 pseudo-norm handles the positions of the nonzero elements while the l1 norm constrains the values of their amplitudes. We use a Bernoulli–Laplace prior to introduce this combined l0 + l1 norm in a Bayesian framework. The proposed Bayesian model is shown to favor sparsity while jointly estimating the model hyperparameters using a Markov chain Monte Carlo sampling technique. We apply the model to both simulated and real EEG data, showing that the proposed method provides better results than the l2 and l1 norm regularizations in the presence of pointwise sources. A comparison with a recent method based on multiple sparse priors is also conducted.
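
    The claim that the l2 norm spreads activity while the l1 norm concentrates it is easy to reproduce on a toy underdetermined problem; the sketch below is only that motivating comparison (minimum-norm vs ISTA-LASSO), not the paper's Bernoulli–Laplace MCMC sampler, and all problem sizes and thresholds are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        m, n, k = 20, 100, 3                     # few sensors, many sources, 3 active
        A = rng.standard_normal((m, n))
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = 1.0
        y = A @ x_true

        # l2 (minimum-norm) solution: energy spread over nearly all sources
        x_l2 = A.T @ np.linalg.solve(A @ A.T, y)

        # l1 solution via ISTA: energy concentrated on a few sources
        lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
        x_l1 = np.zeros(n)
        for _ in range(2000):
            u = x_l1 - step * (A.T @ (A @ x_l1 - y))
            x_l1 = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)

        print("active sources, l2:", np.sum(np.abs(x_l2) > 1e-3))  # close to n
        print("active sources, l1:", np.sum(np.abs(x_l1) > 1e-3))  # close to k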

    A hierarchical sparsity-smoothness Bayesian model for ℓ0 + ℓ1 + ℓ2 regularization

    Sparse signal/image recovery is a challenging topic that has attracted great interest over the last decades. To address the ill-posedness of the related inverse problem, regularization is often essential, using appropriate priors that promote the sparsity of the target signal/image. In this context, ℓ0 + ℓ1 regularization has been widely investigated. In this paper, we introduce a new prior that simultaneously accounts for both the sparsity and the smoothness of restored signals. We use a Bernoulli-generalized Gauss-Laplace distribution to perform ℓ0 + ℓ1 + ℓ2 regularization in a Bayesian framework. Our results show the potential of the proposed approach, especially in restoring the non-zero coefficients of the signal/image of interest.
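
    One concrete way to read the combined penalty is through its elementwise proximal operator, given below in closed form; this is a deterministic MAP-style counterpart of the Bernoulli-generalized Gauss-Laplace prior, not the paper's sampling-based treatment, and the function name and weights lam0/lam1/lam2 are illustrative.

        import numpy as np

        def prox_l0_l1_l2(u, lam0, lam1, lam2, step=1.0):
            # Elementwise prox of step*(lam0*||x||_0 + lam1*||x||_1 + lam2*||x||_2^2):
            # soft threshold (l1), shrink the survivor (l2), then keep it only
            # if it beats the zero solution (l0).
            a, b, c = step * lam1, step * lam2, step * lam0
            t = np.sign(u) * np.maximum(np.abs(u) - a, 0.0) / (1.0 + 2.0 * b)
            obj_t = 0.5 * (u - t) ** 2 + a * np.abs(t) + b * t ** 2 + c * (t != 0)
            return np.where(obj_t <= 0.5 * u ** 2, t, 0.0)

    The division of labor matches the abstract: the ℓ0 term fixes which entries are nonzero, while the ℓ1 and ℓ2 terms regularize the surviving amplitudes.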