Sparse signal recovery using a Bernoulli generalized Gaussian prior
Bayesian sparse signal recovery has been widely investigated during the last decade due to its ability to automatically estimate regularization parameters. Priors based on mixtures of Bernoulli and continuous distributions have been used in a number of recent works to model the target signals, often leading to complicated posteriors; inference is therefore usually performed using Markov chain Monte Carlo algorithms. In this paper, a Bernoulli-generalized Gaussian distribution is used in a sparse Bayesian regularization framework to promote a two-level, flexible sparsity. Since the resulting conditional posterior has a non-differentiable energy function, inference is conducted using the recently proposed non-smooth Hamiltonian Monte Carlo algorithm. Promising results obtained with synthetic data show the efficiency of the proposed regularization scheme.
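The two-level sparsity of a Bernoulli-generalized Gaussian prior can be illustrated with a small sampler: a Bernoulli indicator decides whether a coefficient is exactly zero, and active coefficients follow a zero-mean generalized Gaussian. This is a minimal sketch, not the paper's code; the parameter names (`p`, `beta`, `scale`) are illustrative, and the generalized Gaussian is drawn via the standard Gamma trick.

```python
import numpy as np

def sample_bernoulli_gg(n, p=0.1, beta=1.5, scale=1.0, rng=None):
    """Draw n samples from a Bernoulli-generalized-Gaussian prior.

    Each entry is 0 with probability 1 - p; with probability p it is drawn
    from a zero-mean generalized Gaussian with shape beta, sampled via
    |x| = scale * G**(1/beta) with G ~ Gamma(1/beta, 1).
    """
    rng = np.random.default_rng(rng)
    support = rng.random(n) < p                 # Bernoulli spike/slab indicator
    g = rng.gamma(1.0 / beta, 1.0, size=n)      # Gamma(1/beta, 1) variates
    mag = scale * g ** (1.0 / beta)             # generalized-Gaussian magnitudes
    sign = rng.choice([-1.0, 1.0], size=n)      # symmetric about zero
    return support * sign * mag

x = sample_bernoulli_gg(10_000, p=0.1, beta=1.5, rng=0)
print(np.mean(x != 0))   # fraction of nonzeros, close to p
```

Varying `beta` moves between Laplacian-like (`beta = 1`) and Gaussian (`beta = 2`) tails for the active coefficients, which is the second level of flexibility the abstract refers to.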
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix. Comment: Under review by IEEE Transactions on Signal Processing.
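The hierarchical representation the abstract mentions can be sketched generatively: draw a variance from a mixing density, then draw a rectified (nonnegative-truncated) zero-mean Gaussian with that variance. The sketch below, with illustrative parameter names and assumptions, shows the two cases named in the abstract: exponential mixing yields a rectified Laplacian, and inverse-gamma mixing yields a rectified Student-t.

```python
import numpy as np

def sample_rgsm(n, mixing="exponential", rate=1.0, df=3.0, rng=None):
    """Draw n samples from a rectified Gaussian scale mixture (sketch).

    A variance v is drawn from a mixing density, then x is a zero-mean
    Gaussian with variance v truncated to [0, inf), sampled as |N(0, v)|.
    """
    rng = np.random.default_rng(rng)
    if mixing == "exponential":
        # exponential mixing -> rectified Laplacian marginal
        v = rng.exponential(1.0 / rate, size=n)
    elif mixing == "inverse-gamma":
        # Inv-Gamma(df/2, df/2) mixing -> rectified Student-t marginal
        v = 1.0 / rng.gamma(df / 2.0, 2.0 / df, size=n)
    else:
        raise ValueError(mixing)
    return np.abs(rng.normal(0.0, np.sqrt(v)))

x = sample_rgsm(5_000, mixing="exponential", rng=0)
assert np.all(x >= 0)   # nonnegativity enforced by the rectification
```

In the paper's EM framework the roles are reversed: the variances act as latent variables whose posterior statistics are computed in the E-step, which is where the four variants (MCMC-EM, LMMSE, AMP, diagonal approximation) differ.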
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems. Comment: 32 pages, 7 figures.
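The signal model described above (correlated support plus correlated amplitudes) can be simulated directly: each coefficient's activity follows a two-state Markov chain, and its amplitude follows a Gauss-Markov (AR-1) process. This is a hedged sketch of such a generative model, not DCS-AMP itself; the parameter names (`lam`, `p01`, `rho`) are illustrative, not the paper's notation.

```python
import numpy as np

def simulate_dynamic_sparse(n, T, lam=0.05, p01=0.05, rho=0.95, rng=None):
    """Simulate an n-dimensional sparse signal over T time steps with
    correlated support and amplitudes.

    Support: a two-state Markov chain per coefficient, with deactivation
    probability p01 and activation probability chosen so the steady-state
    activity is lam. Amplitude: an AR-1 process with correlation rho.
    The observed signal is support * amplitude at each time step.
    """
    rng = np.random.default_rng(rng)
    p10 = p01 * lam / (1.0 - lam)        # activation prob keeping Pr(active) = lam
    s = rng.random(n) < lam              # initial support
    theta = rng.normal(0.0, 1.0, n)      # initial amplitudes
    X = np.empty((n, T))
    for t in range(T):
        X[:, t] = s * theta
        flip_on = rng.random(n) < p10
        flip_off = rng.random(n) < p01
        s = np.where(s, ~flip_off, flip_on)
        theta = rho * theta + np.sqrt(1 - rho**2) * rng.normal(0.0, 1.0, n)
    return X

X = simulate_dynamic_sparse(200, 50, rng=0)
```

Filtering (causal) versus smoothing (non-causal) in DCS-AMP corresponds to running messages forward only, or forward and backward, along the time dimension of such a signal.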
Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing
For the problem of binary linear classification and feature selection, we
propose algorithmic approaches to classifier design based on the generalized
approximate message passing (GAMP) algorithm, recently proposed in the context
of compressive sensing. We are particularly motivated by problems where the
number of features greatly exceeds the number of training examples, but where
only a few features suffice for accurate classification. We show that
sum-product GAMP can be used to (approximately) minimize the classification
error rate and max-sum GAMP can be used to minimize a wide variety of
regularized loss functions. Furthermore, we describe an
expectation-maximization (EM)-based scheme to learn the associated model
parameters online, as an alternative to cross-validation, and we show that
GAMP's state-evolution framework can be used to accurately predict the
misclassification rate. Finally, we present a detailed numerical study to
confirm the accuracy, speed, and flexibility afforded by our GAMP-based
approaches to binary linear classification and feature selection.
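The objective that max-sum GAMP targets here, a regularized classification loss, can be illustrated with a plain solver. The sketch below minimizes an L1-regularized logistic loss by proximal gradient (ISTA) on a toy problem where features far outnumber training examples; it is only a reference baseline for the same objective, not the GAMP message-passing algorithm, and all names and sizes are illustrative.

```python
import numpy as np

def l1_logistic(A, y, lam=0.1, step=None, iters=500):
    """Minimize sum_i log(1 + exp(-y_i * a_i^T w)) + lam * ||w||_1
    by proximal gradient (ISTA). y entries are +/- 1."""
    m, n = A.shape
    if step is None:
        step = 4.0 / (np.linalg.norm(A, 2) ** 2)   # 1/L, since L <= ||A||_2^2 / 4
    w = np.zeros(n)
    for _ in range(iters):
        z = np.clip(y * (A @ w), -60.0, 60.0)      # clip to avoid exp overflow
        grad = -A.T @ (y / (1.0 + np.exp(z)))      # gradient of the logistic loss
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)   # soft threshold
    return w

# Toy setup: many features (n), few examples (m), few relevant features (k).
rng = np.random.default_rng(0)
m, n, k = 40, 200, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
w_true = np.zeros(n)
w_true[:k] = 5.0
y = np.sign(A @ w_true + 0.01 * rng.normal(size=m))
w_hat = l1_logistic(A, y, lam=0.05)
```

The soft-threshold step is what drives most coefficients to exactly zero, i.e., performs the feature selection; max-sum GAMP reaches a comparable fixed point via scalar messages rather than full gradients.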