Bayesian compressive sensing framework for spectrum reconstruction in Rayleigh fading channels
Compressive sensing (CS) is a novel digital signal processing technique that has found great interest in
many applications, including communication theory and wireless communications. In wireless communications, CS
is particularly well suited to spectrum sensing for cognitive radios, where the complete
spectrum under observation, with many spectral holes, can be modeled as a sparse wide-band signal in the frequency
domain. In the initial works that exploit the benefits of Bayesian CS in spectrum sensing, the fading
characteristic of wireless channels has not yet been considered to a great extent, although it is an inherent feature
of all wireless communications and must be accounted for in the design of any practically viable wireless system.
In this paper, we extend the Bayesian CS framework for the recovery of a sparse signal, whose nonzero coefficients follow
a Rayleigh distribution. It is then demonstrated via simulations that the mean square error improves significantly when
an appropriate prior distribution is used for the faded signal coefficients and thus, in turn, the spectrum reconstruction
improves. Different parameters of the system model, e.g., the sparsity level and the number of measurements, are then varied
to show the consistency of the results across different cases.
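A minimal sketch of the signal and measurement model described above, under illustrative assumptions about dimensions, fading scale, and noise level; the recovery step shown is only an oracle-support least-squares baseline, not the paper's Bayesian CS reconstruction with a Rayleigh prior:

    # Sparse spectrum with Rayleigh-distributed nonzero coefficients, observed
    # through a random Gaussian sensing matrix (y = Phi x + n).
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 256, 80, 10                 # signal length, measurements, sparsity (illustrative)
    sigma_fade, sigma_noise = 1.0, 0.05

    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x[support] = rng.rayleigh(scale=sigma_fade, size=K)   # Rayleigh-faded amplitudes

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)        # sub-Nyquist sensing matrix
    y = Phi @ x + sigma_noise * rng.standard_normal(M)    # noisy compressive measurements

    # Oracle least squares on the true support as a simple MSE baseline; the
    # paper's Bayesian recovery with a Rayleigh prior targets the same metric.
    x_hat = np.zeros(N)
    x_hat[support], *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    print("oracle-support MSE:", np.mean((x_hat - x) ** 2))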
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
In this work, the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the results of applying DCS-AMP to
two real dynamic CS datasets, as well as to a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.
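For orientation, the AMP recursion that DCS-AMP builds on can be sketched for a static CS problem. The version below uses a simple soft-threshold denoiser and is only a generic AMP iteration, not DCS-AMP itself, which replaces the denoiser with Bernoulli-Gaussian message passing, couples amplitudes and support across time, and learns its parameters by EM; the threshold rule and iteration count here are assumptions for illustration.

    import numpy as np

    def soft_threshold(v, t):
        # Elementwise soft-threshold denoiser used inside the AMP loop.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def amp_recover(y, A, iterations=30, alpha=1.5):
        # Generic AMP for y = A x + noise with a soft-threshold denoiser.
        M, N = A.shape
        x = np.zeros(N)
        z = y.copy()                                   # residual
        for _ in range(iterations):
            r = x + A.T @ z                            # pseudo-data passed to the denoiser
            tau = alpha * np.sqrt(np.mean(z ** 2))     # threshold tracks residual energy
            x = soft_threshold(r, tau)
            onsager = z * (np.count_nonzero(x) / M)    # Onsager correction term
            z = y - A @ x + onsager
        return x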
Bayesian Compressed Regression
As an alternative to variable selection or shrinkage in high dimensional
regression, we propose to randomly compress the predictors prior to analysis.
This dramatically reduces storage and computational bottlenecks, performing
well when the predictors can be projected to a low dimensional linear subspace
with minimal loss of information about the response. As opposed to existing
Bayesian dimensionality reduction approaches, the exact posterior distribution
conditional on the compressed data is available analytically, speeding up
computation by many orders of magnitude while also bypassing robustness issues
due to convergence and mixing problems with MCMC. Model averaging is used to
reduce sensitivity to the random projection matrix, while accommodating
uncertainty in the subspace dimension. Strong theoretical support is provided
for the approach by showing near parametric convergence rates for the
predictive density in the large p small n asymptotic paradigm. Practical
performance relative to competitors is illustrated in simulations and real data
applications.
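A rough sketch of the compress-then-regress idea, under assumptions of my own: Gaussian random projections, a conjugate Gaussian prior on the compressed coefficients (so the posterior mean is available in closed form), and equal-weight averaging over projections rather than the paper's model averaging over subspace dimensions.

    import numpy as np

    def compressed_bayes_predict(X, y, X_new, m=20, n_proj=10, tau2=1.0, sigma2=1.0, seed=0):
        # Average predictive means over random projections of the predictors.
        rng = np.random.default_rng(seed)
        n, p = X.shape
        preds = []
        for _ in range(n_proj):
            Phi = rng.standard_normal((m, p)) / np.sqrt(m)    # random compression matrix
            Z, Z_new = X @ Phi.T, X_new @ Phi.T               # compressed predictors
            A = Z.T @ Z / sigma2 + np.eye(m) / tau2           # posterior precision (conjugate prior)
            beta_mean = np.linalg.solve(A, Z.T @ y / sigma2)  # closed-form posterior mean
            preds.append(Z_new @ beta_mean)
        return np.mean(preds, axis=0)

Because each projection yields an analytic posterior, the loop above involves only small m-by-m linear solves, which is the source of the computational savings the abstract describes.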
Sparse Estimation using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been
used to model sparsity-inducing priors that realize a class of concave penalty
functions for the regression task in real-valued signal models. Motivated by
the relative scarcity of formal tools for SBL in complex-valued models, this
paper proposes a GSM model - the Bessel K model - that induces concave penalty
functions for the estimation of complex sparse signals. The properties of the
Bessel K model are analyzed when it is applied to Type I and Type II
estimation. This analysis reveals that, by tuning the parameters of the mixing
pdf, different penalty functions are invoked depending on the estimation type
used, the value of the noise variance, and whether real or complex signals are
estimated. Using the Bessel K model, we derive a sparse estimator based on a
modification of the expectation-maximization algorithm formulated for Type II
estimation. The estimator includes as a special instance the algorithms
proposed by Tipping and Faul [1] and by Babacan et al. [2]. Numerical results
show the superiority of the proposed estimator over these state-of-the-art
estimators in terms of convergence speed, sparseness, reconstruction error, and
robustness in low and medium signal-to-noise ratio regimes.
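For context, a standard Type II SBL expectation-maximization loop for a complex-valued linear model is sketched below. The Bessel K model of the paper changes the hyperparameter update through its mixing pdf, so the scale update shown here is only the classical SBL placeholder, and the noise variance is assumed known.

    import numpy as np

    def sbl_complex(A, y, sigma2=1e-2, iters=50):
        # Generic Type II SBL EM loop for a complex model y = A x + noise.
        M, N = A.shape
        gamma = np.ones(N)                                     # per-coefficient prior variances
        for _ in range(iters):
            Sigma = np.linalg.inv(A.conj().T @ A / sigma2 + np.diag(1.0 / gamma))
            mu = Sigma @ A.conj().T @ y / sigma2               # posterior mean (E-step)
            gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))  # classical M-step for the scales
        return mu, gamma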
Fast Stochastic Hierarchical Bayesian MAP for Tomographic Imaging
Any image recovery algorithm attempts to achieve the highest quality
reconstruction in a timely manner. The former can be achieved in several ways,
one of which is to incorporate Bayesian priors that exploit natural image
tendencies to cue in on relevant phenomena. The Hierarchical Bayesian MAP
(HB-MAP) is one such approach which is known to produce compelling results
albeit at a substantial computational cost. We look to provide further analysis
and insights into what makes the HB-MAP work. While retaining the proficient
nature of HB-MAP's Type-I estimation, we propose a stochastic
approximation-based approach to Type-II estimation. The resulting algorithm,
fast stochastic HB-MAP (fsHBMAP), takes dramatically fewer operations while
retaining high reconstruction quality. We employ our fsHBMAP scheme towards the
problem of tomographic imaging and demonstrate that fsHBMAP furnishes promising
results when compared to many competing methods.
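The abstract does not spell out which Type-II quantities are approximated stochastically, so the snippet below only illustrates the general flavor of such approximations: a Hutchinson-style randomized trace estimate that replaces an exact trace with matrix-vector products, a common shortcut when covariance matrices are too large to form explicitly in imaging problems. The function name and probe count are assumptions, not part of fsHBMAP.

    import numpy as np

    def hutchinson_trace(matvec, n, probes=20, seed=0):
        # Randomized estimate of trace(B) using only products v -> B v.
        rng = np.random.default_rng(seed)
        est = 0.0
        for _ in range(probes):
            v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
            est += v @ matvec(v)
        return est / probes

    # Example: estimate the trace of a large matrix B given only a matvec handle,
    # e.g. hutchinson_trace(lambda v: B @ v, n=B.shape[0]).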
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix.
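As a rough illustration of the prior family (not the R-SBL estimator itself), the sketch below draws samples from a rectified Gaussian scale mixture: a scale is drawn from a mixing density, and a zero-mean Gaussian draw with that scale is restricted to the nonnegative axis. The specific mixing densities used here (exponential for a rectified-Laplacian flavor, inverse-gamma for a rectified Student-t flavor) follow standard GSM constructions and are assumptions about suitable choices.

    import numpy as np

    def sample_rgsm(n, mixing="laplacian", rng=None):
        # Draw n samples from a rectified Gaussian scale mixture prior.
        rng = rng or np.random.default_rng()
        if mixing == "laplacian":
            scales = rng.exponential(scale=1.0, size=n)              # exponential mixing density
        else:
            scales = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=n)   # inverse-gamma mixing density
        z = rng.normal(0.0, np.sqrt(scales))                          # Gaussian scale mixture draw
        return np.abs(z)                                              # rectification onto x >= 0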
Compressed sensing reconstruction using Expectation Propagation
Many interesting problems in fields ranging from telecommunications to
computational biology can be formalized in terms of large underdetermined
systems of linear equations with additional constraints or regularizers. One of
the most studied of these, the Compressed Sensing (CS) problem, consists in finding
the solution with the smallest number of non-zero components of a given system
of linear equations, for a known measurement vector and sensing matrix. Here, we
will address the compressed sensing problem within a Bayesian inference
framework where the sparsity constraint is remapped into a singular prior
distribution (called Spike-and-Slab or Bernoulli-Gauss). A solution to the
problem is attempted through the computation of marginal distributions via
Expectation Propagation (EP), an iterative computational scheme originally
developed in Statistical Physics. We will show that this strategy is
comparatively more accurate than the alternatives in solving instances of CS
generated from statistically correlated measurement matrices. For computational
strategies based on the Bayesian framework such as variants of Belief
Propagation, this is to be expected, as they implicitly rely on the hypothesis
of statistical independence among the entries of the sensing matrix. Perhaps
surprisingly, the method also uniformly outperforms all the other
state-of-the-art methods in our tests.
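The moment-matching step at the heart of an EP sweep with a spike-and-slab prior can be written in closed form; the sketch below computes the mean and variance of a Gaussian cavity N(x; m, v) multiplied by the prior (1 - rho) * delta(x) + rho * N(x; 0, s2). The parameter names are illustrative, and the full EP scheme of the paper additionally maintains and refines Gaussian approximations of the likelihood factors.

    import numpy as np

    def norm_pdf(x, mean, var):
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def spike_slab_moments(m, v, rho=0.1, s2=1.0):
        # Moments of the tilted distribution: cavity N(x; m, v) times the
        # spike-and-slab prior (1 - rho) * delta(x) + rho * N(x; 0, s2).
        z_spike = (1 - rho) * norm_pdf(0.0, m, v)      # evidence of the spike component
        z_slab = rho * norm_pdf(0.0, m, v + s2)        # evidence of the slab component
        w = z_slab / (z_spike + z_slab)                # posterior slab responsibility
        mu_slab = m * s2 / (s2 + v)                    # mean of slab-times-cavity Gaussian product
        var_slab = v * s2 / (s2 + v)
        mean = w * mu_slab                             # spike contributes zero mean
        second = w * (var_slab + mu_slab ** 2)
        return mean, second - mean ** 2                # tilted mean and variance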