Bayesian orthogonal component analysis for sparse representation
This paper addresses the problem of identifying a lower dimensional space
where observed data can be sparsely represented. This under-complete dictionary
learning task can be formulated as a blind separation problem of sparse sources
linearly mixed with an unknown orthogonal mixing matrix. The problem is
addressed in a Bayesian framework. First, the unknown sparse sources are
modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted
mixture of an atom at zero and a Gaussian distribution is proposed as prior
distribution for the unobserved sources. A non-informative prior distribution
defined on an appropriate Stiefel manifold is selected for the mixing matrix.
The Bayesian inference on the unknown parameters is conducted using a Markov
chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is
designed to generate samples asymptotically distributed according to the joint
posterior distribution of the unknown model parameters and hyperparameters.
These samples are then used to approximate the joint maximum a posteriori
estimator of the sources and mixing matrix. Simulations conducted on synthetic
data are reported to illustrate the performance of the method for recovering
sparse representations. An application to sparse coding on an under-complete
dictionary is finally investigated.
Comment: Revised version. Accepted to IEEE Trans. Signal Processing.
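The Bernoulli-Gaussian source model described in this abstract (a weighted mixture of an atom at zero and a Gaussian distribution) can be sketched in a few lines. The mixture weight `w` and variance `sigma2` below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bernoulli_gaussian(n, w=0.2, sigma2=1.0):
    """Draw n samples from a Bernoulli-Gaussian (spike-and-slab) prior:
    with probability 1 - w a source is exactly zero (the atom at zero),
    with probability w it is zero-mean Gaussian with variance sigma2."""
    active = rng.random(n) < w                      # Bernoulli indicators
    values = rng.normal(0.0, np.sqrt(sigma2), n)    # Gaussian slab
    return np.where(active, values, 0.0)

x = sample_bernoulli_gaussian(1000)
print(np.mean(x == 0.0))  # fraction of exact zeros, close to 1 - w
```

The exact zeros are what make this prior sparsity-promoting, in contrast to purely continuous heavy-tailed priors that only concentrate mass near zero.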
Sparse Estimation using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been
used to model sparsity-inducing priors that realize a class of concave penalty
functions for the regression task in real-valued signal models. Motivated by
the relative scarcity of formal tools for SBL in complex-valued models, this
paper proposes a GSM model - the Bessel K model - that induces concave penalty
functions for the estimation of complex sparse signals. The properties of the
Bessel K model are analyzed when it is applied to Type I and Type II
estimation. This analysis reveals that, by tuning the parameters of the mixing
pdf, different penalty functions are invoked depending on the estimation type
used, the value of the noise variance, and whether real or complex signals are
estimated. Using the Bessel K model, we derive a sparse estimator based on a
modification of the expectation-maximization algorithm formulated for Type II
estimation. The estimator includes as a special instance the algorithms
proposed by Tipping and Faul [1] and by Babacan et al. [2]. Numerical results
show the superiority of the proposed estimator over these state-of-the-art
estimators in terms of convergence speed, sparseness, reconstruction error, and
robustness in low and medium signal-to-noise ratio regimes.
Comment: The paper provides a new comprehensive analysis of the theoretical
foundations of the proposed estimators. Minor modification of the title.
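The Gaussian scale mixture construction analyzed in this abstract can be sampled hierarchically: draw a variance from a mixing pdf, then a conditionally Gaussian value. The gamma mixing density and its parameters below are an illustrative choice for a heavy-tailed, sparsity-inducing marginal, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

def gsm_samples(n, shape=0.5, scale=1.0):
    """Gaussian scale mixture: draw a variance from a gamma mixing pdf,
    then a zero-mean Gaussian with that variance. Small shape values
    concentrate the variances near zero, yielding heavier tails and a
    sharper peak at the origin than a plain Gaussian."""
    var = rng.gamma(shape, scale, n)          # mixing pdf over the variance
    return rng.normal(0.0, np.sqrt(var))      # conditionally Gaussian draw

x = gsm_samples(100_000)
g = rng.normal(0.0, x.std(), 100_000)         # Gaussian with matched variance
# the GSM puts far more mass in the tails than the matched Gaussian
print(np.mean(np.abs(x) > 3 * x.std()), np.mean(np.abs(g) > 3 * g.std()))
```

The positive excess kurtosis of such mixtures is what makes them useful as sparsity-inducing priors in SBL.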
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix.
Comment: Under review by IEEE Transactions on Signal Processing.
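For context on the S-NNLS objective itself, a minimal projected-gradient baseline for non-negative least squares looks as follows. This is a generic textbook solver, not the proposed R-SBL method, and the problem sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def nnls_pgd(A, y, iters=5000):
    """Projected-gradient solver for min_{x >= 0} ||y - A x||^2:
    gradient step with step size 1/||A||_2^2, then projection onto
    the non-negative orthant via element-wise clipping at zero."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - y)))
    return x

A = rng.normal(size=(20, 10))
x_true = np.maximum(0.0, rng.normal(size=10))  # non-negative, partly zero
y = A @ x_true
x_hat = nnls_pgd(A, y)
print(np.linalg.norm(A @ x_hat - y))           # residual shrinks toward zero
```

Such plain solvers find a feasible minimizer but carry no explicit sparsity prior; the R-GSM hierarchy is what lets the abstract's method trade off signal and support recovery.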