Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix.
Comment: Under review by IEEE Transactions on Signal Processing
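The hierarchical R-GSM construction described above can be illustrated with a small sampling sketch. This is not the authors' code: the mixing density is assumed exponential here, and the rectified Gaussian is taken as a zero-mean Gaussian restricted to the non-negative orthant; the function name and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rgsm(n, mixing_scale=1.0):
    """Draw n samples from a Rectified Gaussian Scale Mixture prior.

    Hierarchical form assumed here: gamma_i ~ Exponential(mixing_scale) acts as a
    per-coefficient variance drawn from the mixing density, and x_i is a zero-mean
    Gaussian with that variance restricted to the non-negative orthant.
    """
    gamma = rng.exponential(scale=mixing_scale, size=n)       # variances from the mixing density
    return np.abs(rng.normal(loc=0.0, scale=np.sqrt(gamma)))  # rectified (non-negative) Gaussian draw

samples = sample_rgsm(100_000)
print(f"min = {samples.min():.3f}, 99.9th percentile = {np.quantile(samples, 0.999):.2f}")
```

Mixing over the variance is what produces the heavy tail: a different mixing density (e.g. inverse-Gamma) would yield a different member of the R-GSM family, such as a rectified Student-t.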
Bayesian Compressed Regression
As an alternative to variable selection or shrinkage in high dimensional
regression, we propose to randomly compress the predictors prior to analysis.
This dramatically reduces storage and computational bottlenecks, performing
well when the predictors can be projected to a low dimensional linear subspace
with minimal loss of information about the response. As opposed to existing
Bayesian dimensionality reduction approaches, the exact posterior distribution
conditional on the compressed data is available analytically, speeding up
computation by many orders of magnitude while also bypassing robustness issues
due to convergence and mixing problems with MCMC. Model averaging is used to
reduce sensitivity to the random projection matrix, while accommodating
uncertainty in the subspace dimension. Strong theoretical support is provided
for the approach by showing near parametric convergence rates for the
predictive density in the large p small n asymptotic paradigm. Practical
performance relative to competitors is illustrated in simulations and real data
applications.
Comment: 29 pages, 4 figures
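The compression-then-conjugate-regression idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the data dimensions, prior variances, and the single random projection are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: p >> n predictors lying in a low-dimensional linear subspace (rank k).
n, p, k, m = 50, 1000, 5, 10            # m = compressed dimension (illustrative choice)
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, p))
beta = np.zeros(p); beta[:5] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)

# Randomly compress the predictors prior to analysis.
Phi = rng.normal(size=(p, m)) / np.sqrt(m)    # random projection matrix
Z = X @ Phi                                   # compressed design, n x m

# Conjugate Bayesian linear regression on (Z, y): with a N(0, tau2*I) prior on the
# compressed coefficients and N(0, sigma2) Gaussian noise, the posterior is
# available in closed form -- no MCMC needed.
tau2, sigma2 = 1.0, 0.01
A = Z.T @ Z / sigma2 + np.eye(m) / tau2       # posterior precision
post_mean = np.linalg.solve(A, Z.T @ y / sigma2)

y_hat = Z @ post_mean
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample R^2 after compression: {r2:.3f}")
```

Because the predictors are near a low-dimensional subspace, projecting from p = 1000 down to m = 10 loses little information about the response. The model averaging step of the abstract would repeat this fit over several draws of Phi and average the resulting predictive distributions.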
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes
We consider continuous-time sparse stochastic processes from which we have
only a finite number of noisy/noiseless samples. Our goal is to estimate the
noiseless samples (denoising) and the signal in-between (interpolation
problem).
By relying on tools from the theory of splines, we derive the joint a priori
distribution of the samples and show how this probability density function can
be factorized. The factorization enables us to tractably implement the maximum
a posteriori and minimum mean-square error (MMSE) criteria as two statistical
approaches for estimating the unknowns. We compare the derived statistical
methods with well-known techniques for the recovery of sparse signals, such as
the ℓ1-norm and Log (ℓ1-ℓ0 relaxation) regularization
methods. The simulation results show that, under certain conditions, the
performance of the regularization techniques can be very close to that of the
MMSE estimator.
Comment: To appear in IEEE TS
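As a point of reference for the regularization baselines mentioned above: for direct noisy samples (identity forward model) with a quadratic data term, ℓ1-norm regularization reduces to elementwise soft-thresholding. The sketch below is illustrative only; the sparse signal model and threshold value are assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of lam * ||x||_1 -- the l1-regularized denoiser
    for an identity forward model with quadratic data fidelity."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(2)
x = rng.laplace(scale=1.0, size=1000) * (rng.random(1000) < 0.2)  # sparse signal samples
y = x + 0.3 * rng.normal(size=1000)                               # noisy observations
x_hat = soft_threshold(y, lam=0.3)

print("MSE of noisy samples:   ", np.mean((y - x) ** 2))
print("MSE after l1 denoising: ", np.mean((x_hat - x) ** 2))
```

The MMSE estimator of the paper instead integrates over the derived joint prior of the samples, which is why it can outperform such thresholding rules except in the regimes where the two nearly coincide.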
Exploring multimodal data fusion through joint decompositions with flexible couplings
A Bayesian framework is proposed to define flexible coupling models for joint
tensor decompositions of multiple data sets. Under this framework, a natural
formulation of the data fusion problem is to cast it in terms of a joint
maximum a posteriori (MAP) estimator. Data-driven scenarios of joint posterior
distributions are provided, including general Gaussian priors and non-Gaussian
coupling priors. We present and discuss implementation issues of the algorithms
used to obtain the joint MAP estimator. We also show how this framework can be
adapted to tackle the problem of joint decompositions of large datasets. In the
case of a conditional Gaussian coupling with a linear transformation, we give
theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao
bound. Simulations are reported for hybrid coupling models ranging from simple
additive Gaussian models to Gamma-type models with positive variables, and to
the coupling of data sets that are inherently of different sizes due to the
different resolutions of the measurement devices.
Comment: 15 pages, 7 figures, revised version
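The simplest of the coupling models above, an additive Gaussian coupling between one factor of each data set, can be sketched as a joint MAP problem solved by alternating least squares. This is a minimal illustration under assumed dimensions and penalty weight, not the paper's algorithms, and it uses matrix (rather than tensor) factorizations for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two data sets whose second factors are coupled: B2 = B1 + small Gaussian perturbation.
n1, n2, p, r = 40, 30, 20, 3
B1 = rng.normal(size=(p, r))
B2 = B1 + 0.05 * rng.normal(size=(p, r))       # additive Gaussian coupling
A1, A2 = rng.normal(size=(n1, r)), rng.normal(size=(n2, r))
Y1 = A1 @ B1.T + 0.01 * rng.normal(size=(n1, p))
Y2 = A2 @ B2.T + 0.01 * rng.normal(size=(n2, p))

# Joint MAP under the additive Gaussian coupling: alternating least-squares updates
# minimizing ||Y1 - A1 B1'||^2 + ||Y2 - A2 B2'||^2 + lam * ||B1 - B2||^2.
lam = 10.0                                     # coupling strength (illustrative)
A1h, A2h = rng.normal(size=(n1, r)), rng.normal(size=(n2, r))
B1h, B2h = rng.normal(size=(p, r)), rng.normal(size=(p, r))
for _ in range(200):
    A1h = Y1 @ B1h @ np.linalg.inv(B1h.T @ B1h)                                  # LS update of A1
    A2h = Y2 @ B2h @ np.linalg.inv(B2h.T @ B2h)                                  # LS update of A2
    B1h = np.linalg.solve(A1h.T @ A1h + lam * np.eye(r), A1h.T @ Y1 + lam * B2h.T).T
    B2h = np.linalg.solve(A2h.T @ A2h + lam * np.eye(r), A2h.T @ Y2 + lam * B1h.T).T

res1 = np.linalg.norm(Y1 - A1h @ B1h.T) / np.linalg.norm(Y1)
res2 = np.linalg.norm(Y2 - A2h @ B2h.T) / np.linalg.norm(Y2)
print(f"relative residuals: {res1:.3f}, {res2:.3f}")
```

The coupling penalty pools information across the two decompositions without forcing the factors to be identical; as lam grows, the model approaches a hard (shared-factor) coupling, and as lam shrinks, the decompositions become independent.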