Sparse Non-Negative Recovery from Biased Subgaussian Measurements using NNLS
We investigate non-negative least squares (NNLS) for the recovery of sparse
non-negative vectors from noisy linear and biased measurements. We build upon
recent results from [1] showing that for matrices whose row-span intersects the
positive orthant, the nullspace property (NSP) implies compressed sensing
recovery guarantees for NNLS. Such results are as good as those for
$\ell_1$-regularized estimators but do not require tuning parameters that
depend on the noise level. A bias in the sensing matrix improves this
auto-regularization feature of NNLS, so that the sparse recovery performance
is then determined by the NSP alone. We show that the NSP holds with high
probability for biased subgaussian matrices and that its quality is
independent of the bias.

Comment: 8 pages, 3 figures (proofs simplified)
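The parameter-free recovery described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's experiment: the dimensions, bias value, and noise level are arbitrary choices, and SciPy's `nnls` stands in for the NNLS estimator. A constant shift of the Gaussian entries plays the role of the bias, which puts the all-ones direction of the row-span into the positive orthant.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n, m, s = 100, 60, 5   # ambient dimension, measurements, sparsity (illustrative)
bias = 1.0             # positive shift playing the role of the bias

# Biased subgaussian sensing matrix: Gaussian entries shifted by a constant.
A = rng.standard_normal((m, n)) + bias

# Sparse nonnegative ground truth.
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=s)

# Noisy linear measurements.
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Plain NNLS: minimize ||A x - y||_2 subject to x >= 0 -- no tuning parameter,
# no knowledge of the noise level.
x_hat, _ = nnls(A, y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative error:", rel_err)
```

Note that the only problem knowledge the solver uses is nonnegativity; there is no regularization weight to tune against the noise level.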
Robust Recovery of Sparse Nonnegative Weights from Mixtures of Positive-Semidefinite Matrices
We consider a structured estimation problem where an observed matrix is
assumed to be generated as an $s$-sparse linear combination of $n$ given
positive-semidefinite matrices. Recovering the unknown
$n$-dimensional and $s$-sparse weights from noisy observations is an important
problem in various fields of signal processing and also a relevant
pre-processing step in covariance estimation. We will present related recovery
guarantees and focus on the case of nonnegative weights. The problem is
formulated as a convex program and can be solved without further tuning. Such
robust, non-Bayesian and parameter-free approaches are important for
applications where prior distributions and further model parameters are
unknown. Motivated by explicit applications in wireless communication, we will
consider the particular rank-one case, where the known matrices are outer
products of i.i.d. zero-mean subgaussian $m$-dimensional complex vectors. We
show that, for given problem dimensions, one can recover nonnegative
$s$-sparse weights with a parameter-free convex program once sufficiently
many such rank-one measurements are available. Our
error estimate scales linearly in the instantaneous noise power, and the
convex algorithm does not need prior bounds on the noise. Such estimates are
important when the magnitude of the additive distortion depends on the unknown
itself.

Comment: 13 pages, 3 figures
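The rank-one setting above can also be sketched as a small simulation. This is an assumed toy instance (dimensions, noise level, and the use of SciPy's `nnls` as the parameter-free nonnegative solver are my choices, not the paper's): the observation is an $s$-sparse nonnegative combination of known rank-one matrices $a_j a_j^{\mathsf{H}}$, and the weights are recovered by vectorizing the measurement operator and solving a nonnegative least-squares problem.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

n, m, s = 50, 16, 3   # number of weights, vector dimension, sparsity (illustrative)

# Known rank-one factors: i.i.d. zero-mean complex Gaussian vectors a_j.
a = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)

# Sparse nonnegative ground-truth weights.
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=s)

# Observed matrix: sparse combination of the rank-one matrices, plus noise.
Y = sum(x_true[j] * np.outer(a[j], a[j].conj()) for j in range(n))
Y = Y + 0.01 * rng.standard_normal((m, m))

# Vectorize the linear map x -> sum_j x_j a_j a_j^H, split into real/imag parts,
# and solve nonnegative least squares -- again with no tuning parameter.
M = np.stack([np.outer(a[j], a[j].conj()).ravel() for j in range(n)], axis=1)
M_real = np.vstack([M.real, M.imag])
y_real = np.concatenate([Y.ravel().real, Y.ravel().imag])

x_hat, _ = nnls(M_real, y_real)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative error:", rel_err)
```

As in the first abstract, the solver needs no prior bound on the noise; the error of the nonnegative fit simply scales with the distortion added to the observed matrix.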