    Sparse Non-Negative Recovery from Biased Subgaussian Measurements using NNLS

    We investigate non-negative least squares (NNLS) for the recovery of sparse non-negative vectors from noisy, biased linear measurements. We build upon recent results from [1] showing that, for matrices whose row span intersects the positive orthant, the nullspace property (NSP) implies compressed sensing recovery guarantees for NNLS. Such guarantees are as good as those for $\ell_1$-regularized estimators but do not require tuning parameters that depend on the noise level. A bias in the sensing matrix improves this auto-regularization feature of NNLS, and the sparse recovery performance is then determined by the NSP alone. We show that the NSP holds with high probability for biased subgaussian matrices and that its quality is independent of the bias.

    Comment: 8 pages, 3 figures (proofs simplified)
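    To make the tuning-free character of the estimator concrete, here is a minimal Python sketch of the setting described in the abstract: the dimensions, bias, and noise level are illustrative assumptions, not values from the paper, and `scipy.optimize.nnls` stands in for whatever solver the authors use.

    ```python
    # Hypothetical sketch: NNLS recovery of a sparse non-negative vector from
    # biased subgaussian (here Gaussian) measurements. All dimensions, the bias,
    # and the noise level are illustrative assumptions.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)

    m, n, s = 100, 300, 5          # measurements, ambient dimension, sparsity
    bias = 1.0                     # constant shift added to every matrix entry

    # Biased subgaussian sensing matrix: standard Gaussian entries plus a bias,
    # so the row span intersects the positive orthant (the setting of the paper).
    A = rng.standard_normal((m, n)) + bias

    # Ground-truth s-sparse non-negative vector.
    x_true = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x_true[support] = rng.uniform(1.0, 2.0, size=s)

    # Noisy measurements; NNLS needs no knowledge of the noise level.
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    # Tuning-free NNLS estimator: minimize ||A x - y||_2 subject to x >= 0.
    x_hat, residual = nnls(A, y)

    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```

    Note that, unlike an $\ell_1$-regularized estimator, no regularization parameter appears anywhere in the sketch; the non-negativity constraint together with the bias does the regularizing.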

    Robust Recovery of Sparse Nonnegative Weights from Mixtures of Positive-Semidefinite Matrices

    We consider a structured estimation problem where an observed matrix is assumed to be generated as an $s$-sparse linear combination of $N$ given $n \times n$ positive-semidefinite matrices. Recovering the unknown $N$-dimensional, $s$-sparse weight vector from noisy observations is an important problem in various fields of signal processing and a relevant pre-processing step in covariance estimation. We present related recovery guarantees and focus on the case of nonnegative weights. The problem is formulated as a convex program and can be solved without further tuning. Such robust, non-Bayesian, and parameter-free approaches are important for applications where prior distributions and further model parameters are unknown. Motivated by explicit applications in wireless communication, we consider the particular rank-one case, where the known matrices are outer products of i.i.d. zero-mean subgaussian $n$-dimensional complex vectors. We show that, for given $n$ and $N$, one can recover nonnegative $s$-sparse weights with a parameter-free convex program once $s \leq O(n^2 / \log^2(N/n^2))$. Our error estimate scales linearly with the instantaneous noise power, so the convex algorithm does not require prior bounds on the noise. Such estimates are important if the magnitude of the additive distortion depends on the unknown itself.

    Comment: 13 pages, 3 figures
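    As a concrete illustration of the parameter-free formulation, the following Python sketch casts the rank-one case as an NNLS program over vectorized Hermitian atoms. The dimensions and noise level are illustrative assumptions, not values from the paper, and `scipy.optimize.nnls` is used as a generic stand-in for the convex solver.

    ```python
    # Hypothetical sketch of the rank-one case: recover a non-negative s-sparse
    # weight vector w from Y = sum_j w_j a_j a_j^H + noise, where the a_j are
    # i.i.d. complex standard Gaussian vectors. Dimensions and noise level are
    # illustrative assumptions.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)

    n, N, s = 12, 200, 4           # matrix size, number of atoms, sparsity

    # Rank-one PSD atoms a_j a_j^H from i.i.d. complex Gaussian vectors a_j.
    a = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    atoms = np.stack([np.outer(v, v.conj()) for v in a])   # shape (N, n, n)

    # Ground-truth non-negative s-sparse weights and the noisy observation.
    w_true = np.zeros(N)
    w_true[rng.choice(N, size=s, replace=False)] = rng.uniform(1.0, 2.0, size=s)
    noise = 0.01 * rng.standard_normal((n, n))
    Y = np.tensordot(w_true, atoms, axes=1) + (noise + noise.T) / 2

    # The convex program  min ||sum_j w_j A_j - Y||_F  s.t.  w >= 0  is an NNLS
    # problem over the real vectorizations of the Hermitian atoms.
    def vec_real(M):
        return np.concatenate([M.real.ravel(), M.imag.ravel()])

    D = np.column_stack([vec_real(A_j) for A_j in atoms])   # (2 n^2, N) design
    w_hat, _ = nnls(D, vec_real(Y))

    print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
    ```

    As in the first paper, no noise bound or regularization weight enters the program, which is exactly the robustness property the abstract emphasizes.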