
    An empirical eigenvalue-threshold test for sparsity level estimation from compressed measurements

    Compressed sensing allows for a significant reduction in the number of measurements when the signal of interest is sparse. Most computationally efficient algorithms for signal recovery rely on some knowledge of the sparsity level, i.e., the number of non-zero elements. However, the sparsity level is often not known a priori and can even vary with time. In this contribution we show that it is possible to estimate the sparsity level directly in the compressed domain, provided that multiple independent observations are available. In fact, classical model order selection algorithms can be used for this purpose. Nevertheless, due to the influence of the measurement process, they may not perform satisfactorily in the compressed sensing setup. To overcome this drawback, we propose an approach which exploits the empirical distributions of the noise eigenvalues. We demonstrate numerically that it outperforms state-of-the-art model order estimation algorithms. © 2014 EURASIP
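    A minimal numerical sketch of the multiple-snapshot setting (all dimensions, the noise level, and the fixed 5σ² cut-off are illustrative choices; the paper's test replaces this crude fixed threshold with the empirical distributions of the noise eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: signal dim, measurements, sparsity, snapshots
n, m, k, T = 64, 32, 4, 200
sigma = 0.05  # noise standard deviation

# T independent snapshots sharing the same sparse support
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
X = np.zeros((n, T))
X[support] = rng.standard_normal((k, T))
Y = A @ X + sigma * rng.standard_normal((m, T))

# Sample covariance of the compressed observations; the k signal
# eigenvalues sit well above the noise floor around sigma^2
R = Y @ Y.T / T
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Naive eigenvalue-threshold rule: count eigenvalues above a
# multiple of the nominal noise variance
k_hat = int(np.sum(eigvals > 5 * sigma**2))
print(k_hat)  # recovers the sparsity level k = 4
```

Note that the sparsity level is estimated without ever running a sparse recovery algorithm, which is the point of working directly in the compressed domain.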

    Sparsity Order Estimation from a Single Compressed Observation Vector

    We investigate the problem of estimating the unknown degree of sparsity from compressive measurements without the need to carry out a sparse recovery step. While the sparsity order can be directly inferred from the effective rank of the observation matrix in the multiple snapshot case, this appears to be impossible in the more challenging single snapshot case. We show that specially designed measurement matrices allow the measurement vector to be rearranged into a matrix whose effective rank coincides with the effective sparsity order. In fact, we prove that matrices composed of a Khatri-Rao product of smaller matrices generate measurements from which the sparsity order can be inferred. Moreover, if some samples are used more than once, one of the matrices needs to be Vandermonde. These structural constraints reduce the degrees of freedom in choosing the measurement matrix, which may incur a degradation in the achievable coherence. We thus also address suitable choices of the measurement matrices. In particular, we analyze Khatri-Rao and Vandermonde matrices in terms of their coherence and provide a new design for Vandermonde matrices that achieves a low coherence.
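    A small sketch of the rearrangement idea for Khatri-Rao measurement matrices (dimensions are illustrative, and generic Gaussian factors stand in for the coherence-optimized designs discussed in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: factor sizes, signal dim, sparsity order
p, q, n, k = 6, 7, 40, 3

# Khatri-Rao (column-wise Kronecker) measurement matrix of size pq x n
A = rng.standard_normal((p, n))
B = rng.standard_normal((q, n))
Phi = np.vstack([np.kron(A[:, j], B[:, j]) for j in range(n)]).T

# k-sparse signal and its single compressed observation vector
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x

# Row-major reshape of y gives the p x q matrix A @ diag(x) @ B.T,
# whose rank generically equals the sparsity order
Y = y.reshape(p, q)
k_hat = np.linalg.matrix_rank(Y)
print(k_hat)  # 3 for generic A and B
```

The rank estimate works here because each column of the Khatri-Rao product is the Kronecker product of the corresponding factor columns, so the reshaped measurement is a sum of k rank-one terms.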

    Sparse Signal Recovery under Poisson Statistics

    We are motivated by problems that arise in a number of applications such as online marketing and explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems, the observations here are not identically distributed, since they are associated with different sensing modalities. We analyze the performance of a Maximum Likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts the sample complexity. We introduce a novel notion of Restricted Likelihood Perturbation (RLP) to jointly account for scale and sparsity. We derive sample complexity bounds for $\ell_1$-regularized ML estimators in terms of the RLP and further specialize these results for deterministic and random sensing matrix designs.
    Comment: 13 pages, 11 figures, 2 tables; submitted to IEEE Transactions on Signal Processing
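    A hedged sketch of l1-regularized Poisson ML estimation via proximal gradient descent (the nonnegative sensing model, dimensions, step size, and regularization weight are illustrative choices, not the paper's analysis setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: nonnegative sensing patterns and a nonnegative
# sparse rate vector keep the Poisson means strictly positive
m, n, k = 200, 50, 3
A = rng.uniform(0.5, 1.5, size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(2.0, 4.0, size=k)
y = rng.poisson(A @ x_true).astype(float)

lam, step, eps = 10.0, 1e-4, 1e-8

def objective(x):
    """l1-regularized negative Poisson log-likelihood (up to constants)."""
    mu = A @ x + eps
    return np.sum(mu - y * np.log(mu)) + lam * np.sum(np.abs(x))

# Proximal gradient descent: gradient step on the smooth NLL term,
# then soft-thresholding merged with the nonnegativity constraint
x = np.full(n, 0.1)
obj0 = objective(x)
for _ in range(3000):
    mu = A @ x + eps
    grad = A.T @ (1.0 - y / mu)
    x = np.maximum(x - step * grad - step * lam, 0.0)

print(objective(x) < obj0)       # True: the penalized NLL decreased
print(np.count_nonzero(x))       # far fewer than n active coordinates
```

The ML decoder the paper analyzes involves a non-linear but convex objective, which is why a simple first-order scheme like the one above remains tractable.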

    Subspace Methods for Joint Sparse Recovery

    We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank deficiency or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with subspace-based greedy algorithms, also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation that has been missing in previous studies of MUSIC.
    Comment: submitted to IEEE Transactions on Information Theory, revised version
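    A minimal sketch of plain MUSIC in the favorable noiseless, full-row-rank case described above (SA-MUSIC targets the rank-deficient case, which this toy example does not cover; all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative dimensions: measurements, signal dim, sparsity, snapshots
m, n, k, T = 20, 60, 4, 10
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns

# Jointly k-sparse signals whose nonzero rows are linearly independent
support = np.sort(rng.choice(n, size=k, replace=False))
X = np.zeros((n, T))
X[support] = rng.standard_normal((k, T))
Y = A @ X                                  # multiple measurement vectors

# Signal subspace = span of the top-k left singular vectors of Y
U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]

# MUSIC criterion: a column belongs to the support iff it lies
# (numerically) inside the signal subspace, i.e. zero residual
residual = np.linalg.norm(A - U @ (U.T @ A), axis=0)
support_hat = np.sort(np.argsort(residual)[:k])
print(np.array_equal(support_hat, support))  # True in this generic setup
```

When the snapshots are too few or too correlated, the estimated signal subspace has dimension below k and this criterion breaks down, which is the failure mode SA-MUSIC repairs by augmenting the subspace.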

    Scalable network-wide anomaly detection using compressed data

    Detecting network traffic volume anomalies in real time is a key problem, as it enables measures to be taken to prevent network congestion, which severely affects end users. Several techniques based on principal component analysis (PCA) have been outlined in the past which detect volume anomalies as outliers in the residual subspace. However, these methods are not scalable to networks with a large number of links. We address this scalability issue with a new approach inspired by the recently developed compressed sensing (CS) theory. This theory induces a universal information sampling scheme right at the network sensory level to reduce the data overhead. Specifically, we exploit the compressibility of the network data and describe a framework for anomaly detection in the compressed domain. Our main theoretical contribution is a detailed analysis of the new approach which obtains probabilistic bounds on the principal eigenvalues of the compressed data. Subsequently, we prove that volume anomaly detection using compressed data can achieve performance equivalent to that using the original uncompressed data while significantly reducing the computational cost. The experimental results on both the Abilene and synthetic datasets support our theoretical findings and demonstrate the advantages of the new approach over the existing methods.
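    A toy sketch of residual-subspace anomaly detection in the compressed domain (the traffic model, dimensions, and random projection are illustrative assumptions, not the paper's setup or data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative traffic model: normal link traffic lies near a
# low-dimensional subspace; one snapshot carries a volume spike
links, r, T, m = 100, 3, 500, 30
basis = rng.standard_normal((links, r))
data = basis @ rng.standard_normal((r, T))            # low-rank traffic
data += 0.1 * rng.standard_normal((links, T))         # small noise
anomalous_t = 250
spike = 50 * rng.standard_normal(links) * (rng.random(links) < 0.05)
data[:, anomalous_t] += spike                          # volume anomaly

# Compress each snapshot with a random projection (CS-style sketch)
Phi = rng.standard_normal((m, links)) / np.sqrt(m)
Z = Phi @ data

# PCA in the compressed domain: energy outside the top-r principal
# subspace (the residual subspace) flags volume anomalies
centered = Z - Z.mean(axis=1, keepdims=True)
U = np.linalg.svd(centered, full_matrices=False)[0][:, :r]
resid = Z - U @ (U.T @ Z)
scores = np.linalg.norm(resid, axis=0)
print(int(np.argmax(scores)) == anomalous_t)
```

The projection shrinks each snapshot from `links` to `m` dimensions before any PCA is run, which is where the scalability gain over link-space PCA comes from.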

    Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization

    Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressive sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard thresholding step, with or without debiasing. We prove that our method enjoys strong guarantees analogous to those of HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidence shows that our method is superior to state-of-the-art greedy selection methods in sparse logistic regression and sparse precision matrix estimation tasks.
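    The iteration can be sketched in its original compressed-sensing form with debiasing (dimensions and the unit step size are illustrative choices; the paper generalizes this same template to arbitrary sparsity-constrained convex objectives):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative noiseless compressed-sensing instance
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true

x = np.zeros(n)
for _ in range(50):
    u = x - A.T @ (A @ x - y)           # gradient step on 0.5*||Ax - y||^2
    S = np.argsort(np.abs(u))[-k:]      # hard thresholding: keep top-k
    x = np.zeros(n)
    x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]  # debiasing step

print(np.linalg.norm(x - x_true) < 1e-8)  # True: exact recovery
```

The debiasing least-squares solve on the selected support is what distinguishes HTP from plain iterative hard thresholding; replacing the squared-error gradient with the gradient of any smooth convex loss yields the generalized method the abstract describes.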