    Efficient and Robust Compressed Sensing using High-Quality Expander Graphs

    Expander graphs have recently been proposed for constructing efficient compressed sensing algorithms. In particular, it has been shown that any $n$-dimensional vector that is $k$-sparse (with $k \ll n$) can be fully recovered using $O(k \log \frac{n}{k})$ measurements and only $O(k \log n)$ simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond $3/4$ and show that, with the same number of measurements, only $O(k)$ recovery iterations are required, which is a significant improvement when $n$ is large. In fact, full recovery can be accomplished in at most $2k$ very simple iterations. The number of iterations can be made arbitrarily close to $k$, and the recovery algorithm can be implemented very efficiently using a simple binary search tree. We also show that by tolerating a small penalty on the number of measurements, but not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to obtain explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in the number of required measurements and in the recovery time complexity. Finally, we show how our analysis extends to give a robust algorithm that finds the positions and signs of the $k$ significant elements of an almost $k$-sparse signal and then, using very simple optimization techniques, finds in sublinear time a $k$-sparse signal which approximates the original signal with very high precision.

    Performance bounds for expander-based compressed sensing in Poisson noise

    This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. We develop a novel sensing paradigm based on expander graphs and propose a MAP algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process. Comment: revised version; accepted to IEEE Transactions on Signal Processing.
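The observation model underlying this sensing paradigm is easy to state concretely. The sketch below simulates only the forward model (a nonnegative sparse measurement matrix and signal-dependent Poisson counts), not the paper's MAP reconstruction; all parameter values and the packet-rate interpretation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, d, k = 50, 120, 6, 4

# Nonnegative 0/1 sensing matrix with d ones per column; positivity
# matters here because Poisson rates must be nonnegative.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# Sparse vector of (say) per-flow packet arrival rates.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(5.0, 20.0, size=k)

# Signal-dependent Poisson observations: y_i ~ Poisson((Ax)_i).
lam = A @ x
y = rng.poisson(lam)
```

Unlike additive Gaussian noise, the variance of each count y_i grows with the rate (Ax)_i, which is why the bounded-noise analyses cited above do not carry over directly.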

    A robust parallel algorithm for combinatorial compressed sensing

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k < n$ nonzeros can be recovered from an expander sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A)\log k)$ operations via the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumption that the nonzero entries of the signal and the noise are drawn from continuous distributions. Numerical experiments show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms at small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.

    Expander $\ell_0$-Decoding

    We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for solving a large underdetermined linear system of equations $y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at most $k < m$ nonzero entries and that $A$ is the adjacency matrix of an unbalanced left $d$-regular expander graph. The matrices in this class are sparse and allow a highly efficient implementation. A number of algorithms have been designed to work exclusively in this setting, composing the branch of combinatorial compressed sensing (CCS). Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise $\|y - A\hat{x}\|_0$ by combining two desirable features of previous CCS algorithms: the information-preserving strategy of ER and the parallel updating mechanism of SMP. We are able to link these elements and guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that the signal is dissociated, meaning that the $2^k$ subset sums of the support of $x$ are pairwise different. However, we observe empirically that the signal need not be exactly dissociated in practice. Moreover, we observe that Serial-$\ell_0$ and Parallel-$\ell_0$ solve large-scale problems with a larger fraction of nonzeros than other algorithms when the number of measurements is substantially less than the signal length; in particular, they reliably recover a $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ expander measurements with $n/m = 10^3$ and $k/m$ up to four times greater than what is achievable by $\ell_1$-regularization from dense Gaussian measurements. Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ solve large problem sizes in substantially less time than other compressed sensing algorithms; in particular, Parallel-$\ell_0$ is structured to take advantage of massively parallel architectures. Comment: 14 pages, 10 figures.
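The core idea of this family of decoders (look for a value repeated across a column's neighbor residuals, then update that coordinate) can be sketched in a few lines. This is a simplified greedy variant written for illustration, not the authors' Serial-$\ell_0$ or Parallel-$\ell_0$; the random left-regular matrix and all parameter values are assumptions, and the signal uses distinct powers of two so it is dissociated and its subset sums are exact in floating point.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d, k = 80, 400, 7, 6

# Left d-regular 0/1 measurement matrix (a random stand-in for the
# adjacency matrix of an unbalanced left d-regular expander graph).
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# Dissociated k-sparse signal: distinct powers of two, so every subset
# sum is distinct and exactly representable as a float.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 2.0 ** rng.choice(20, size=k, replace=False)
y = A @ x

# Greedy l0-residual descent: find the column whose neighbor residuals
# agree most often on a nonzero value, and accept the update only if it
# strictly reduces ||y - A x_hat||_0.
x_hat = np.zeros(n)
for _ in range(4 * k):
    r = y - A @ x_hat
    if not np.any(r):
        break
    best = (0, None, 0.0)  # (agreement count, column index, value)
    for j in range(n):
        uniq, counts = np.unique(r[A[:, j] == 1.0], return_counts=True)
        for v, c in zip(uniq, counts):
            if v != 0.0 and c > best[0]:
                best = (c, j, v)
    if best[1] is None:
        break
    cand = x_hat.copy()
    cand[best[1]] += best[2]
    if np.count_nonzero(y - A @ cand) < np.count_nonzero(r):
        x_hat = cand
    else:
        break
```

Because an update is accepted only when it strictly shrinks the residual's support, the loop monotonically decreases $\|y - A\hat{x}\|_0$ and terminates; with a good expander and a dissociated signal it typically drives the residual to zero.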

    Sparse Recovery of Positive Signals with Minimal Expansion

    We investigate the sparse recovery problem of reconstructing a high-dimensional non-negative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications, such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than $\ell_1$ minimization. However, to date, constructions based on expanders have required very high expansion coefficients, which can potentially make the construction of such graphs difficult and the size of the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of non-negative vectors, using perturbations of the adjacency matrix of an expander graph with much smaller expansion coefficient. We present a necessary and sufficient condition for $\ell_1$ optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a "unique" vector in the constraint set, which opens the door to alternative algorithms to $\ell_1$ minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible, and that our construction is therefore tight. We also present a novel recovery algorithm that exploits expansion and is much faster than $\ell_1$ optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity. Comment: 25 pages, submitted for publication.
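The $\ell_1$ baseline these constructions are compared against is particularly simple in the non-negative case: since $\|x\|_1 = \mathbf{1}^T x$ when $x \ge 0$, the recovery problem is a plain linear program. The sketch below uses an illustrative random 0/1 matrix with $d$ ones per column (not the paper's perturbed expander construction) and SciPy's LP solver; all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, d, k = 60, 150, 8, 5

# Illustrative sparse 0/1 measurement matrix: d ones per column,
# standing in for the expander-based construction described above.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# Non-negative k-sparse ground truth and its measurements.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x

# For x >= 0, minimizing ||x||_1 subject to Ax = y is the LP:
#     min 1^T x   s.t.   A x = y,   x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x
```

When the "unique vector in the constraint set" condition mentioned above holds, the feasible region $\{x \ge 0 : Ax = y\}$ contains only the true signal, so any feasibility-finding routine (not just this LP) recovers it.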

    Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing

    We revisit the probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random with replacement. These matrices are in one-to-one correspondence with the adjacency matrices of fixed left-degree expander graphs. We present formulae for the expected cardinality of the set of neighbors for these graphs, and tail bounds on the probability that this cardinality falls below its expected value. From these bounds we deduce similar bounds on the expansion of the graph, which is of interest in many applications. The bounds are derived through a detailed analysis of collisions in unions of sets, the key to which is a novel dyadic splitting technique. This analysis yields better order constants, which allow for quantitative theorems on the existence of lossless expander graphs, and hence of the sparse random matrices we consider, as well as quantitative compressed sensing sampling theorems for sparse non-mean-zero measurement matrices. Comment: 17 pages, 12 Postscript figures.
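The construction itself takes only a few lines to reproduce. The sketch below draws such a matrix, counts the neighbor set $|N(S)|$ of a random column set $S$, and checks it against the standard balls-in-bins occupancy expectation $m(1 - (1 - 1/m)^{d|S|})$ for draws with replacement; this elementary formula is used here only as an illustrative point of comparison, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 200, 1000, 8

# Each of the n columns gets d row indices drawn uniformly at random
# *with* replacement; repeated draws collapse, so a column may carry
# fewer than d distinct nonzeros.
A = np.zeros((m, n), dtype=np.int8)
for j in range(n):
    A[rng.integers(0, m, size=d), j] = 1

def neighbor_count(A, S):
    """|N(S)|: number of rows touched by at least one column in S."""
    return int(np.count_nonzero(A[:, S].sum(axis=1)))

s = 12
S = rng.choice(n, size=s, replace=False)

# Occupancy expectation for d*s uniform draws with replacement:
#   E|N(S)| = m * (1 - (1 - 1/m)^(d*s)).
expected = m * (1.0 - (1.0 - 1.0 / m) ** (d * s))

# Monte Carlo check of the occupancy formula.
trials = 500
mc = np.mean([
    len(np.unique(rng.integers(0, m, size=d * s))) for _ in range(trials)
])
```

The gap between $|N(S)|$ and its maximum $d|S|$ is exactly the collision count the paper's dyadic splitting analysis bounds; lossless expansion asks that this gap stay a small fraction of $d|S|$ for every small $S$ simultaneously.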