
    Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing

    We revisit the probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random with replacement. These matrices have a one-to-one correspondence with the adjacency matrices of fixed left-degree expander graphs. We present formulae for the expected cardinality of the set of neighbors of these graphs, and give tail bounds on the probability that this cardinality will be less than the expected value. From these bounds we deduce similar bounds on the expansion of the graph, which is of interest in many applications. These bounds are derived through a more detailed analysis of collisions in unions of sets, the key to which is a novel dyadic splitting technique. The analysis yields improved order constants that allow for quantitative theorems on the existence of lossless expander graphs, and hence of the sparse random matrices we consider, as well as quantitative compressed sensing sampling theorems for sparse non-mean-zero measurement matrices.
    Comment: 17 pages, 12 Postscript figures
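    As a concrete illustration of the construction described above, the following minimal sketch (assuming NumPy; function names and dimensions are ours, purely illustrative) draws such a matrix and computes the neighbor-set cardinality whose expectation and tail bounds the paper studies:

```python
import numpy as np

def sparse_expander_matrix(n, N, d, seed=None):
    """Draw an n x N binary matrix with d ones per column, the row indices of
    each column chosen uniformly at random WITH replacement (so a column may
    hold fewer than d distinct nonzeros when a collision occurs)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, N), dtype=np.int8)
    for j in range(N):
        A[rng.integers(0, n, size=d), j] = 1
    return A

def neighbour_count(A, cols):
    """Cardinality of the set of neighbors (rows touched) of a set of columns."""
    return int(np.count_nonzero(A[:, list(cols)].sum(axis=1)))

# Illustrative sizes: a column set of size s = 16 touches at most s*d = 128 rows,
# fewer whenever collisions occur.
A = sparse_expander_matrix(n=256, N=1024, d=8, seed=0)
print(neighbour_count(A, range(16)))
```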

    On the construction of sparse matrices from expander graphs

    We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in [4]. With improved bounds we derive a new, reduced sample complexity for the number of nonzeros per column of these matrices, namely $d = \mathcal{O}\left(\log_s(N/s)\right)$, as opposed to the standard $d = \mathcal{O}\left(\log(N/s)\right)$. This gives insight into why using small $d$ performed well in numerical experiments involving such matrices. Furthermore, we derive quantitative sampling theorems for our constructions which show them outperforming the existing state of the art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching.
    Comment: 28 pages, 4 figures
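    To see why the reduced scaling permits a much smaller $d$, note that $\log_s(N/s) = \log(N/s)/\log s$. A toy numerical comparison, with the order constants suppressed (values illustrative, not taken from the paper):

```python
import math

# Both expressions are evaluated with the hidden O(.) constant set to 1,
# purely to show the difference in growth for one choice of (N, s).
N, s = 2**20, 2**6
d_standard = math.log(N / s)                  # O(log(N/s))
d_reduced = math.log(N / s) / math.log(s)     # O(log_s(N/s)) = log(N/s)/log(s)
print(d_standard, d_reduced)                  # roughly 9.7 vs 2.3
```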

    Bounds of restricted isometry constants in extreme asymptotics: formulae for Gaussian matrices

    Restricted Isometry Constants (RICs) provide a measure of how far from an isometry a matrix can be when acting on sparse vectors. This, and related quantities, provide a mechanism by which standard eigen-analysis can be applied to topics relying on sparsity. RIC bounds have been presented for a variety of random matrices and ranges of matrix dimensions and sparsity. We provide explicit formulae for RIC bounds of $n \times N$ Gaussian matrices with sparsity $k$ in three settings: a) $n/N$ fixed and $k/n$ approaching zero, b) $k/n$ fixed and $n/N$ approaching zero, and c) $n/N$ approaching zero with $k/n$ decaying inverse-logarithmically in $N/n$. In these three settings the RICs a) decay to zero, b) become unbounded (or approach inherent bounds), and c) approach a non-zero constant. Implications of these results for RIC-based analysis of compressed sensing algorithms are presented.
    Comment: 40 pages, 5 figures
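    For readers wanting to experiment, a minimal Monte Carlo sketch of the RIC of a Gaussian matrix is below. Since maximising over all $\binom{N}{k}$ supports is intractable, sampling supports only gives a lower bound on the true constant; dimensions and function names are illustrative, not from the paper.

```python
import numpy as np

def ric_lower_bound(A, k, trials=200, seed=None):
    """Monte Carlo lower bound on the restricted isometry constant of order k:
    the largest deviation of an eigenvalue of A_S^T A_S from 1 over sampled
    supports S with |S| = k."""
    rng = np.random.default_rng(seed)
    n, N = A.shape
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(N, size=k, replace=False)
        evals = np.linalg.eigvalsh(A[:, S].T @ A[:, S])   # ascending order
        worst = max(worst, abs(evals[0] - 1.0), abs(evals[-1] - 1.0))
    return worst

# Gaussian matrix scaled so that columns have unit expected squared norm.
n, N, k = 128, 512, 8
A = np.random.default_rng(1).normal(size=(n, N)) / np.sqrt(n)
print(ric_lower_bound(A, k, seed=2))
```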

    Counting faces of randomly-projected polytopes when the projection radically lowers dimension

    This paper develops asymptotic methods to count faces of random high-dimensional polytopes. Beyond its intrinsic interest, our conclusions have surprising implications in statistics, probability, information theory, and signal processing, with potential impact on practical subjects such as medical imaging and digital communications. Three such implications concern: convex hulls of Gaussian point clouds, signal recovery from random projections, and how many gross errors can be efficiently corrected in Gaussian error-correcting codes.
    Comment: 56 pages
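    The second implication, signal recovery from random projections, corresponds to $\ell_1$ minimisation succeeding exactly when certain faces of the projected cross-polytope survive. A minimal sketch of one such recovery instance (assuming SciPy's linprog; names and sizes are illustrative, and the linear-program split $x = u - v$ is a standard reformulation, not the paper's method):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Recover a sparse x from y = A x by l1 minimisation, posed as a linear
    program over the nonnegative split x = u - v, u, v >= 0."""
    n, N = A.shape
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

# A Gaussian projection that radically lowers dimension, and a sparse vector
# well inside the recovery region, so l1 typically recovers it exactly.
rng = np.random.default_rng(4)
n, N, k = 60, 300, 5
A = rng.normal(size=(n, N))
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
print(np.allclose(l1_recover(A, A @ x_true), x_true, atol=1e-6))
```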

    Expander $\ell_0$-Decoding

    We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for solving a large underdetermined linear system of equations $y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at most $k < m$ nonzero entries and that $A$ is the adjacency matrix of an unbalanced left $d$-regular expander graph. The matrices in this class are sparse and allow a highly efficient implementation. A number of algorithms have been designed to work exclusively in this setting, composing the branch of combinatorial compressed sensing (CCS). Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise $\|y - A\hat{x}\|_0$ by successfully combining two desirable features of previous CCS algorithms: the information-preserving strategy of ER and the parallel updating mechanism of SMP. We are able to link these elements and guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that the signal is dissociated, meaning that all of the $2^k$ subset sums of the support of $x$ are pairwise different. However, we observe empirically that the signal need not be exactly dissociated in practice. Moreover, we observe Serial-$\ell_0$ and Parallel-$\ell_0$ to be able to solve large-scale problems with a larger fraction of nonzeros than other algorithms when the number of measurements is substantially less than the signal length; in particular, they are able to reliably solve for a $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ expander measurements with $n/m = 10^3$ and $k/m$ up to four times greater than what is achievable by $\ell_1$-regularization from dense Gaussian measurements. Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to solve large problem sizes in substantially less time than other algorithms for compressed sensing. In particular, Parallel-$\ell_0$ is structured to take advantage of massively parallel architectures.
    Comment: 14 pages, 10 figures
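    The combinatorial idea behind such $\ell_0$-decoders can be illustrated with a toy serial sweep. The sketch below is not the authors' Serial-$\ell_0$ or Parallel-$\ell_0$, only a simplified majority-agreement rule built on the same dissociated-signal intuition; all names, thresholds, and sizes are illustrative assumptions.

```python
import numpy as np

def toy_expander_l0_decode(A, y, max_iter=30):
    """Toy serial decoder (NOT the paper's Serial-l0/Parallel-l0): sweep the
    columns, and whenever one nonzero value v occupies a strict majority of the
    residual entries that column j touches, conclude that x_j = v. For a
    dissociated signal and a good expander, such agreement is very unlikely to
    arise by coincidence."""
    n, N = A.shape
    x = np.zeros(N)
    r = y.astype(float).copy()                 # residual r = y - A x
    for _ in range(max_iter):
        if not np.any(r):                      # ||y - A x||_0 = 0: decoded
            break
        changed = False
        for j in range(N):
            rows = np.flatnonzero(A[:, j])
            vals, counts = np.unique(r[rows], return_counts=True)
            keep = vals != 0
            vals, counts = vals[keep], counts[keep]
            if counts.size and counts.max() > len(rows) / 2:
                v = vals[np.argmax(counts)]
                x[j] += v
                r[rows] -= v                   # keep the residual consistent
                changed = True
        if not changed:
            break
    return x

# Illustrative sizes; powers of two make the signal dissociated (all subset
# sums of the support values are distinct), and integer values keep the
# floating-point equality tests in the decoder exact.
rng = np.random.default_rng(0)
n, N, d, k = 100, 400, 8, 4
A = np.zeros((n, N))
for j in range(N):
    A[rng.choice(n, size=d, replace=False), j] = 1.0
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = [1.0, 2.0, 4.0, 8.0]
x_hat = toy_expander_l0_decode(A, A @ x_true)
print(np.allclose(x_hat, x_true))              # typically True at these sizes
```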

    Performance Comparisons of Greedy Algorithms in Compressed Sensing

    Compressed sensing has motivated the development of numerous sparse approximation algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, greedy sparse approximation algorithms have been observed to be able to recover the sparsest solution for similar problem sizes as other algorithms while being computationally efficient; however, little theory is known for their average-case behavior. We conduct a large-scale empirical investigation into the behavior of three of the state-of-the-art greedy algorithms: NIHT, HTP, and CSMPSP. The investigation considers a variety of random classes of linear systems. The region of problem sizes in which each algorithm is able to reliably recover the sparsest solution is accurately determined, and throughout this region additional performance characteristics are presented. Contrasting the recovery regions and average computational time for each algorithm, we present algorithm selection maps which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Though no one algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of a large recovery region, low absolute recovery time, and robustness of these properties to additive noise and across a variety of problem classes. The algorithm selection maps presented here are the first of their kind for compressed sensing.
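    As a reminder of what these greedy methods compute, here is a simplified sketch of the NIHT iteration: a gradient step with an adaptively scaled step size, followed by hard thresholding onto the $k$ largest entries. The published algorithm's step-size safeguards and stopping criteria are omitted, so this is only an illustrative sketch.

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z and zero the rest."""
    out = np.zeros_like(z)
    idx = np.argpartition(np.abs(z), -k)[-k:]
    out[idx] = z[idx]
    return out

def niht(A, y, k, iters=200):
    """Simplified NIHT iteration (no safeguards or stopping rules)."""
    N = A.shape[1]
    x = hard_threshold(A.T @ y, k)
    for _ in range(iters):
        g = A.T @ (y - A @ x)                  # gradient of 0.5*||y - Ax||^2
        S = np.flatnonzero(x)
        gS = np.zeros(N)
        gS[S] = g[S]
        denom = np.linalg.norm(A @ gS) ** 2
        mu = np.linalg.norm(gS) ** 2 / denom if denom > 0 else 1.0
        x = hard_threshold(x + mu * g, k)
    return x

# Example: Gaussian measurements of a 10-sparse vector; the printed
# reconstruction error is typically small at these sizes.
rng = np.random.default_rng(1)
n, N, k = 100, 400, 10
A = rng.normal(size=(n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
print(np.linalg.norm(niht(A, A @ x_true, k) - x_true))
```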

    A robust parallel algorithm for combinatorial compressed sensing

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k < n$ nonzeros can be recovered from an expander sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A)\log k)$ operations via the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios, in the setting of Gaussian signals and Gaussian additive noise.
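    The noisy analogue of the exact-agreement test used in the noiseless setting can be sketched crudely as follows. The paper derives analytic posterior approximations; this toy merely weights nearby residual values by a Gaussian likelihood, and its 3-sigma window and scoring rule are illustrative assumptions rather than the paper's derivation.

```python
import numpy as np

def noisy_agreement_score(r_neigh, sigma):
    """Crude stand-in for posterior-style scoring: rather than counting residual
    entries that agree exactly (which fails under additive noise), score each
    candidate value by a Gaussian-likelihood weighting of the residual entries
    lying within a few noise standard deviations of it. Returns the best
    candidate value and its score."""
    best_val, best_score = 0.0, 0.0
    for v in r_neigh:
        w = np.exp(-0.5 * ((r_neigh - v) / sigma) ** 2)   # Gaussian weights
        score = w[np.abs(r_neigh - v) < 3 * sigma].sum()
        if score > best_score:
            best_val, best_score = float(v), float(score)
    return best_val, best_score

# Example: d = 7 noisy residual entries in which the true value 2.0 appears
# five times; the score concentrates on a candidate near 2.0.
rng = np.random.default_rng(3)
r = np.array([2.0, 2.0, 2.0, 2.0, 2.0, -1.3, 0.4]) + 0.05 * rng.normal(size=7)
print(noisy_agreement_score(r, sigma=0.05))
```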