
    On Model-Based RIP-1 Matrices

    The Restricted Isometry Property (RIP) is a fundamental property of a matrix enabling sparse recovery. Informally, an $m \times n$ matrix satisfies RIP of order $k$ in the $\ell_p$ norm if $\|Ax\|_p \approx \|x\|_p$ for any vector $x$ that is $k$-sparse, i.e., that has at most $k$ non-zeros. The minimal number of rows $m$ necessary for the property to hold has been extensively investigated, and tight bounds are known. Motivated by signal processing models, a recent work of Baraniuk et al. has generalized this notion to the case where the support of $x$ must belong to a given model, i.e., a given family of supports. This more general notion is much less understood, especially for norms other than $\ell_2$. In this paper we present tight bounds for the model-based RIP property in the $\ell_1$ norm. Our bounds hold for the two most frequently investigated models: tree-sparsity and block-sparsity. We also show implications of our results for sparse recovery problems. Comment: Version 3 corrects a few errors present in the earlier version. In particular, it states and proves correct upper and lower bounds for the number of rows in RIP-1 matrices for the block-sparse model. The bounds are of the form $k \log_b n$, not $k \log_k n$ as stated in the earlier version.
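    The RIP-1 condition discussed above is easy to probe numerically. The sketch below is illustrative only: the random binary matrix, the block size, and the sparsity level are assumptions on my part, not the construction analyzed in the paper. It draws a sparse 0/1 matrix with d ones per column, scales it by 1/d, and compares $\|Ax\|_1$ to $\|x\|_1$ for a block-sparse $x$.

```python
import numpy as np

# Illustrative check of the RIP-1 condition ||Ax||_1 ~ ||x||_1 for a
# block-sparse vector. The matrix is a generic sparse binary matrix with
# d ones per column (scaled by 1/d), not the paper's specific construction.
rng = np.random.default_rng(0)
n, m, d = 1024, 200, 8          # signal length, rows, ones per column (assumed)
block_size, num_blocks = 8, 4   # block-sparsity model parameters (assumed)

A = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=d, replace=False)
    A[rows, j] = 1.0 / d        # normalize so ||A e_j||_1 = 1 for every column

# Build a block-sparse x: num_blocks randomly chosen length-block_size blocks.
x = np.zeros(n)
starts = rng.choice(n // block_size, size=num_blocks, replace=False) * block_size
for start in starts:
    x[start:start + block_size] = rng.standard_normal(block_size)

ratio = np.linalg.norm(A @ x, 1) / np.linalg.norm(x, 1)
print(f"||Ax||_1 / ||x||_1 = {ratio:.3f}")   # close to 1 when RIP-1 holds
```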

    The Restricted Isometry Property of Subsampled Fourier Matrices

    A matrix $A \in \mathbb{C}^{q \times N}$ satisfies the restricted isometry property of order $k$ with constant $\varepsilon$ if it preserves the $\ell_2$ norm of all $k$-sparse vectors up to a factor of $1 \pm \varepsilon$. We prove that a matrix $A$ obtained by randomly sampling $q = O(k \cdot \log^2 k \cdot \log N)$ rows from an $N \times N$ Fourier matrix satisfies the restricted isometry property of order $k$ with a fixed $\varepsilon$ with high probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math., 2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014). Comment: 16 pages.
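    As a quick empirical illustration of the construction (not of the proof), the sketch below subsamples $q$ rows of the $N \times N$ DFT matrix, rescales them, and measures how well the $\ell_2$ norm of a random $k$-sparse vector is preserved. All concrete sizes are assumed values chosen for a fast demo, not the paper's parameters.

```python
import numpy as np

# Subsample q rows of the N x N DFT matrix and check near-isometry on a
# random k-sparse vector. Sizes are small, assumed values for illustration;
# the paper's guarantee is q = O(k log^2 k log N) with high probability.
rng = np.random.default_rng(1)
N, k = 256, 5
q = 64                                   # assumed; larger q -> smaller distortion

F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
rows = rng.choice(N, size=q, replace=False)
A = F[rows, :] * np.sqrt(N / q)          # rescale so E ||Ax||_2^2 = ||x||_2^2

x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

distortion = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
print(f"||Ax||_2^2 / ||x||_2^2 = {distortion:.3f}")   # within 1 +/- eps if RIP holds
```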

    On the construction of sparse matrices from expander graphs

    We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in [4]. With better bounds we derive a reduced sample complexity for the number of non-zeros per column of these matrices, precisely $d = \mathcal{O}\left(\log_s(N/s)\right)$, as opposed to the standard $d = \mathcal{O}\left(\log(N/s)\right)$. This gives insight into why small values of $d$ performed well in numerical experiments involving such matrices. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state of the art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching. Comment: 28 pages, 4 figures.
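    A minimal sketch of the kind of matrix being analyzed, assuming the simplest probabilistic construction ($d$ distinct ones placed uniformly at random in each column); the values of $N$, $s$, and the formulas for $d$ below are illustrative, not the paper's tuned constants.

```python
import math
import numpy as np

def sparse_binary_matrix(m, N, d, seed=0):
    """Random m x N binary matrix with exactly d ones per column.

    Mirrors the generic probabilistic construction of an expander adjacency
    matrix: column j lists the d neighbours of left vertex j.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((m, N), dtype=np.int8)
    for j in range(N):
        A[rng.choice(m, size=d, replace=False), j] = 1
    return A

# Illustrative sizes (assumed). The paper's point is that d ~ log_s(N/s)
# suffices, which is smaller than the usual d ~ log(N/s).
N, s, m = 4096, 16, 400
d_standard = math.ceil(math.log(N / s))
d_reduced = math.ceil(math.log(N / s) / math.log(s))
print(f"standard d ~ log(N/s)   = {d_standard}")
print(f"reduced  d ~ log_s(N/s) = {d_reduced}")

A = sparse_binary_matrix(m, N, d_reduced)
print("ones per column:", A.sum(axis=0)[:5], "...")
```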

    Restricted Isometry Property for General p-Norms

    The Restricted Isometry Property (RIP) is a fundamental property of a matrix which enables sparse recovery. Informally, an $m \times n$ matrix satisfies RIP of order $k$ for the $\ell_p$ norm if $\|Ax\|_p \approx \|x\|_p$ for every vector $x$ with at most $k$ non-zero coordinates. For every $1 \leq p < \infty$ we obtain almost tight bounds on the minimum number of rows $m$ necessary for the RIP property to hold. Prior to this work, only the cases $p = 1$, $p = 1 + 1/\log k$, and $p = 2$ were studied. Interestingly, our results show that the case $p = 2$ is a "singularity" point: the optimal number of rows $m$ is $\widetilde{\Theta}(k^{p})$ for all $p \in [1,\infty) \setminus \{2\}$, as opposed to $\widetilde{\Theta}(k)$ for $p = 2$. We also obtain almost tight bounds for the column sparsity of RIP matrices and discuss implications of our results for the Stable Sparse Recovery problem. Comment: An extended abstract of this paper is to appear at the 31st International Symposium on Computational Geometry (SoCG 2015).
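    To make the singularity at $p = 2$ concrete, the toy computation below evaluates the leading-order row counts $k^p$ versus $k$ for a few values of $p$, with constants and polylog factors dropped; the numbers are illustrative only.

```python
# Leading-order row counts from the stated bounds, ignoring constants and
# polylog factors: m ~ k^p for p != 2, but only m ~ k at p = 2.
k = 100
for p in (1.0, 1.5, 2.0, 3.0):
    rows = k if p == 2.0 else k ** p
    print(f"p = {p:.1f}: m ~ {rows:,.0f}")
```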

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated \emph{structured} sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures.
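    As a small illustration of the discrete side of this program, the snippet below projects a vector onto signals supported on at most a given number of groups by keeping the groups with the largest $\ell_2$ energy. It is a sketch assuming non-overlapping, equal-length groups, which is only one special case of the group-sparse model discussed in the chapter.

```python
import numpy as np

def group_hard_threshold(x, group_size, num_groups_kept):
    """Project x onto vectors supported on at most `num_groups_kept`
    non-overlapping, contiguous groups of length `group_size`.

    A sketch of the discrete (combinatorial) projection used in structured
    sparsity: keep the groups with the largest l2 energy, zero the rest.
    """
    x = np.asarray(x, dtype=float)
    groups = x.reshape(-1, group_size)              # assumes len(x) % group_size == 0
    energy = np.linalg.norm(groups, axis=1)
    keep = np.argsort(energy)[-num_groups_kept:]    # indices of the strongest groups
    mask = np.zeros_like(groups)
    mask[keep] = 1.0
    return (groups * mask).reshape(-1)

rng = np.random.default_rng(2)
x = rng.standard_normal(24)
print(group_hard_threshold(x, group_size=4, num_groups_kept=2))
```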

    Toward a unified theory of sparse dimensionality reduction in Euclidean space

    Let $\Phi \in \mathbb{R}^{m \times n}$ be a sparse Johnson-Lindenstrauss transform [KN14] with $s$ non-zeroes per column. For a subset $T$ of the unit sphere, $\varepsilon \in (0,1/2)$ given, we study settings for $m, s$ required to ensure $\mathop{\mathbb{E}}_\Phi \sup_{x\in T} \left| \|\Phi x\|_2^2 - 1 \right| < \varepsilon$, i.e. so that $\Phi$ preserves the norm of every $x \in T$ simultaneously and multiplicatively up to $1+\varepsilon$. We introduce a new complexity parameter, which depends on the geometry of $T$, and show that it suffices to choose $s$ and $m$ such that this parameter is small. Our result is a sparse analog of Gordon's theorem, which was concerned with a dense $\Phi$ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also implies new results in using the sparse Johnson-Lindenstrauss transform in numerical linear algebra, classical and model-based compressed sensing, manifold learning, and constrained least squares problems such as the Lasso.
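    A minimal sketch of a sparse JL map in the spirit described here: the specific construction below ($s$ uniformly random signed entries of magnitude $1/\sqrt{s}$ per column) is one standard variant chosen for illustration and is an assumption on my part, not necessarily the [KN14] construction.

```python
import numpy as np

def sparse_jl(m, n, s, seed=0):
    """m x n sparse JL-style matrix with s nonzeros per column.

    Each column gets s entries of value +/- 1/sqrt(s) in uniformly random
    rows. This is one common sparse-embedding variant, shown for illustration.
    """
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

rng = np.random.default_rng(3)
n, m, s = 2000, 200, 4                        # assumed toy sizes
Phi = sparse_jl(m, n, s)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                        # a point on the unit sphere
print(abs(np.linalg.norm(Phi @ x) ** 2 - 1))  # small when the embedding works
```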

    On the Construction of Sparse Matrices From Expander Graphs

    We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in Bah and Tanner [1]. With better bounds we derive a reduced sample complexity for $d$, the number of non-zeros per column of these matrices (or, equivalently, the left-degree of the underlying expander graph): precisely $d = \mathcal{O}(\log_s(N/s))$, as opposed to the standard $d = \mathcal{O}(\log(N/s))$, where $N$ is the number of columns of the matrix (also the cardinality of the set of left vertices of the expander graph) or the ambient dimension of the signals that can be sensed by such matrices. This gives insight into why such sensing matrices with small $d$ performed well in numerical compressed sensing experiments. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state of the art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching.
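    Complementing the construction sketched earlier, the brute-force check below verifies the defining expansion property on a toy instance: every left set $S$ with $|S| \le s$ must reach at least $(1-\varepsilon)\, d\, |S|$ distinct right vertices. The sizes and $\varepsilon$ are assumed toy values, and the exhaustive enumeration is feasible only because the graph is tiny.

```python
import itertools
import numpy as np

# Brute-force check of the (s, d, eps)-expansion property on a tiny random
# left-d-regular bipartite graph. Toy sizes only: enumerating all left
# subsets of size <= s is exponential in general.
rng = np.random.default_rng(4)
N, m, d = 30, 20, 4      # left vertices, right vertices, left degree (assumed)
s, eps = 3, 0.25         # set size and expansion slack (assumed)

neighbours = [set(rng.choice(m, size=d, replace=False)) for _ in range(N)]

is_expander = True
for size in range(1, s + 1):
    for S in itertools.combinations(range(N), size):
        reached = set().union(*(neighbours[j] for j in S))
        if len(reached) < (1 - eps) * d * size:
            is_expander = False

print("satisfies the (s, d, eps)-expansion property:", is_expander)
```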