13 research outputs found
On Model-Based RIP-1 Matrices
The Restricted Isometry Property (RIP) is a fundamental property of a matrix
enabling sparse recovery. Informally, an $m \times n$ matrix satisfies RIP of order $k$
in the $\ell_p$ norm if $\|Ax\|_p \approx \|x\|_p$ for any vector $x$ that is $k$-sparse,
i.e., that has at most $k$ non-zeros. The minimal number of rows $m$ necessary for
the property to hold has been extensively investigated, and tight bounds are
known. Motivated by signal processing models, a recent work of Baraniuk et al.
has generalized this notion to the case where the support of $x$ must belong to a
given model, i.e., a given family of supports. This more general notion is much
less understood, especially for norms other than $\ell_2$. In this paper we present
tight bounds for the model-based RIP property in the $\ell_1$ norm. Our bounds hold
for the two most frequently investigated models: tree-sparsity and
block-sparsity. We also show implications of our results for sparse recovery
problems.
Comment: Version 3 corrects a few errors present in the earlier version. In
particular, it states and proves correct upper and lower bounds for the
number of rows in RIP-1 matrices for the block-sparse model. The bounds are
of the form $k \log_b n$, not $k \log_k n$, as stated in the earlier version.
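To make the two models concrete, here is a minimal Python sketch (not from the paper; the indexing conventions are my assumptions): tree-sparsity is checked against the complete binary tree on coordinates $0, \dots, n-1$ (parent of $i$ is $\lfloor (i-1)/2 \rfloor$), and block-sparsity against $n/b$ contiguous blocks of length $b$.

import numpy as np

def is_tree_sparse(x, k):
    """Check that supp(x) has size <= k and forms a rooted, connected
    subtree of the complete binary tree on indices 0..n-1
    (parent of node i is (i - 1) // 2)."""
    support = np.flatnonzero(x)
    if len(support) > k:
        return False
    s = set(support)
    # Every non-root support node must have its parent in the support.
    return all(i == 0 or (i - 1) // 2 in s for i in s)

def is_block_sparse(x, k, b):
    """Check that x is supported on at most k/b of the n/b
    contiguous length-b blocks."""
    blocks = x.reshape(-1, b)                    # one row per block
    active = np.count_nonzero(np.abs(blocks).sum(axis=1))
    return active <= k // b

x = np.zeros(8)
x[[0, 1, 4]] = 1.0                   # a rooted subtree: 0 -> 1 -> 4
print(is_tree_sparse(x, k=3))        # True
print(is_block_sparse(x, k=4, b=2))  # 2 active blocks <= 4 // 2, so True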
The Restricted Isometry Property of Subsampled Fourier Matrices
A matrix $A \in \mathbb{C}^{q \times N}$ satisfies the restricted isometry
property of order $k$ with constant $\varepsilon$ if it preserves the $\ell_2$
norm of all $k$-sparse vectors up to a factor of $1 \pm \varepsilon$. We prove
that a matrix obtained by randomly sampling $q = O(k \cdot \log^2 k \cdot \log N)$
rows from an $N \times N$ Fourier matrix satisfies the restricted
isometry property of order $k$ with a fixed $\varepsilon$ with high
probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math.,
2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).
Comment: 16 pages
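A quick empirical companion to the statement (my sketch, with illustrative sizes rather than the paper's bound $q = O(k \cdot \log^2 k \cdot \log N)$): subsample and rescale rows of the unitary DFT matrix, then probe the distortion on random $k$-sparse vectors. A genuine RIP certificate would need all $\binom{N}{k}$ supports, not random ones.

import numpy as np

rng = np.random.default_rng(0)
N, k, q = 256, 4, 120            # illustrative sizes, not the paper's bound

# Unitary DFT matrix; sample q rows uniformly and rescale by sqrt(N/q)
# so that E ||Ax||_2^2 = ||x||_2^2.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
rows = rng.choice(N, size=q, replace=False)
A = F[rows] * np.sqrt(N / q)

worst = 0.0
for _ in range(2000):
    x = np.zeros(N, dtype=complex)
    supp = rng.choice(N, size=k, replace=False)
    x[supp] = rng.standard_normal(k)             # random k-sparse vector
    ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
    worst = max(worst, abs(ratio**2 - 1.0))

print(f"max |(||Ax||/||x||)^2 - 1| over random k-sparse x: {worst:.3f}")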
On the construction of sparse matrices from expander graphs
We revisit the asymptotic analysis of probabilistic construction of adjacency
matrices of expander graphs proposed in [4]. With better bounds we derive a
new, reduced sample complexity for $d$, the number of non-zeros per column of
these matrices: precisely $d = O(\log_s(N/s))$, as opposed to the standard
$d = O(\log(N/s))$, where $N$ is the number of columns and $s$ the sparsity
level. This gives insight into why using such matrices with small $d$
performed well in numerical experiments. Furthermore, we derive quantitative
sampling theorems for our constructions which show our construction
outperforming the existing state-of-the-art. We also use our results to
compare the performance of sparse recovery algorithms where these matrices are
used for linear sketching.
Comment: 28 pages, 4 figures
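For illustration, a minimal sketch (mine; sizes are arbitrary and the check is only a random probe, not a proof of expansion): the adjacency matrix of a random left-$d$-regular bipartite graph always satisfies $\|Ax\|_1 \le d\|x\|_1$, and lossless expansion corresponds to a matching lower bound $\|Ax\|_1 \ge d(1 - 2\epsilon)\|x\|_1$ on sparse vectors.

import numpy as np

rng = np.random.default_rng(1)
N, m, d, s = 1000, 100, 8, 10    # illustrative sizes, not tuned to the theorems

# Adjacency matrix of a random left-d-regular bipartite graph:
# each of the N columns gets d ones in distinct random rows.
A = np.zeros((m, N))
for j in range(N):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# Empirical RIP-1 lower constant: min ||Ax||_1 / (d ||x||_1) over
# random s-sparse x (a value of 1.0 would mean no collisions at all).
ratios = []
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.abs(A @ x).sum() / (d * np.abs(x).sum()))

print(f"empirical min ratio: {min(ratios):.3f}")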
Restricted Isometry Property for General p-Norms
The Restricted Isometry Property (RIP) is a fundamental property of a matrix
which enables sparse recovery. Informally, an $m \times n$ matrix satisfies RIP
of order $k$ for the $\ell_p$ norm if $\|Ax\|_p \approx \|x\|_p$ for every
vector $x$ with at most $k$ non-zero coordinates.
For every $1 \le p < \infty$ we obtain almost tight bounds on the minimum
number of rows necessary for the RIP property to hold. Prior to this work,
only the cases $p = 1$, $p = 1 + 1/\log k$, and $p = 2$ were studied. Interestingly,
our results show that the case $p = 2$ is a "singularity" point: the optimal
number of rows is $\tilde{\Theta}(k^p)$ for all $p \in [1, \infty) \setminus \{2\}$,
as opposed to $\tilde{\Theta}(k)$ for $p = 2$.
We also obtain almost tight bounds for the column sparsity of RIP matrices
and discuss implications of our results for the Stable Sparse Recovery problem.
Comment: An extended abstract of this paper is to appear at the 31st
International Symposium on Computational Geometry (SoCG 2015).
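To make the definition above concrete, a naive empirical probe (my sketch, not the paper's construction): estimate how far $\|Ax\|_p / \|x\|_p$ strays from $1$ over random $k$-sparse vectors. This only lower-bounds the true RIP-$p$ constant, since certifying it requires all supports.

import numpy as np

def rip_distortion_probe(A, k, p, trials=2000, seed=0):
    """Empirically probe how far ||Ax||_p / ||x||_p strays from 1
    over random k-sparse x; a lower bound on the true RIP-p constant."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        ratio = np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x, ord=p)
        worst = max(worst, abs(ratio - 1.0))
    return worst

# Example: a Gaussian matrix normalized for the p = 2 case.
rng = np.random.default_rng(0)
m, n, k = 120, 512, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
print(rip_distortion_probe(A, k, p=2))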
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing one to
increase the interpretability of the results and to achieve better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
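As a small worked example of the discrete viewpoint (mine, using the simplest non-overlapping group structure): the Euclidean projection onto the block-sparse model keeps the blocks with the largest $\ell_2$ energy and zeroes out the rest.

import numpy as np

def project_block_sparse(x, block_size, num_blocks_kept):
    """Euclidean projection onto the block-sparse model: keep the
    num_blocks_kept contiguous blocks of x with largest l2 energy."""
    blocks = x.reshape(-1, block_size)
    energy = (blocks ** 2).sum(axis=1)
    keep = np.argsort(energy)[-num_blocks_kept:]     # highest-energy blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

x = np.array([0.1, 0.2, 3.0, 2.0, 0.0, 0.1, 1.5, 1.0])
print(project_block_sparse(x, block_size=2, num_blocks_kept=2))
# -> [0.  0.  3.  2.  0.  0.  1.5 1. ]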
Toward a unified theory of sparse dimensionality reduction in Euclidean space
Let $\Pi \in \mathbb{R}^{m \times n}$ be a sparse Johnson-Lindenstrauss
transform [KN14] with $s$ non-zeroes per column. For a subset $T$ of the unit
sphere, $\varepsilon \in (0, 1/2)$ given, we study settings for $m, s$ required to
ensure $\mathbb{E}_\Pi \sup_{x \in T} \left| \|\Pi x\|_2^2 - 1 \right| < \varepsilon$,
i.e. so that $\Pi$ preserves the norm of every $x \in T$
simultaneously and multiplicatively up to $1 \pm \varepsilon$. We
introduce a new complexity parameter, which depends on the geometry of $T$, and
show that it suffices to choose $m$ and $s$ such that this parameter is small.
Our result is a sparse analog of Gordon's theorem, which was concerned with a
dense $\Pi$ having i.i.d. Gaussian entries. We qualitatively unify several
results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and
Fourier-based restricted isometries. Our work also implies new results in using
the sparse Johnson-Lindenstrauss transform in numerical linear algebra,
classical and model-based compressed sensing, manifold learning, and
constrained least squares problems such as the Lasso.
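For concreteness, a minimal sketch of one standard sparse JL variant (my construction; not necessarily the exact distribution of [KN14]): exactly $s$ random signed entries $\pm 1/\sqrt{s}$ per column, probed on a finite set $T$ of points on the unit sphere.

import numpy as np

rng = np.random.default_rng(2)
n, m, s = 500, 60, 4             # illustrative sizes

# One common sparse JL variant: each column has exactly s nonzeros,
# placed in random rows, with random signs, scaled by 1/sqrt(s).
Pi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    Pi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# A finite test set T of points on the unit sphere.
T = rng.standard_normal((100, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

distortion = np.abs(np.linalg.norm(T @ Pi.T, axis=1) ** 2 - 1.0).max()
print(f"sup over T of | ||Pi x||^2 - 1 |: {distortion:.3f}")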
On the Construction of Sparse Matrices From Expander Graphs
We revisit the asymptotic analysis of probabilistic construction of adjacency matrices of expander graphs proposed in Bah and Tanner [1]. With better bounds we derive a new, reduced sample complexity for $d$, the number of non-zeros per column of these matrices (or, equivalently, the left-degree of the underlying expander graph): precisely $d = O(\log_s(N/s))$, as opposed to the standard $d = O(\log(N/s))$, where $N$ is the number of columns of the matrix (also the cardinality of the set of left vertices of the expander graph), i.e. the ambient dimension of the signals that can be sensed by such matrices. This gives insight into why sensing matrices with small $d$ performed well in numerical compressed sensing experiments. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state-of-the-art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching.
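To connect to the linear-sketching application mentioned above, a toy decoder (mine, far simpler than the recovery algorithms the paper compares): with a left-$d$-regular binary $A$, estimate each coordinate as the median of the $d$ sketch entries it participates in; this is accurate when few large coordinates collide in those rows.

import numpy as np

rng = np.random.default_rng(3)
N, m, d, s = 1000, 200, 9, 5

# Left-d-regular binary sketching matrix; remember each column's rows.
col_rows = [rng.choice(m, size=d, replace=False) for _ in range(N)]
A = np.zeros((m, N))
for j, rows in enumerate(col_rows):
    A[rows, j] = 1.0

# An s-sparse signal and its linear sketch y = A x.
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s) * 10
y = A @ x

# Median estimate per coordinate: most of a coordinate's d rows hit no
# other large coordinate, so the median approximately recovers x_i.
x_hat = np.array([np.median(y[rows]) for rows in col_rows])

print("max recovery error:", np.abs(x_hat - x).max())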
We revisit the asymptotic analysis of probabilistic construction of adjacency matrices of expander graphs proposed in Bah and Tanner [1]. With better bounds we derived a new reduced sample complexity for d, the number of non-zeros per column of these matrices (or equivalently the left-degree of the underlying expander graph). Precisely d=O(logs(N/s)); as opposed to the standard d=O(log(N/s)), where N is the number of columns of the matrix (also the cardinality of set of left vertices of the expander graph) or the ambient dimension of the signals that can be sensed by such matrices. This gives insights into why using such sensing matrices with small d performed well in numerical compressed sensing experiments. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state-of-the-art. We also used our results to compare performance of sparse recovery algorithms where these matrices are used for linear sketching