On the construction of sparse matrices from expander graphs
We revisit the asymptotic analysis of the probabilistic construction of adjacency
matrices of expander graphs proposed in [4]. With better bounds we derive a new,
reduced sample complexity for the number of nonzeros per column of these
matrices, precisely $d = \mathcal{O}\left(\log_s(N/s)\right)$, as opposed to
the standard $d = \mathcal{O}\left(\log(N/s)\right)$. This gives insight into
why small values of $d$ perform well in numerical experiments involving such
matrices. Furthermore, we derive quantitative sampling theorems for our
constructions which show our construction outperforming the existing
state of the art. We also use our results to compare the performance of sparse
recovery algorithms where these matrices are used for linear sketching.
Comment: 28 pages, 4 figures
Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing
We revisit the probabilistic construction of sparse random matrices where
each column has a fixed number of nonzeros whose row indices are drawn
uniformly at random with replacement. These matrices have a one-to-one
correspondence with the adjacency matrices of fixed left degree expander
graphs. We present formulae for the expected cardinality of the set of
neighbors for these graphs, and tail bounds on the probability that
this cardinality will be less than the expected value. Deducible from these
bounds are similar bounds for the expansion of the graph, which is of interest
in many applications. These bounds are derived through a more detailed analysis
of collisions in unions of sets. Key to this analysis is a novel {\em dyadic
splitting} technique. The analysis leads to better order constants, which allow
for quantitative theorems on the existence of lossless expander graphs, and
hence of the sparse random matrices we consider, as well as quantitative
compressed sensing sampling theorems for sparse, non-mean-zero measurement
matrices.
Comment: 17 pages, 12 Postscript figures
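A minimal numerical sketch of the expansion property these bounds concern is given below; the parameters n, N, d, the set size s, and the expansion constant eps = 1/6 are arbitrary illustrative choices, and neighbourhood_size is a hypothetical helper.

```python
import numpy as np

# empirical check of the expansion property |N(S)| >= (1 - eps) * d * |S|
rng = np.random.default_rng(0)
n, N, d = 200, 1000, 8
# column j is connected to d rows drawn uniformly with replacement;
# duplicates collapse, which is exactly the collision effect analysed above
cols = [np.unique(rng.integers(0, n, size=d)) for _ in range(N)]

def neighbourhood_size(S):
    """|N(S)|: cardinality of the union of the row neighbourhoods over S."""
    return len(set().union(*(set(cols[j]) for j in S)))

s, eps = 20, 1/6
sizes = [neighbourhood_size(rng.choice(N, size=s, replace=False))
         for _ in range(200)]
print(min(sizes), (1 - eps) * d * s)   # expansion holds when min exceeds RHS
```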
Bounds of restricted isometry constants in extreme asymptotics: formulae for Gaussian matrices
Restricted Isometry Constants (RICs) provide a measure of how far from an
isometry a matrix can be when acting on sparse vectors. This, and related
quantities, provide a mechanism by which standard eigen-analysis can be applied
to topics relying on sparsity. RIC bounds have been presented for a variety of
random matrices and ranges of matrix dimensions and sparsity. We provide
explicit formulae for RIC bounds of $n \times N$ Gaussian matrices with
sparsity $k$ in three settings: a) $n/N$ fixed and $k/n$ approaching zero,
b) $k/n$ fixed and $n/N$ approaching zero, and c) $n/N$ approaching zero with
$k/n$ decaying inverse-logarithmically in $N/n$. In these three settings the
RICs a) decay to zero, b) become unbounded (or approach inherent bounds), and
c) approach a nonzero constant. Implications of these results for RIC-based
analysis of compressed sensing algorithms are presented.
Comment: 40 pages, 5 figures
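Although the paper's bounds are analytic, the quantity they control can be probed empirically; the sketch below (hypothetical helper ric_lower_bound, arbitrary problem sizes) samples random supports, which yields only a lower bound on the true RIC.

```python
import numpy as np

def ric_lower_bound(n, N, k, trials=200, seed=None):
    """Monte-Carlo lower bound on the restricted isometry constant of an
    n x N Gaussian matrix with entries N(0, 1/n): over random k-column
    submatrices, record the worst deviation of the squared singular
    values from one. Sampling supports can only under-estimate the RIC."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    delta = 0.0
    for _ in range(trials):
        S = rng.choice(N, size=k, replace=False)
        sv = np.linalg.svd(A[:, S], compute_uv=False)
        delta = max(delta, sv[0]**2 - 1, 1 - sv[-1]**2)
    return delta

print(ric_lower_bound(n=100, N=400, k=10, seed=0))
```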
Counting faces of randomly-projected polytopes when the projection radically lowers dimension
This paper develops asymptotic methods to count faces of random
high-dimensional polytopes. Beyond its intrinsic interest, our conclusions have
surprising implications - in statistics, probability, information theory, and
signal processing - with potential impacts in practical subjects like medical
imaging and digital communications. Three such implications concern: convex
hulls of Gaussian point clouds, signal recovery from random projections, and
how many gross errors can be efficiently corrected from Gaussian error
correcting codes.
Comment: 56 pages
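One implication mentioned above, signal recovery from random projections, can be illustrated directly: the face counts determine the phase transition for when $\ell_1$-minimization recovers a sparse vector from Gaussian projections. The sketch below casts basis pursuit as a linear program; the problem sizes are arbitrary choices placed inside the expected recovery region.

```python
import numpy as np
from scipy.optimize import linprog

# l1-minimisation (basis pursuit): min ||x||_1 s.t. Ax = b, cast as an LP
# with variables z = [x, u]: minimise sum(u) subject to Ax = b, -u <= x <= u
rng = np.random.default_rng(0)
N, n, k = 200, 100, 20                    # sizes inside the recovery region
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0

c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([A, np.zeros((n, N))])
I = np.eye(N)
A_ub = np.vstack([np.hstack([I, -I]),     #  x - u <= 0
                  np.hstack([-I, -I])])   # -x - u <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * N + [(0, None)] * N)
print(np.linalg.norm(res.x[:N] - x0))     # ~0 inside the phase transition
```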
Expander $\ell_0$-Decoding
We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for
solving a large underdetermined linear system of equations
$y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at
most $k < m$ nonzero entries and that $A$ is the adjacency matrix of an
unbalanced left $d$-regular expander graph. The matrices in this class are
sparse and allow a highly efficient implementation. A number of algorithms
have been designed to work exclusively in this setting, comprising the branch
of combinatorial compressed sensing (CCS).
Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise
$\|y - A\hat{x}\|_0$ by successfully combining two desirable features of
previous CCS algorithms: the information-preserving strategy of ER, and the
parallel updating mechanism of SMP. We are able to link these elements and
guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that
the signal $x$ is dissociated, meaning that all of the subset sums of the
entries of $x$ over its support are pairwise different. However, we observe
empirically that the signal need not be exactly dissociated in practice.
Moreover, we observe Serial-$\ell_0$ and Parallel-$\ell_0$ to be able to solve
large-scale problems with a larger fraction of nonzeros than other algorithms
when the number of measurements is substantially less than the signal length;
in particular, they are able to reliably solve for a $k$-sparse vector
$x \in \mathbb{R}^n$ from $m$ expander measurements with $n/m = 10^3$ and
$k/m$ up to four times greater than what is achievable by
$\ell_1$-regularization from dense Gaussian measurements. Additionally,
Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to solve problems of large
size in substantially less time than other algorithms for compressed sensing.
In particular, Parallel-$\ell_0$ is structured to take advantage of massively
parallel architectures.
Comment: 14 pages, 10 figures
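The combinatorial update at the heart of both algorithms can be sketched as follows. This is a deliberate simplification under the stated assumptions (exact arithmetic and an effectively dissociated signal): each signal entry inspects the residual on its neighbouring rows and absorbs a nonzero value on which a strict majority agree. The helper name l0_decode_sketch is illustrative, and the published Serial-$\ell_0$/Parallel-$\ell_0$ include scheduling rules and safeguards beyond this sketch.

```python
import numpy as np
from collections import Counter

def l0_decode_sketch(cols, b, N, iters=50):
    """Simplified CCS-style decoder: entry x_j absorbs a nonzero residual
    value that occurs on a strict majority of its neighbouring rows, with
    an immediate (serial) residual update after each absorption."""
    x = np.zeros(N)
    r = b.astype(float).copy()
    for _ in range(iters):
        updated = False
        for j in range(N):
            omega, count = Counter(r[cols[j]]).most_common(1)[0]
            if omega != 0 and count > len(cols[j]) / 2:
                x[j] += omega            # absorb the agreed residual value
                r[cols[j]] -= omega      # serial residual update
                updated = True
        if not updated:
            break
    return x, r

# demo on an exactly solvable instance with integer-valued nonzeros
rng = np.random.default_rng(0)
n, N, d, k = 200, 1000, 8, 10
cols = [np.unique(rng.integers(0, n, size=d)) for _ in range(N)]
x0 = np.zeros(N)
supp = rng.choice(N, size=k, replace=False)
x0[supp] = rng.integers(1, 1000, size=k)
b = np.zeros(n)
for j in supp:
    b[cols[j]] += x0[j]                  # y = A x0 for the 0/1 expander matrix
x_hat, r = l0_decode_sketch(cols, b, N)
print(np.array_equal(x_hat, x0), np.abs(r).sum())
```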
Performance Comparisons of Greedy Algorithms in Compressed Sensing
Compressed sensing has motivated the development of numerous sparse approximation algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, greedy sparse approximation algorithms have been observed both to recover the sparsest solution for problem sizes similar to those handled by other algorithms and to be computationally efficient; however, little theory is known about their average-case behavior. We conduct a large-scale empirical investigation into the behavior of three state-of-the-art greedy algorithms: NIHT, HTP, and CSMPSP. The investigation considers a variety of random classes of linear systems. The region of problem sizes in which each algorithm is able to reliably recover the sparsest solution is accurately determined, and throughout this region additional performance characteristics are presented. Contrasting the recovery regions and average computational time for each algorithm, we present algorithm selection maps which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Though no one algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of a large recovery region, low absolute recovery time, and robustness of these properties to additive noise, across a variety of problem classes. The algorithm selection maps presented here are the first of their kind for compressed sensing.
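Of the three algorithms compared, NIHT is the simplest to state; a minimal sketch is given below, with arbitrary problem sizes and without the step-size line-search safeguard of the full algorithm.

```python
import numpy as np

def niht(A, b, k, iters=200):
    """Normalised Iterative Hard Thresholding: a gradient step with an
    adaptively chosen step size, followed by hard thresholding to the k
    largest-magnitude entries. (The full algorithm adds a line-search
    safeguard on the step size, omitted here.)"""
    n, N = A.shape
    x = np.zeros(N)
    supp = np.argsort(np.abs(A.T @ b))[-k:]          # initial support estimate
    for _ in range(iters):
        g = A.T @ (b - A @ x)                        # gradient of 0.5*||b - Ax||^2
        gs, As = g[supp], A[:, supp]
        mu = (gs @ gs) / max((As @ gs) @ (As @ gs), 1e-30)  # step size on supp
        x = x + mu * g
        supp = np.argsort(np.abs(x))[-k:]            # keep k largest entries
        mask = np.zeros(N, dtype=bool)
        mask[supp] = True
        x[~mask] = 0.0
    return x

rng = np.random.default_rng(0)
n, N, k = 100, 400, 10
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = niht(A, A @ x0, k)
print(np.linalg.norm(x_hat - x0))                    # ~0 when recovery succeeds
```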
…