Sparse Recovery of Positive Signals with Minimal Expansion
We investigate the sparse recovery problem of reconstructing a
high-dimensional non-negative sparse vector from lower-dimensional linear
measurements. While much work has focused on dense measurement matrices, sparse
measurement schemes are crucial in applications, such as DNA microarrays and
sensor networks, where dense measurements are not practically feasible. One
possible construction uses the adjacency matrices of expander graphs, which
often leads to recovery algorithms much more efficient than l_1
minimization. However, to date, constructions based on expanders have required
very high expansion coefficients, which can potentially make the construction of
such graphs difficult and the size of the recoverable sets small.
In this paper, we construct sparse measurement matrices for the recovery of
non-negative vectors, using perturbations of the adjacency matrix of an
expander graph with a much smaller expansion coefficient. We present a necessary
and sufficient condition for l_1 optimization to successfully recover the
unknown vector and obtain expressions for the recovery threshold. For certain
classes of measurement matrices, this necessary and sufficient condition is
further equivalent to the existence of a "unique" vector in the constraint set,
which opens the door to alternative algorithms to l_1 minimization. We
further show that the minimal expansion we use is necessary for any graph for
which sparse recovery is possible, and that therefore our construction is tight.
We also present a novel recovery algorithm that exploits expansion and is
much faster than l_1 optimization. Finally, we demonstrate through
theoretical bounds, as well as simulation, that our method is robust to noise
and approximate sparsity.
Comment: 25 pages, submitted for publication
Sparse Recovery of Nonnegative Signals With Minimal Expansion
We investigate the problem of reconstructing a high-dimensional nonnegative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes can be more efficient both with respect to signal sensing and reconstruction complexity. Known constructions use the adjacency matrices of expander graphs, which often lead to recovery algorithms that are much more efficient than l_1 minimization. However, prior constructions of sparse measurement matrices rely on expander graphs with very high expansion coefficients, which make the construction of such graphs difficult and the size of the recoverable sets very small. In this paper, we introduce sparse measurement matrices for the recovery of nonnegative vectors, using perturbations of the adjacency matrices of expander graphs requiring much smaller expansion coefficients, hereafter referred to as minimal expanders. We show that when l_1 minimization is used as the reconstruction method, these constructions allow the recovery of signals that are almost three orders of magnitude larger than in the existing theoretical results for sparse measurement matrices. We provide for the first time tight upper bounds for the so-called weak and strong recovery thresholds when l_1 minimization is used. We further show that the success of l_1 optimization is equivalent to the existence of a "unique" vector in the set of solutions to the linear equations, which enables alternative algorithms for l_1 minimization. We further show that the defined minimal expansion property is necessary for all measurement matrices for compressive sensing (even when the non-negativity assumption is removed), therefore implying that our construction is tight. We finally present a novel recovery algorithm that exploits expansion and is much more computationally efficient compared to l_1 minimization.
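The l_1 reconstruction step discussed above has a convenient structure for nonnegative signals: since ||x||_1 = 1^T x when x >= 0, l_1 minimization reduces to an ordinary linear program. The following sketch illustrates this reduction; the function name and the dense Gaussian test matrix are illustrative choices (the paper's focus is on sparse, expander-based matrices), not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def recover_nonnegative(A, y):
    """Recover a sparse nonnegative x from measurements y = A x.

    For x >= 0, ||x||_1 = 1^T x, so l_1 minimization is the LP:
        min 1^T x   subject to   A x = y,  x >= 0.
    """
    _, N = A.shape
    res = linprog(c=np.ones(N), A_eq=A, b_eq=y, bounds=(0, None))
    if not res.success:
        raise RuntimeError("LP solver failed: " + res.message)
    return res.x

# Toy demo: recover a 3-sparse nonnegative vector from 20 of 40 coordinates.
rng = np.random.default_rng(0)
n, N, k = 20, 40, 3
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x0
x_hat = recover_nonnegative(A, y)
```

With k well below the recovery threshold, the LP solution coincides with the true sparse vector; the LP guarantee itself is only that the recovered vector is feasible with minimal l_1 norm.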
The dynamics of message passing on dense graphs, with applications to compressed sensing
Approximate message passing algorithms proved to be extremely effective in
reconstructing sparse signals from a small number of incoherent linear
measurements. Extensive numerical experiments further showed that their
dynamics is accurately tracked by a simple one-dimensional iteration termed
state evolution. In this paper we provide the first rigorous foundation to
state evolution. We prove that indeed it holds asymptotically in the large
system limit for sensing matrices with independent and identically distributed
Gaussian entries.
While our focus is on message passing algorithms for compressed sensing, the
analysis extends beyond this setting, to a general class of algorithms on dense
graphs. In this context, state evolution plays the role that density evolution
has for sparse graphs.
The proof technique is fundamentally different from the standard approach to
density evolution, in that it copes with the large number of short loops in the
underlying factor graph. It relies instead on a conditioning technique recently
developed by Erwin Bolthausen in the context of spin glass theory.
Comment: 41 pages
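The message passing iteration whose dynamics state evolution tracks can be sketched as follows. This is a generic approximate message passing recursion with a soft-threshold denoiser; the threshold rule `alpha * ||z|| / sqrt(n)` is a common heuristic chosen here for illustration, not the paper's exact prescription.

```python
import numpy as np

def soft_threshold(v, theta):
    """Componentwise soft-thresholding denoiser eta(v; theta)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(y, A, iters=50, alpha=2.0):
    """Approximate message passing for y = A x with sparse x.

    The Onsager term (z / n) * ||x||_0 is what distinguishes AMP from
    plain iterative thresholding; its effect is exactly what state
    evolution tracks in the large-system limit.
    """
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)  # heuristic threshold
        x_new = soft_threshold(A.T @ z + x, theta)
        # Residual update with Onsager correction term.
        z = y - A @ x_new + (z / n) * np.count_nonzero(x_new)
        x = x_new
    return x

# Toy demo with an i.i.d. Gaussian sensing matrix, as in the paper's setting.
rng = np.random.default_rng(1)
n, N, k = 100, 200, 10
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0
x_hat = amp(y, A)
```

Dropping the Onsager term turns this into iterative soft thresholding, whose per-iteration error is no longer described by the one-dimensional state evolution recursion.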
Matrix Completion Problems for the Positiveness and Contraction Through Graphs
In this work, we study contractive and positive real matrix completion problems, motivated in part by studies of sparse (or dense) matrices for weighted sparse recovery problems and of rating matrices with given rating density in recommender systems. Matrix completion problems also have many applications in probability and statistics, chemistry, numerical analysis (e.g., optimization), electrical engineering, and geophysics. In this paper we seek to connect the contractive and positive completion properties to a graph-theoretic property. We then determine whether the graphs of real symmetric matrices having loops at every vertex have the contractive completion property if and only if the graph is chordal. If this is not the case, we characterize all graphs of real symmetric matrices having the contractive completion property.
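The chordality condition in the abstract echoes the classical result that a partial symmetric matrix whose specified entries follow a chordal pattern always admits a positive semidefinite completion. A minimal numeric illustration, assuming the standard max-determinant completion rule for a path graph (the unspecified entry is filled with the product of the two specified off-diagonal entries):

```python
import numpy as np

# Partial symmetric matrix whose specified entries follow the path graph
# 1 - 2 - 3, a chordal pattern; the (1,3)/(3,1) entry is unspecified.
a, b = 0.9, 0.9
M = np.array([[1.0,   a,   a * b],  # filling (1,3) with a*b is the
              [a,     1.0, b    ],  # max-determinant PSD completion
              [a * b, b,   1.0  ]]) # for this chordal pattern
eigs = np.linalg.eigvalsh(M)
```

All eigenvalues of the completed matrix are nonnegative, confirming that this choice yields a positive semidefinite completion for these entries.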