Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and l_1-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP) and show that the resulting sparsity
bounds for robust sparse recovery are better by a constant factor than those
obtained using the restricted isometry property (RIP).
Next we derive universal lower bounds on the number of measurements that any
binary matrix needs in order to satisfy the weaker sufficient condition based
on the RNSP, and show that bipartite graphs of girth six are optimal.
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
the CPU time of basis pursuit with binary matrices is hundreds of times faster
than with Gaussian matrices and the storage requirements are less. Therefore it
is suggested that binary matrices are a viable alternative to Gaussian matrices
for compressed sensing using basis pursuit.
Comment: 28 pages, 3 figures, 5 tables
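The basis pursuit formulation compared above reduces to a linear program via the standard variable split x = u − v with u, v ≥ 0. A minimal sketch (not the paper's array-code or Euler-square construction — the binary matrix here is just random, and `basis_pursuit` is a name chosen for this illustration):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y by the standard LP split
    x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # encodes A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Tiny demo: a random 0/1 measurement matrix and a 1-sparse signal.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(8, 16)).astype(float)
x_true = np.zeros(16)
x_true[3] = 2.0
y = A @ x_true
x_hat = basis_pursuit(A, y)
```

Because a 0/1 matrix needs no floating-point storage and admits fast arithmetic, solvers can exploit its structure; this sketch, however, treats it as a dense matrix for simplicity.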
Efficient and Robust Compressed Sensing Using Optimized Expander Graphs
Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper, we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally, we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
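A toy sketch of the style of "very simple iteration" such schemes use: each step looks for a coordinate on which a majority of its measurements disagree with the current estimate by the same gap, and applies that correction. This is an illustrative gap-voting rule under assumed conditions, not the paper's optimized priority-queue algorithm, and the block-structured matrix in the demo is a degenerate stand-in for a true expander:

```python
import numpy as np
from collections import Counter

def expander_decode(A, y, max_iter=100):
    """Iteratively correct one coordinate at a time: if more than half of
    coordinate j's measurements are off by the same nonzero gap g, apply
    x[j] += g. Terminates when the residual vanishes or no vote wins."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(max_iter):
        r = y - A @ x                      # measurement residual
        if np.allclose(r, 0):
            return x
        updated = False
        for j in range(n):
            rows = np.flatnonzero(A[:, j])  # measurements touching j
            gaps = Counter(g for g in r[rows] if g != 0)
            if gaps:
                g, cnt = gaps.most_common(1)[0]
                if cnt > len(rows) / 2:     # majority vote among neighbors
                    x[j] += g
                    updated = True
                    break
        if not updated:
            break
    return x

# Demo: each coordinate observed by 3 disjoint measurements, so every
# vote is unanimous and recovery finishes in k iterations.
A = np.kron(np.eye(4), np.ones((3, 1)))
x_true = np.array([0.0, 5.0, 0.0, -2.0])
y = A @ x_true
x_hat = expander_decode(A, y)
```

With genuine expansion beyond 3/4, the abstract's point is that a count of such iterations stays O(k) regardless of n; the scan over all n coordinates here is where the real algorithm substitutes a priority queue.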
Sparse Recovery of Positive Signals with Minimal Expansion
We investigate the sparse recovery problem of reconstructing a
high-dimensional non-negative sparse vector from lower dimensional linear
measurements. While much work has focused on dense measurement matrices, sparse
measurement schemes are crucial in applications, such as DNA microarrays and
sensor networks, where dense measurements are not practically feasible. One
possible construction uses the adjacency matrices of expander graphs, which
often leads to recovery algorithms much more efficient than l_1
minimization. However, to date, constructions based on expanders have required
very high expansion coefficients which can potentially make the construction of
such graphs difficult and the size of the recoverable sets small.
In this paper, we construct sparse measurement matrices for the recovery of
non-negative vectors, using perturbations of the adjacency matrix of an
expander graph with much smaller expansion coefficient. We present a necessary
and sufficient condition for l_1 optimization to successfully recover the
unknown vector and obtain expressions for the recovery threshold. For certain
classes of measurement matrices, this necessary and sufficient condition is
further equivalent to the existence of a "unique" vector in the constraint set,
which opens the door to algorithms alternative to l_1 minimization. We
further show that the minimal expansion we use is necessary for any graph for
which sparse recovery is possible and that therefore our construction is tight.
We finally present a novel recovery algorithm that exploits expansion and is
much faster than l_1 optimization. We demonstrate through theoretical bounds,
as well as simulation, that our method is robust to noise and approximate
sparsity.
Comment: 25 pages, submitted for publication
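For non-negative signals, the sign constraint can be built directly into the program, so no variable splitting is needed: on the non-negative orthant the l_1 norm is simply sum(x). A minimal sketch (a generic random 0/1 matrix, not the paper's perturbed-expander construction; `nonneg_recover` is a name chosen here). When the constraint set {x >= 0 : Ax = y} is a singleton, as in the abstract's "unique vector" condition, any feasible point is the signal and the objective is irrelevant:

```python
import numpy as np
from scipy.optimize import linprog

def nonneg_recover(A, y):
    """min sum(x) subject to Ax = y, x >= 0; equals l_1 minimization
    restricted to the non-negative orthant."""
    n = A.shape[1]
    res = linprog(np.ones(n), A_eq=A, b_eq=y, bounds=[(0, None)] * n)
    return res.x

# Demo: recover a 2-sparse non-negative vector.
rng = np.random.default_rng(2)
A = rng.integers(0, 2, size=(10, 20)).astype(float)
x_true = np.zeros(20)
x_true[[4, 11]] = [1.5, 3.0]
y = A @ x_true
x_hat = nonneg_recover(A, y)
```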
Further Results on Performance Analysis for Compressive Sensing Using Expander Graphs
Compressive sensing is an emerging technology which can recover a sparse signal vector of dimension n via a much smaller number of measurements than n. In this paper, we give further results on the performance bounds of compressive sensing. We consider the newly proposed expander-graph-based compressive sensing schemes and show that, similar to the l_1-minimization case, we can exactly recover any k-sparse signal using only O(k log(n)) measurements, where k is the number of nonzero elements. The number of computational iterations is of order O(k log(n)), while each iteration involves very simple computational steps.
Measurement Bounds for Sparse Signal Ensembles via Graphical Models
In compressive sensing, a small collection of linear projections of a sparse
signal contains enough information to permit signal recovery. Distributed
compressive sensing (DCS) extends this framework by defining ensemble sparsity
models, allowing a correlated ensemble of sparse signals to be jointly
recovered from a collection of separately acquired compressive measurements. In
this paper, we introduce a framework for modeling sparse signal ensembles that
quantifies the intra- and inter-signal dependencies within and among the
signals. This framework is based on a novel bipartite graph representation that
links the sparse signal coefficients with the measurements obtained for each
signal. Using our framework, we provide fundamental bounds on the number of
noiseless measurements that each sensor must collect to ensure that the signals
are jointly recoverable.
Comment: 11 pages, 2 figures
Efficient Compressive Sensing with Deterministic Guarantees Using Expander Graphs
Compressive sensing is an emerging technology which can recover a sparse signal vector of dimension n via a much smaller number of measurements than n. However, the existing compressive sensing methods may still suffer from relatively high recovery complexity, such as O(n^3), or can only work efficiently when the signal is super sparse, sometimes without deterministic performance guarantees. In this paper, we propose a compressive sensing scheme with deterministic performance guarantees using expander-graph-based measurement matrices and show that the signal recovery can be achieved with complexity O(n) even if the number of nonzero elements k grows linearly with n. We also investigate compressive sensing for approximately sparse signals using this new method. Moreover, explicit constructions of the considered expander graphs exist. Simulation results are given to show the performance and complexity of the new method.
Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing
We revisit the probabilistic construction of sparse random matrices where
each column has a fixed number of nonzeros whose row indices are drawn
uniformly at random with replacement. These matrices have a one-to-one
correspondence with the adjacency matrices of fixed left degree expander
graphs. We present formulae for the expected cardinality of the set of
neighbors for these graphs, and present tail bounds on the probability that
this cardinality will be less than the expected value. Deducible from these
bounds are similar bounds for the expansion of the graph which is of interest
in many applications. These bounds are derived through a more detailed analysis
of collisions in unions of sets. Key to this analysis is a novel "dyadic
splitting" technique. The analysis led to the derivation of better order
constants that allow for quantitative theorems on the existence of lossless
expander graphs, and hence of the sparse random matrices we consider, as well
as quantitative compressed sensing sampling theorems when using sparse
non-mean-zero measurement matrices.
Comment: 17 pages, 12 Postscript figures
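The construction this abstract revisits is easy to state in code: each column independently draws a fixed number d of row indices uniformly with replacement, so collisions within a column can leave it with fewer than d ones. A small sketch of the construction and of the neighborhood cardinality the tail bounds concern (parameter values are arbitrary):

```python
import numpy as np

def sparse_random_matrix(m, n, d, rng):
    """0/1 matrix whose columns each draw d row indices uniformly
    *with replacement*; duplicate draws collapse, so a column may
    carry fewer than d ones."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.integers(0, m, size=d), j] = 1.0
    return A

rng = np.random.default_rng(3)
A = sparse_random_matrix(100, 300, 8, rng)

# Viewing A as the adjacency matrix of a left-regular bipartite graph,
# the neighborhood of a set S of left (column) nodes is the set of rows
# touched by S; collisions make it smaller than the maximum |S| * d.
S = [0, 1, 2, 3]
neighborhood = np.count_nonzero(A[:, S].sum(axis=1))
```

Comparing `neighborhood` against `len(S) * d` over random draws of S is the empirical counterpart of the expansion bounds the paper derives.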