Efficient and Robust Compressed Sensing using High-Quality Expander Graphs
Expander graphs have been recently proposed to construct efficient compressed
sensing algorithms. In particular, it has been shown that any $n$-dimensional
vector that is $k$-sparse (with $k \ll n$) can be fully recovered using
$O(k \log \frac{n}{k})$ measurements and only $O(k \log \frac{n}{k})$ simple
recovery iterations. In this paper we improve upon this result by considering
expander graphs with expansion coefficient beyond $3/4$ and show that, with the
same number of measurements, only $O(k)$ recovery iterations are required,
which is a significant improvement when $n$ is large. In fact, full recovery
can be accomplished by at most $2k$ very simple iterations. The number of
iterations can be made arbitrarily close to $k$, and the recovery algorithm can
be implemented very efficiently using a simple binary search tree. We also show
that by tolerating a small penalty on the number of measurements, and not on
the number of recovery iterations, one can use the efficient construction of a
family of expander graphs to come up with explicit measurement matrices for
this method. We compare our result with other recently developed
expander-graph-based methods and argue that it compares favorably both in terms
of the number of required measurements and in terms of the recovery time
complexity. Finally, we show how our analysis extends to give a robust
algorithm that finds the position and sign of the $k$ significant elements of
an almost $k$-sparse signal and then, using very simple optimization
techniques, finds in sublinear time a $k$-sparse signal which approximates the
original signal with very high precision.
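The iterative recovery idea can be sketched in a few lines: each signal coordinate inspects the residuals of its neighboring measurements and absorbs a value on which a strict majority of them agree. The following is an illustrative toy version of this majority-vote iteration, not the paper's algorithm (which additionally exploits expansion beyond 3/4 and a binary search tree to reach its iteration bound); the function `er_decode` and the tiny graph are ours.

```python
import numpy as np

def er_decode(A, y, max_iter=100):
    """Toy majority-vote decoder over a bipartite graph with 0/1 adjacency matrix A.

    Repeatedly find a signal coordinate whose neighboring residuals agree on a
    common nonzero value, and absorb that value into the estimate.
    """
    m, n = A.shape
    x_hat = np.zeros(n)
    r = y.astype(float).copy()                      # residual sketch y - A @ x_hat
    for _ in range(max_iter):
        if not r.any():
            break                                   # exact recovery: residual is zero
        progress = False
        for j in range(n):
            rows = np.flatnonzero(A[:, j])          # measurement neighbors of coordinate j
            vals, counts = np.unique(r[rows], return_counts=True)
            v, c = vals[counts.argmax()], counts.max()
            if v != 0 and 2 * c > len(rows):        # strict majority agrees on nonzero v
                x_hat[j] += v
                r[rows] -= v
                progress = True
        if not progress:
            break                                   # stalled: graph not expanding enough
    return x_hat

# Tiny left 2-regular graph: 3 measurements, 3 signal coordinates.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
x = np.array([5.0, 0.0, 0.0])                       # 1-sparse signal
x_hat = er_decode(A, A @ x)
```

On this tiny instance the decoder recovers a 1-sparse signal in a single pass; denser supports require the genuine expansion properties the paper analyzes.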
Performance bounds for expander-based compressed sensing in Poisson noise
This paper provides performance bounds for compressed sensing in the presence
of Poisson noise using expander graphs. The Poisson noise model is appropriate
for a variety of applications, including low-light imaging and digital
streaming, where the signal-independent and/or bounded noise models used in the
compressed sensing literature are no longer applicable. In this paper, we
develop a novel sensing paradigm based on expander graphs and propose a MAP
algorithm for recovering sparse or compressible signals from Poisson
observations. The geometry of the expander graphs and the positivity of the
corresponding sensing matrices play a crucial role in establishing the bounds
on the signal reconstruction error of the proposed algorithm. We support our
results with experimental demonstrations of reconstructing average packet
arrival rates and instantaneous packet counts at a router in a communication
network, where the arrivals of packets in each flow follow a Poisson process.
Comment: revised version; accepted to IEEE Transactions on Signal Processing
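The sensing model can be made concrete with a small sketch (our own toy setup, not the paper's MAP algorithm): generate Poisson observations of a nonnegative sketch and evaluate the Poisson negative log-likelihood that a MAP recovery would minimize. Note how the 0/1 sensing matrix keeps every Poisson rate nonnegative, mirroring the positivity property the paper's bounds rely on.

```python
import numpy as np

def poisson_nll(x_hat, A, y, eps=1e-12):
    """Poisson negative log-likelihood of observations y given rates A @ x_hat,
    up to the x-independent log(y!) term."""
    mu = A @ x_hat                                  # nonnegative rates: A, x_hat >= 0
    return float(np.sum(mu - y * np.log(mu + eps)))

rng = np.random.default_rng(1)
x = np.zeros(8)
x[[1, 5]] = [20.0, 30.0]                            # nonnegative 2-sparse signal
A = (rng.random((4, 8)) < 0.5).astype(float)        # toy 0/1 sensing matrix
y = rng.poisson(A @ x)                              # signal-dependent Poisson observations
```

In the noiseless limit (observing `A @ x` exactly) the true signal is a global minimizer of this objective, which is what a MAP-style recovery exploits.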
A robust parallel algorithm for combinatorial compressed sensing
In previous work two of the authors have shown that a vector
$x \in \mathbb{R}^n$ with at most $k < n$ nonzeros can be recovered from an
expander sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A) \log k)$ operations via the
Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the
number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we
present the Robust-$\ell_0$ decoding algorithm, which robustifies
Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This
robustness is achieved by approximating the asymptotic posterior distribution
of values in the sketch given its corrupted measurements. We provide analytic
expressions that approximate these posteriors under the assumptions that the
nonzero entries in the signal and the noise are drawn from continuous
distributions. Numerical experiments presented show that Robust-$\ell_0$ is
superior to existing greedy and combinatorial compressed sensing algorithms in
the presence of small to moderate signal-to-noise ratios in the setting of
Gaussian signals and Gaussian additive noise.
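The posterior-approximation idea can be illustrated with a deliberately simplified scalar model (ours, not the paper's asymptotic posteriors): each sketch entry either carries Gaussian signal mass plus noise or noise alone, and Bayes' rule gives the probability that a nonzero is present.

```python
import math

def posterior_nonzero(y, p, sigma_x, sigma_e):
    """P(entry carries a nonzero | observed value y) in a toy two-component model:
    with prior p the entry holds a N(0, sigma_x^2) signal value plus noise,
    otherwise it is pure N(0, sigma_e^2) noise."""
    def gauss(t, var):
        # Density of N(0, var) at t.
        return math.exp(-t * t / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    num = p * gauss(y, sigma_x**2 + sigma_e**2)     # signal + noise component
    den = num + (1.0 - p) * gauss(y, sigma_e**2)    # plus noise-only component
    return num / den
```

Small observations are attributed to noise; large ones are confidently flagged as signal, which is the soft information a robust decoder can vote with.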
Expander $\ell_0$-Decoding
We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for
solving a large underdetermined linear system of equations
$y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at
most $k < m$ nonzero entries and that $A$ is the adjacency matrix of an
unbalanced left $d$-regular expander graph. The matrices in this class are
sparse and allow a highly efficient implementation. A number of algorithms have
been designed to work exclusively under this setting, composing the branch of
combinatorial compressed-sensing (CCS).
Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise
$\|y - A\hat{x}\|_0$ by successfully combining two desirable features of
previous CCS algorithms: the information-preserving strategy of ER, and the
parallel updating mechanism of SMP. We are able to link these elements and
guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that
the signal is dissociated, meaning that all of the subset sums of the support
of $x$ are pairwise different. However, we observe empirically that the signal
need not be exactly dissociated in practice. Moreover, we observe
Serial-$\ell_0$ and Parallel-$\ell_0$ to be able to solve large-scale problems
with a larger fraction of nonzeros than other algorithms when the number of
measurements is substantially less than the signal length; in particular, they
are able to reliably solve for a $k$-sparse vector $x \in \mathbb{R}^n$ from
$m$ expander measurements with $n/m = 10^3$ and $k/m$ up to four times greater
than what is achievable by $\ell_1$-regularization from dense Gaussian
measurements. Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed
to be able to solve large problem sizes in substantially less time than other
algorithms for compressed sensing. In particular, Parallel-$\ell_0$ is
structured to take advantage of massively parallel architectures.
Comment: 14 pages, 10 figures
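The dissociation assumption above is easy to check by brute force for small supports. This helper is an illustrative sketch of ours (the guarantee only needs the property for the $k$ nonzeros of $x$):

```python
from itertools import combinations

def is_dissociated(values):
    """True iff all subset sums of `values` are pairwise different
    (the empty subset contributes the sum 0)."""
    seen = set()
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            s = sum(subset)
            if s in seen:                 # two distinct subsets share a sum
                return False
            seen.add(s)
    return True
```

For example, `[1, 2, 4]` is dissociated, while `[1, 2, 3]` is not since `1 + 2 = 3`; nonzeros drawn from a continuous distribution are dissociated with probability one, which is why the exact condition rarely binds in practice.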
Sparse Recovery of Positive Signals with Minimal Expansion
We investigate the sparse recovery problem of reconstructing a
high-dimensional non-negative sparse vector from lower dimensional linear
measurements. While much work has focused on dense measurement matrices, sparse
measurement schemes are crucial in applications, such as DNA microarrays and
sensor networks, where dense measurements are not practically feasible. One
possible construction uses the adjacency matrices of expander graphs, which
often leads to recovery algorithms much more efficient than $\ell_1$
minimization. However, to date, constructions based on expanders have required
very high expansion coefficients which can potentially make the construction of
such graphs difficult and the size of the recoverable sets small.
In this paper, we construct sparse measurement matrices for the recovery of
non-negative vectors, using perturbations of the adjacency matrix of an
expander graph with a much smaller expansion coefficient. We present a
necessary and sufficient condition for $\ell_1$ optimization to successfully
recover the
unknown vector and obtain expressions for the recovery threshold. For certain
classes of measurement matrices, this necessary and sufficient condition is
further equivalent to the existence of a "unique" vector in the constraint set,
which opens the door to alternative algorithms to $\ell_1$ minimization. We
further show that the minimal expansion we use is necessary for any graph for
which sparse recovery is possible, and that therefore our construction is
tight. We also present a novel recovery algorithm that exploits expansion and
is much faster than $\ell_1$ optimization. Finally, we demonstrate through
theoretical bounds, as well as simulation, that our method is robust to noise
and approximate sparsity.
Comment: 25 pages, submitted for publication
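Non-negative $\ell_1$ recovery is a linear program, since $\|x\|_1 = \mathbf{1}^T x$ when $x \ge 0$. The following minimal sketch is ours (using `scipy.optimize.linprog`, not any code from the paper), on a toy instance where the constraint set contains a unique point, the situation the equivalence above describes:

```python
import numpy as np
from scipy.optimize import linprog

# Toy sparse measurement matrix and a 1-sparse nonnegative signal.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([2.0, 0.0, 0.0])
y = A @ x_true

# min 1^T x  subject to  A x = y,  x >= 0
res = linprog(c=np.ones(3), A_eq=A, b_eq=y, bounds=[(0, None)] * 3)
```

Here the second equation forces `x[1] = x[2] = 0` under non-negativity, so `{x >= 0 : A x = y}` is the single point `x_true` and the LP must return it, regardless of the objective.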
Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing
We revisit the probabilistic construction of sparse random matrices where
each column has a fixed number of nonzeros whose row indices are drawn
uniformly at random with replacement. These matrices have a one-to-one
correspondence with the adjacency matrices of fixed left degree expander
graphs. We present formulae for the expected cardinality of the set of
neighbors for these graphs, and present tail bounds on the probability that
this cardinality will be less than the expected value. Deducible from these
bounds are similar bounds for the expansion of the graph which is of interest
in many applications. These bounds are derived through a more detailed analysis
of collisions in unions of sets. Key to this analysis is a novel {\em dyadic
splitting} technique. The analysis led to the derivation of better order
constants that allow for quantitative theorems on existence of lossless
expander graphs and hence the sparse random matrices we consider and also
quantitative compressed sensing sampling theorems when using sparse non
mean-zero measurement matrices.Comment: 17 pages, 12 Postscript figure
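The probabilistic construction described above is simple to reproduce (a sketch under the stated model; the function name and parameters are ours). Each column independently draws $d$ row indices uniformly with replacement, so collisions can leave a column with fewer than $d$ distinct nonzeros:

```python
import numpy as np

def sparse_random_matrix(m, n, d, rng):
    """0/1 matrix where column j's nonzero rows are d uniform draws
    *with replacement*; collisions mean between 1 and d ones per column."""
    A = np.zeros((m, n), dtype=np.int64)
    for j in range(n):
        A[rng.integers(0, m, size=d), j] = 1
    return A

rng = np.random.default_rng(0)
A = sparse_random_matrix(m=64, n=256, d=8, rng=rng)

# Balls-in-bins heuristic for the expected number of distinct neighbors of a
# set of s columns (the cardinality whose exact formulae and tail bounds the
# paper establishes): d*s draws into m bins.
s, d, m = 4, 8, 64
expected_neighbors = m * (1 - (1 - 1 / m) ** (d * s))
```

The heuristic is the standard with-replacement expectation; the paper's dyadic splitting analysis sharpens this into tail bounds on how far below the expectation the neighborhood size can fall, which is exactly the expansion property.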
- …