Sequential Compressed Sensing
Compressed sensing allows perfect recovery of sparse signals (or signals
sparse in some basis) using only a small number of random measurements.
Existing results in the compressed sensing literature have focused on
characterizing the achievable performance by bounding the number of samples
required for a given level of signal sparsity. However, using these bounds to
minimize the number of samples requires a priori knowledge of the sparsity of
the unknown signal, or the decay structure for near-sparse signals.
Furthermore, there are some popular recovery methods for which no such bounds
are known.
In this paper, we investigate an alternative scenario where observations are
available in sequence. For any recovery method, this means that there is now a
sequence of candidate reconstructions. We propose a method to estimate the
reconstruction error directly from the samples themselves, for every candidate
in this sequence. This estimate is universal in the sense that it is based only
on the measurement ensemble, and not on the recovery method or any assumed
level of sparsity of the unknown signal. With these estimates, one can now stop
observations as soon as there is reasonable certainty of either exact or
sufficiently accurate reconstruction. They also provide a way to obtain
"run-time" guarantees for recovery methods that otherwise lack a-priori
performance bounds.
We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli)
random measurement ensembles, both for exactly sparse and general near-sparse
signals, and with both noisy and noiseless measurements.Comment: to appear in IEEE transactions on Special Topics in Signal Processin
Information Theoretic Limits for Standard and One-Bit Compressed Sensing with Graph-Structured Sparsity
In this paper, we analyze the information theoretic lower bound on the
necessary number of samples needed for recovering a sparse signal under
different compressed sensing settings. We focus on the weighted graph model, a
model-based framework proposed by Hegde et al. (2015), for standard compressed
sensing as well as for one-bit compressed sensing. We study both the noisy and
noiseless regimes. Our analysis is general in the sense that it applies to any
algorithm used to recover the signal. We carefully construct restricted
ensembles for different settings and then apply Fano's inequality to establish
the lower bound on the necessary number of samples. Furthermore, we show that
our bound is tight for one-bit compressed sensing, while for standard
compressed sensing, our bound is tight up to a logarithmic factor of the number
of non-zero entries in the signal.
Info-Greedy sequential adaptive compressed sensing
We present an information-theoretic framework for sequential adaptive
compressed sensing, Info-Greedy Sensing, where measurements are chosen to
maximize the extracted information conditioned on the previous measurements. We
show that the widely used bisection approach is Info-Greedy for a family of
k-sparse signals by connecting compressed sensing and black-box complexity of
sequential query algorithms, and present Info-Greedy algorithms for Gaussian
and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse
Info-Greedy measurements. Numerical examples demonstrate the good performance
of the proposed algorithms using simulated and real data: Info-Greedy Sensing
shows significant improvement over random projection for signals with sparse
and low-rank covariance matrices, and adaptivity brings robustness when there
is a mismatch between the assumed and the true distributions.
Comment: Preliminary results presented at Allerton Conference 2014. To appear
in IEEE Journal of Selected Topics in Signal Processing
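For Gaussian signals, the Info-Greedy measurement has a closed form: the direction maximizing the mutual information extracted by the next measurement, conditioned on the previous ones, is the leading eigenvector of the current posterior covariance. A minimal sketch under that fact (the dimensions, noise level, and number of measurements are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# A Gaussian signal with an approximately low-rank covariance.
B = rng.standard_normal((n, 2))
Sigma = B @ B.T + 0.01 * np.eye(n)
x = rng.multivariate_normal(np.zeros(n), Sigma)

mu = np.zeros(n)   # posterior mean
S = Sigma.copy()   # posterior covariance
sigma2 = 1e-4      # measurement noise variance (assumed)

for _ in range(3):
    # Info-greedy direction: leading eigenvector of the posterior covariance
    # maximizes (1/2) log(1 + a^T S a / sigma^2) over unit-norm a.
    w, V = np.linalg.eigh(S)
    a = V[:, -1]
    y = a @ x + np.sqrt(sigma2) * rng.standard_normal()
    # Standard Gaussian conditioning (rank-one Kalman-style update).
    g = S @ a / (a @ S @ a + sigma2)
    mu = mu + g * (y - a @ mu)
    S = S - np.outer(g, a @ S)

print(np.linalg.norm(x - mu))  # small after measuring the dominant directions
```

Because the covariance is nearly rank two, a few eigenvector measurements capture almost all of the signal's energy, which is the "sparse and low-rank covariance" advantage the abstract reports over random projections.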
Improved Bounds for Universal One-Bit Compressive Sensing
Unlike compressive sensing where the measurement outputs are assumed to be
real-valued and have infinite precision, in "one-bit compressive sensing",
measurements are quantized to one bit, their signs. In this work, we show how
to recover the support of sparse high-dimensional vectors in the one-bit
compressive sensing framework with an asymptotically near-optimal number of
measurements. We also improve the bounds on the number of measurements for
approximately recovering vectors from one-bit compressive sensing measurements.
Our results are universal, namely the same measurement scheme works
simultaneously for all sparse vectors.
Our proof of optimality for support recovery is obtained by showing an
equivalence between the task of support recovery using 1-bit compressive
sensing and a well-studied combinatorial object known as Union Free Families.
Comment: 14 pages
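The one-bit measurement model, together with a simple correlation-based support estimator, can be sketched as follows. The estimator here is a standard proxy for Gaussian ensembles (keep the largest entries of A.T @ y), not the construction analyzed in the paper, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 200, 5, 500

# A k-sparse signal; one-bit measurements lose the scale of x, since
# sign(A @ (c * x)) = sign(A @ x) for any c > 0, so only support and
# direction can be recovered.
x = np.zeros(n)
support = np.sort(rng.choice(n, k, replace=False))
x[support] = 2.0

# One-bit measurements: keep only the sign of each Gaussian measurement.
A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# Correlation estimator: for Gaussian A, (1/m) A.T @ y concentrates around
# a multiple of x / ||x||, so its k largest-magnitude entries estimate the
# support.
scores = np.abs(A.T @ y)
support_hat = np.sort(np.argsort(scores)[-k:])
print(support_hat, support)
```

With enough measurements the estimated support matches the true one; the paper's contribution is pinning down how few measurements suffice, universally over all sparse vectors.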
Stable image reconstruction using total variation minimization
This article presents near-optimal guarantees for accurate and robust image
recovery from under-sampled noisy measurements using total variation
minimization. In particular, we show that from O(s log(N)) nonadaptive linear
measurements, an image can be reconstructed to within the best s-term
approximation of its gradient up to a logarithmic factor, and this factor can
be removed by taking slightly more measurements. Along the way, we prove a
strengthened Sobolev inequality for functions lying in the null space of
suitably incoherent matrices.
Comment: 25 pages
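The structure this guarantee exploits is that natural, nearly piecewise-constant images have an s-sparse discrete gradient with s far below the number of pixels. A small sketch (the test image and its size are illustrative):

```python
import numpy as np

# A piecewise-constant 32x32 test image: its discrete gradient is sparse,
# which is exactly what total-variation minimization exploits.
N = 32
img = np.zeros((N, N))
img[8:24, 8:24] = 1.0
img[12:20, 12:20] = 2.0

# Anisotropic discrete gradient: horizontal and vertical finite differences.
gx = np.diff(img, axis=1)
gy = np.diff(img, axis=0)
s = np.count_nonzero(gx) + np.count_nonzero(gy)  # gradient sparsity

print(s, N * N)  # s is much smaller than the number of pixels
```

Here the gradient is nonzero only along the edges of the two squares, so s scales with the total edge length rather than the pixel count, which is why a measurement budget on the order of s log(N), rather than N, can suffice.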