Structured random measurements in signal processing
Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications often impose
structure on the measurements, leading to structured random measurement
matrices. Near-optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low-rank matrix recovery, and phaseless estimation. The measurement models
considered include random partial Fourier matrices, partial random circulant
matrices (subsampled convolutions), matrix completion, and phase estimation
from magnitudes of Fourier-type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements.
Comment: 22 pages, 2 figures
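As an illustrative sketch (not code from the article), a partial random circulant measurement of a sparse signal can be formed by circularly convolving the signal with a random generating vector and keeping a random subset of the output entries; all names and parameter values below are hypothetical:

```python
import random

def circular_convolve(g, x):
    """Circular convolution: (g * x)[i] = sum_j g[(i - j) mod n] * x[j]."""
    n = len(x)
    return [sum(g[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

def partial_circulant_measure(x, g, rows):
    """Subsampled convolution: keep only the output entries indexed by `rows`."""
    y = circular_convolve(g, x)
    return [y[i] for i in rows]

random.seed(0)
n, m, s = 16, 8, 2                                   # dimension, measurements, sparsity
g = [random.choice([-1.0, 1.0]) for _ in range(n)]   # random (Rademacher) generator
x = [0.0] * n
x[3], x[11] = 1.5, -2.0                              # an s-sparse signal
rows = random.sample(range(n), m)                    # rows selected without replacement
y = partial_circulant_measure(x, g, rows)
print(len(y))                                        # m structured random measurements
```

Because the measurement operator is a subsampled convolution, it can be applied in O(n log n) time via the FFT in practice; the loop version above only keeps the sketch dependency-free.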
Quantized Compressed Sensing for Partial Random Circulant Matrices
We provide the first analysis of a non-trivial quantization scheme for
compressed sensing measurements arising from structured measurement matrices.
Specifically, our analysis studies compressed sensing matrices consisting of
rows selected at random, without replacement, from a circulant matrix generated
by a random subgaussian vector. We quantize the measurements using stable,
possibly one-bit, Sigma-Delta schemes, and use a reconstruction method based on
convex optimization. We show that the part of the reconstruction error due to
quantization decays polynomially in the number of measurements. This is in line
with analogous results on Sigma-Delta quantization associated with random
Gaussian or subgaussian matrices, and significantly better than results
associated with the widely used memoryless scalar quantization. Moreover, we
prove that our approach is stable and robust; i.e., the reconstruction error
degrades gracefully in the presence of non-quantization noise and when the
underlying signal is not strictly sparse. The analysis relies on results
concerning subgaussian chaos processes as well as a variation of McDiarmid's
inequality.
Comment: 15 pages
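A stable first-order one-bit Sigma-Delta quantizer can be sketched as follows (a generic textbook form, not necessarily the exact scheme analyzed in the paper); the key stability property is that the internal state stays bounded by 1 whenever the inputs do:

```python
def sigma_delta_1bit(y):
    """First-order Sigma-Delta quantization:
        q_i = sign(u_{i-1} + y_i),  u_i = u_{i-1} + y_i - q_i.
    Stability: if |y_i| <= 1 for all i, then |u_i| <= 1 for all i."""
    u, q, states = 0.0, [], []
    for yi in y:
        qi = 1.0 if u + yi >= 0 else -1.0   # one-bit output
        u = u + yi - qi                     # error accumulated in the state
        q.append(qi)
        states.append(u)
    return q, states

y = [0.3, -0.7, 0.5, 0.9, -0.2, 0.1, -0.8, 0.4]   # measurements, scaled to |y_i| <= 1
q, states = sigma_delta_1bit(y)
print(q)
print(max(abs(u) for u in states))
```

Unlike memoryless scalar quantization, the recursion spreads the quantization error across measurements, which is what allows the reconstruction error to decay polynomially in the number of measurements.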
A probabilistic and RIPless theory of compressed sensing
This paper introduces a simple and very general theory of compressive
sensing. In this theory, the sensing mechanism simply selects sensing vectors
independently at random from a probability distribution F; it includes all
models discussed in the literature (e.g., Gaussian and frequency measurements),
and also provides a framework for new measurement strategies. We prove
that if the probability distribution F obeys a simple incoherence property and
an isotropy property, one can faithfully recover approximately sparse signals
from a minimal number of noisy measurements. The novelty is that our recovery
results require neither the restricted isometry property (RIP), relying instead
on a much weaker notion, nor a random model for the signal. As an example, the
paper shows that a signal with s nonzero entries can be faithfully recovered
from about s log n Fourier coefficients that are contaminated with noise.
Comment: 36 pages
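The isotropy property can be checked directly for the Fourier model mentioned above: if the sensing vector a is a uniformly random row of the DFT matrix, then E[a a*] is exactly the identity. The following sketch (illustrative only, not from the paper) verifies this by averaging over all n rows:

```python
import cmath

n = 8

def fourier_row(k, n):
    """Sensing vector a_k with entries a_k[t] = exp(2*pi*i*k*t/n)."""
    return [cmath.exp(2j * cmath.pi * k * t / n) for t in range(n)]

# Averaging a_k a_k* over all k equals the exact expectation when k is
# drawn uniformly from {0, ..., n-1}; isotropy means this average is I.
E = [[sum(fourier_row(k, n)[s] * fourier_row(k, n)[t].conjugate()
          for k in range(n)) / n
      for t in range(n)] for s in range(n)]

print(abs(E[0][0]), abs(E[0][1]))   # diagonal ~ 1, off-diagonal ~ 0
```

Incoherence holds here as well: every entry of a_k has modulus 1, so the sensing vectors are uniformly bounded, which is what drives the s log n sample complexity.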
Compressed Sensing and Parallel Acquisition
Parallel acquisition systems arise in various applications in order to
mitigate problems caused by insufficient measurements in single-sensor systems.
These systems allow simultaneous data acquisition in multiple sensors, thus
alleviating such problems by providing more overall measurements. In this work
we consider the combination of compressed sensing with parallel acquisition. We
establish the theoretical improvements of such systems by providing recovery
guarantees for which, subject to appropriate conditions, the number of
measurements required per sensor decreases linearly with the total number of
sensors. Throughout, we consider two different sampling scenarios -- distinct
(corresponding to independent sampling in each sensor) and identical
(corresponding to dependent sampling between sensors) -- and a general
mathematical framework that allows for a wide range of sensing matrices (e.g.,
subgaussian random matrices, subsampled isometries, random convolutions and
random Toeplitz matrices). We consider not just the standard sparse signal
model, but also the so-called sparse-in-levels signal model, which includes
both sparse and distributed signals and clustered sparse signals. As
our results show, optimal recovery guarantees for both distinct and identical
sampling are possible under much broader conditions on the so-called sensor
profile matrices (which characterize environmental conditions between a source
and the sensors) for the sparse-in-levels model than for the sparse model. To
verify our recovery guarantees we provide numerical results showing phase
transitions for a number of different multi-sensor environments.
Comment: 43 pages, 4 figures
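The stacked measurement model behind such systems can be sketched as follows. This is a hypothetical setup for illustration only (diagonal sensor profile matrices, distinct sampling with an independent Gaussian matrix per sensor); none of the names or values come from the paper:

```python
import random

random.seed(1)
n, C, m_per = 12, 3, 4        # signal length, number of sensors, measurements per sensor

x = [0.0] * n
x[2], x[7] = 1.0, -0.5        # sparse source signal

# Sensor c observes H_c x, where H_c is a (here diagonal) sensor profile
# matrix modelling the environment between source and sensor; it then takes
# its own m_per random measurements (distinct sampling: independent A_c).
profiles = [[random.uniform(0.5, 1.5) for _ in range(n)] for _ in range(C)]
A = [[[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m_per)]
     for _ in range(C)]

y = []   # stacked measurement vector: y = [A_1 H_1 x; ...; A_C H_C x]
for c in range(C):
    hx = [profiles[c][t] * x[t] for t in range(n)]
    y.extend(sum(A[c][i][t] * hx[t] for t in range(n)) for i in range(m_per))

print(len(y))   # C * m_per total measurements
```

The recovery guarantees in the abstract concern exactly this stacked system: under suitable conditions on the profile matrices H_c, the per-sensor count m_per can shrink linearly as the number of sensors C grows, keeping the total measurement budget fixed.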