On the Power of Adaptivity in Sparse Recovery
The goal of (stable) sparse recovery is to recover a k-sparse approximation
x* of a vector x from linear measurements of x. Specifically, the goal is
to recover x* such that ||x-x*||_p <= C min_{k-sparse x'} ||x-x'||_q for some
constant C and norm parameters p and q. It is known that, for p = q = 1 or
p = q = 2, this task can be accomplished using m = O(k log(n/k)) non-adaptive
measurements [CRT06] and that this bound is tight [DIPW10, FPRU10, PW11].
In this paper we show that if one is allowed to perform measurements that are
adaptive, then the number of measurements can be considerably reduced.
Specifically, for C = 1+eps and p = q = 2 we show:
- A scheme with m = O((1/eps) k log log(n eps/k)) measurements that uses
O(log* k log log(n eps/k)) rounds. This is a significant improvement over the
best possible non-adaptive bound.
- A scheme with m = O((1/eps) k log(k/eps) + k log(n/k)) measurements
that uses /two/ rounds. This improves over the best possible non-adaptive
bound. To the best of our knowledge, these are the first results of this type.
As an independent application, we show how to solve the problem of finding a
duplicate in a data stream of n items drawn from {1, ..., n-1} using O(log n)
bits of space and O(log log n) passes, improving over the best possible space
complexity achievable using a single pass.

Comment: 18 pages; appearing at FOCS 2011
Task-Driven Adaptive Statistical Compressive Sensing of Gaussian Mixture Models
A framework for adaptive and non-adaptive statistical compressive sensing is
developed, where a statistical model replaces the standard sparsity model of
classical compressive sensing. Within this framework, we propose optimal
task-specific sensing protocols designed specifically and jointly for
classification and reconstruction. A two-step adaptive sensing paradigm is
developed, where online sensing is applied to detect the signal class in the
first step, followed by a reconstruction step adapted to the detected class and
the observed samples. The approach is based on information theory, here
tailored to Gaussian mixture models (GMMs): an information-theoretic
objective relating the sensed signals to a representation of the specific
task of interest is maximized. Experimental results using synthetic
signals, Landsat satellite attributes, and natural images of different sizes
and with different noise levels show the improvements achieved using the
proposed framework when compared to more standard sensing protocols. The
underlying formulation can be applied beyond GMMs, at the price of higher
mathematical and computational complexity.
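As a toy illustration of the two-step paradigm (not the paper's optimized, task-driven projection design), the sketch below senses with a generic random matrix, detects the class from the marginal likelihood of the measurements, and then reconstructs with the MMSE (posterior-mean) estimator conditioned on the detected class. All model parameters and names here are invented for the example.

```python
# Two-step adaptive sensing sketch for a two-class GMM:
# step 1 detects the signal class, step 2 reconstructs conditioned on it.
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 6                      # signal dimension, number of measurements
sigma = 0.01                     # measurement noise standard deviation

# Two known Gaussian classes (the GMM), invented for illustration.
mus = [np.zeros(d), np.full(d, 3.0)]
Sigmas = [np.eye(d), 0.5 * np.eye(d)]

def gauss_loglik(y, mean, cov):
    # Log-density up to a constant shared by both classes (enough for argmax).
    sign, logdet = np.linalg.slogdet(cov)
    r = y - mean
    return -0.5 * (logdet + r @ np.linalg.solve(cov, r))

def sense_and_reconstruct(x):
    # Step 1: sense, then detect the class from the marginal likelihood of y,
    # where y ~ N(A mu_c, A Sigma_c A^T + sigma^2 I) under class c.
    A = rng.standard_normal((m, d)) / np.sqrt(m)
    y = A @ x + sigma * rng.standard_normal(m)
    ll = [gauss_loglik(y, A @ mu, A @ S @ A.T + sigma**2 * np.eye(m))
          for mu, S in zip(mus, Sigmas)]
    c = int(np.argmax(ll))
    # Step 2: class-conditional MMSE reconstruction (Gaussian posterior mean).
    mu, S = mus[c], Sigmas[c]
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T + sigma**2 * np.eye(m))
    return c, mu + K @ (y - A @ mu)

# Draw a signal from class 1 and run the two-step pipeline.
x_true = mus[1] + np.linalg.cholesky(Sigmas[1]) @ rng.standard_normal(d)
c_hat, x_hat = sense_and_reconstruct(x_true)
```

In the actual framework the second-step sensing matrix would also be re-optimized online for the detected class; a single generic matrix is reused here for brevity.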
Sparse recovery and Fourier sampling
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 155-160).

In the last decade a broad literature has arisen studying sparse recovery, the estimation of sparse vectors from low-dimensional linear projections. Sparse recovery has a wide variety of applications such as streaming algorithms, image acquisition, and disease testing. A particularly important subclass of sparse recovery is the sparse Fourier transform, which considers the computation of a discrete Fourier transform when the output is sparse. Applications of the sparse Fourier transform include medical imaging, spectrum sensing, and purely computational tasks involving convolution.

This thesis describes a coherent set of techniques that achieve optimal or near-optimal upper and lower bounds for a variety of sparse recovery problems. We give the following state-of-the-art algorithms for recovery of an approximately k-sparse vector in n dimensions:
-- Two sparse Fourier transform algorithms, respectively taking ... time and ... samples. The latter is within log e log n of the optimal sample complexity when ...
-- An algorithm for adaptive sparse recovery using ... measurements, showing that adaptivity can give substantial improvements when k is small.
-- An algorithm for C-approximate sparse recovery with ... measurements, which matches our lower bound up to the log* k factor and gives the first improvement for ...

In the second part of this thesis, we give lower bounds for the above problems and more.

by Eric Price.
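As a generic illustration of the setting the thesis studies — recovering a k-sparse vector in n dimensions from low-dimensional linear projections — the sketch below uses orthogonal matching pursuit (a standard greedy method, not one of the thesis's algorithms) on random Gaussian measurements:

```python
# Orthogonal matching pursuit (OMP): greedy sparse recovery of a
# k-sparse x in n dimensions from m << n random linear measurements.
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from noiseless y = A @ x."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the current support, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, k = 64, 3
m = 32                           # generous measurement budget for the demo
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[5, 17, 40]] = [2.0, -1.5, 3.0]
x_hat = omp(A, A @ x, k)
```

With Gaussian A and m on the order of k log(n/k), exact recovery succeeds with high probability in the noiseless case; the thesis's algorithms target the harder approximately-sparse, sample-optimal, and Fourier-structured regimes.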