Traversing the FFT Computation Tree for Dimension-Independent Sparse Fourier Transforms
We consider the well-studied Sparse Fourier transform problem, where one aims
to quickly recover an approximately Fourier k-sparse vector from observing its
time domain representation. In the exactly k-sparse case the best known
dimension-independent algorithm runs in time near-cubic in k, and it is unclear
whether a faster algorithm, like in low dimensions, is possible. Beyond that,
all known approaches either suffer from an exponential dependence on the
dimension or can only tolerate a trivial amount of noise. This is in sharp
contrast with the classical FFT of Cooley and Tukey, which is stable and
completely insensitive to the dimension of the input vector: its runtime is
O(N log N) in any dimension for an input of size N. Our work aims to address
the above issues.
First, we provide a translation/reduction of the exactly k-sparse FT problem
to a concrete tree exploration task which asks to recover k leaves in a full
binary tree under certain exploration rules. Subsequently, we provide (a) an
almost quadratic in k time algorithm for this task, and (b) evidence that a
strongly subquadratic time for Sparse FT via this approach is likely
impossible. We achieve the latter by proving a conditional quadratic-time
lower bound on sparse polynomial multipoint evaluation (the classical
non-equispaced sparse FT), which is a core routine in the aforementioned
translation. Thus, our results combined can be viewed as an almost complete
understanding of this approach, which is the only known approach that yields
sublinear-time dimension-independent Sparse FT algorithms.
Subsequently, we provide a robustification of our algorithm, yielding a
robust algorithm that runs in time cubic in k under bounded noise. This
requires proving new structural properties of the recently introduced adaptive
aliasing filters, combined with a variety of new techniques and ideas.
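In one dimension the tree in question is easy to picture: level l of a full binary tree indexes frequency residues mod 2^l, and the classical aliasing identity says that subsampling the time domain by a factor N/2^l and taking a 2^l-point DFT sums the Fourier coefficients within each residue class, so a node need only be explored if its bucket is nonzero. The following pure-Python sketch is our own toy illustration of this exploration, not the paper's algorithm: it uses naive DFTs and no pruning (so it is not sublinear), and it assumes no accidental cancellation among coefficients.

```python
import cmath

def sparse_freq_support(x):
    """Toy tree exploration: recover the Fourier support of an exactly
    sparse vector by refining frequency residues level by level."""
    N = len(x)
    alive = {0}          # residues mod 1: the root of the tree
    B = 1
    while B < N:
        B *= 2
        stride = N // B
        y = [x[j * stride] for j in range(B)]              # subsample time domain
        yhat = [sum(y[j] * cmath.exp(-2j * cmath.pi * r * j / B)
                    for j in range(B)) for r in range(B)]  # size-B DFT
        # aliasing: bucket r sums the coefficients of frequencies f = r (mod B),
        # so residue r is alive iff its bucket is nonzero and its parent
        # residue r mod (B/2) was alive at the previous level
        alive = {r for r in range(B)
                 if abs(yhat[r]) > 1e-6 and (r % (B // 2)) in alive}
    return sorted(alive)
```

The real algorithms gain their running time by visiting only the alive nodes with a few fresh samples per node, rather than recomputing a full DFT at every level as this sketch does.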
An Improved Lower Bound for Sparse Reconstruction from Subsampled Hadamard Matrices
We give a short argument that yields a new lower bound on the number of
subsampled rows from a bounded orthonormal matrix necessary to form a matrix
with the restricted isometry property. We show that a matrix formed by
uniformly subsampling rows of an N x N Hadamard matrix contains a k-sparse
vector in the kernel unless the number of subsampled rows is
Omega(k log k * log(N/k)); our lower bound applies throughout a wide range of
the sparsity k. Containing a sparse vector in the kernel precludes not only
the restricted isometry property, but more generally the application of such
matrices for uniform sparse recovery.
Comment: Improved exposition and added an author
Structured random measurements in signal processing
Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications require
certain structure of the measurements leading to structured random measurement
matrices. Near optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low rank matrix recovery, and phaseless estimation. The random measurement
matrices to be considered include random partial Fourier matrices, partial
random circulant matrices (subsampled convolutions), matrix completion, and
phase estimation from magnitudes of Fourier type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements.
Comment: 22 pages, 2 figures
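As a small illustration of why structure matters computationally, a partial random circulant matrix is a subsampled convolution, so it can be applied in O(N log N) time via the FFT instead of O(mN) by explicit matrix multiplication. A minimal numpy sketch of this equivalence (all parameters and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 64, 20

# random-sign generator of the circulant (measurements = subsampled convolution)
g = rng.choice([-1.0, 1.0], size=N)
C = np.array([[g[(i - j) % N] for j in range(N)] for i in range(N)])
rows = np.sort(rng.choice(N, size=m, replace=False))  # uniform row subsampling
A = C[rows]                                           # partial random circulant

def measure(x):
    """Apply A in O(N log N): circular convolution with g via the FFT,
    then keep only the m subsampled entries."""
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)).real
    return conv[rows]

x = np.zeros(N)
x[[5, 20, 47]] = [1.0, -2.0, 0.5]             # a 3-sparse test signal
assert np.allclose(A @ x, measure(x))         # same measurements, FFT-fast
```

The same fast-multiplication structure is what makes iterative recovery algorithms practical at scale for these matrices.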