
    An Adaptive Sublinear-Time Block Sparse Fourier Transform

    The problem of approximately computing the $k$ dominant Fourier coefficients of a vector $X$ quickly, and using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with $O(k\log n\log(n/k))$ runtime [Hassanieh \emph{et al.}, STOC'12] and $O(k\log n)$ sample complexity [Indyk \emph{et al.}, FOCS'14]. These results are proved using non-adaptive algorithms, and the latter $O(k\log n)$ sample complexity result is essentially the best possible under the sparsity assumption alone: it is known that even adaptive algorithms must use $\Omega((k\log(n/k))/\log\log n)$ samples [Hassanieh \emph{et al.}, STOC'12]. By {\em adaptive}, we mean being able to exploit previous samples in guiding the selection of further samples. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a $(k_0,k_1)$-block sparse model. In this model, signal frequencies are clustered in $k_0$ intervals of width $k_1$ in Fourier space, and $k = k_0k_1$ is the total sparsity. Signals arising in applications are often well approximated by this model with $k_0 \ll k$. Our main result is the first sparse FFT algorithm for $(k_0, k_1)$-block sparse signals with a sample complexity of $O^*(k_0k_1 + k_0\log(1+k_0)\log n)$ at constant signal-to-noise ratios, and sublinear runtime. A similar sample complexity was previously achieved in the works on {\em model-based compressive sensing} using random Gaussian measurements, but required $\Omega(n)$ runtime. To the best of our knowledge, our result is the first sublinear-time algorithm for model-based compressed sensing, and the first sparse FFT result that goes below the $O(k\log n)$ sample complexity bound.
Interestingly, the aforementioned model-based compressive sensing result that relies on Gaussian measurements is non-adaptive, whereas our algorithm crucially uses {\em adaptivity} to achieve the improved sample complexity bound. We prove that adaptivity is in fact necessary in the Fourier setting: any {\em non-adaptive} algorithm must use $\Omega(k_0k_1\log \frac{n}{k_0k_1})$ samples for the $(k_0,k_1)$-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest.
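The $(k_0,k_1)$-block sparse model above can be illustrated numerically. The sketch below is not the paper's sublinear-time algorithm; it merely constructs a signal whose Fourier support sits in $k_0$ intervals of width $k_1$ and recovers that support with a dense $O(n\log n)$ full-FFT baseline, to make the model concrete (all parameter values are illustrative assumptions):

```python
import numpy as np

# Build a (k0, k1)-block sparse signal: Fourier support clustered in
# k0 intervals of width k1, total sparsity k = k0 * k1.
rng = np.random.default_rng(0)
n, k0, k1 = 1024, 3, 8

# Choose k0 disjoint blocks of k1 consecutive frequencies.
starts = rng.choice(n // k1, size=k0, replace=False) * k1
support = np.concatenate([np.arange(s, s + k1) for s in starts])

xhat = np.zeros(n, dtype=complex)
xhat[support] = (rng.standard_normal(k0 * k1)
                 + 1j * rng.standard_normal(k0 * k1))
x = np.fft.ifft(xhat)  # time-domain signal

# Dense baseline recovery (not the sublinear algorithm): take the
# k = k0*k1 largest-magnitude coefficients of a full FFT.
est = np.fft.fft(x)
top = np.argsort(np.abs(est))[-(k0 * k1):]
assert set(map(int, top)) == set(map(int, support))
```

The point of the model is that a block-aware algorithm needs far fewer samples than this baseline's $n$ time-domain reads when $k_0 \ll k$.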

    Deterministic Sparse Fourier Transform with an $\ell_\infty$ Guarantee

    In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat x$, the Discrete Fourier Transform (DFT) of $x$. The randomized case has been well understood, while the main work in the deterministic case is that of Merhi et al.\@ (J Fourier Anal Appl 2018), which obtains $O(k^2 \log^{-1}k \cdot \log^{5.5}n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows. 1. We find a deterministic collection of $O(k^2 \log n)$ samples for $\ell_\infty/\ell_1$ recovery in time $O(nk \log^2 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$. 2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of the incoherent matrix. Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices.
    Comment: ICALP 2020; presentation improved according to reviewers' comments.
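The incoherent-matrix object in this abstract can be checked empirically. The sketch below is only the randomized baseline the paper derandomizes, not the paper's deterministic construction: it samples rows of the $n \times n$ DFT matrix, normalizes the columns, and measures the coherence, i.e. the largest inner-product magnitude between distinct columns (dimensions and seed are illustrative assumptions):

```python
import numpy as np

# Randomized baseline for an incoherent matrix: a row-sampled DFT submatrix.
rng = np.random.default_rng(1)
n, m = 256, 64  # ambient dimension, number of sampled rows

# Explicit n x n DFT matrix F[j, k] = exp(-2*pi*i*j*k/n).
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

rows = rng.choice(n, size=m, replace=False)
A = F[rows]
A = A / np.linalg.norm(A, axis=0)  # unit-norm columns (each norm was sqrt(m))

# Coherence: max |<a_i, a_j>| over distinct columns i != j.
G = np.abs(A.conj().T @ A)
mu = (G - np.eye(n)).max()
```

For a good incoherent matrix, `mu` should be small, on the order of $\sqrt{\log n / m}$ for random row samples; the paper's contribution is achieving comparable coherence deterministically.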