
    Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity

    Given an $n$-length input signal $\mathbf{x}$, it is well known that its Discrete Fourier Transform (DFT), $\mathbf{X}$, can be computed in $O(n \log n)$ complexity using a Fast Fourier Transform (FFT). If the spectrum $\mathbf{X}$ is exactly $k$-sparse (where $k \ll n$), can we do better? We show that asymptotically in $k$ and $n$, when $k$ is sub-linear in $n$ (precisely, $k \propto n^{\delta}$ where $0 < \delta < 1$) and the support of the non-zero DFT coefficients is uniformly random, we can exploit this sparsity in two fundamental ways: (i) sample complexity: we need only $M = rk$ deterministically chosen samples of the input signal $\mathbf{x}$ (where $r < 4$ when $0 < \delta < 0.99$); and (ii) computational complexity: we can reliably compute the DFT $\mathbf{X}$ using $O(k \log k)$ operations, where the constants in the big-O are small and are related to the constants involved in computing a small number of DFTs of length approximately equal to the sparsity parameter $k$. Our algorithm succeeds with high probability, with the probability of failure vanishing to zero asymptotically in the number of samples acquired, $M$.
    Comment: 36 pages, 15 figures. To be presented at ISIT 2013, Istanbul, Turkey.
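    To make the structural fact behind this result concrete, the short Python sketch below (not the paper's algorithm, only a numerical illustration) verifies the aliasing identity that subsampling-based sparse-FFT methods build on: keeping every $d$-th time sample folds the length-$n$ spectrum into $n/d$ bins, so a $k$-sparse spectrum lands in mostly singleton bins that short DFTs can then isolate. The parameter choices (n, k, d) are illustrative assumptions.

    ```python
    # Minimal sketch of the aliasing property exploited by subsampling-based
    # sparse-FFT methods: the DFT of the subsampled signal equals the original
    # spectrum folded modulo n/d (up to a 1/d factor).
    import numpy as np

    n, k, d = 512, 4, 8                      # signal length, sparsity, subsampling factor
    rng = np.random.default_rng(0)

    # Build a k-sparse spectrum with random support and unit-magnitude values.
    X = np.zeros(n, dtype=complex)
    support = rng.choice(n, size=k, replace=False)
    X[support] = np.exp(2j * np.pi * rng.random(k))
    x = np.fft.ifft(X)                       # time-domain signal

    # DFT of the subsampled signal (length n/d) ...
    X_sub = np.fft.fft(x[::d])

    # ... equals (1/d times) the spectrum folded modulo n/d.
    X_folded = X.reshape(d, n // d).sum(axis=0) / d
    assert np.allclose(X_sub, X_folded)
    print("aliased bins carrying energy:", np.flatnonzero(np.abs(X_sub) > 1e-9))
    ```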

    Applications of Coding Theory to Sub-Linear Time Sparse Recovery Problems

    This dissertation leverages the connection between coding theory and classical sparse recovery problems, such as sparse Fourier and Hadamard transform computations, to understand the properties of existing recovery algorithms under various signal models, propose improvements, and adapt them to interesting applications in theoretical computer science such as pattern matching.

    In the first part of the dissertation, we begin by demonstrating the relationship between an extended Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm and the iterative hard-decision decoding of product codes. We show that the FFAST algorithm is analogous to an iterative decoder for a carefully defined product code whose thresholds can be computed by an extension of Justesen's analysis to d-dimensional product codes. Interpreting the FFAST algorithm as the decoding of a product code also provides insight into its performance when the non-zero coefficients are not randomly chosen but bursty, as may be encountered in many practical applications such as spectrum sensing. Recoverability results are guaranteed for the finite-length case, and we provide asymptotic thresholds for the one- and two-burst cases. We further observe that the FFAST algorithm performs better for bursty signals than for signals with randomly chosen non-zero coefficients.

    We then consider the problem of computing the Walsh-Hadamard Transform (WHT) of an N = 2^n dimensional signal whose WHT is K-sparse, when the sparsity parameter K scales sub-linearly in N. We propose improvements to the algorithm by Scheibler et al. by introducing a two-error-correcting code at each check node. Further, through density evolution analysis and simulations, we show that the proposed modification substantially improves the space and time complexity of the algorithm, sometimes achieving as much as a 70% reduction.

    We conclude by considering the substring/pattern matching problem of querying a string (or a database) of length N bits to determine all the locations where a substring (query) of length M appears, either exactly or within a Hamming distance of K from the query. We analyze the exact pattern matching problem, where M consecutive symbols of the string are presented as the query, and the approximate pattern matching problem, where a noisy version of a substring is presented. Our proposed algorithm is evaluated in terms of its sketching complexity and the computational complexity of answering the query. Using an approach based on sparse Fourier transform computation, we show that all such matches can be determined with high probability in sub-linear time and space. Further, we present several extensions, including optimizations for longer query lengths, algorithmic improvements for correlated data sources, and a secure matching algorithm in an outsourced pattern matching setting.
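    As a point of reference for the pattern matching part, the sketch below shows the standard dense baseline (FFT-based cross-correlation over a ±1 mapping) that answers the same query in O(N log N) time over the whole string; it is not the dissertation's sub-linear sketching algorithm. The parameters N, M and the planted match position are illustrative assumptions.

    ```python
    # Dense baseline: Hamming distance of a length-M binary query against every
    # alignment in a length-N binary string via one FFT cross-correlation.
    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 1 << 12, 64
    text = rng.integers(0, 2, N)
    start = 1000
    query = text[start:start + M].copy()      # plant an exact match at `start`

    # Map bits to +/-1 so the correlation counts (matches - mismatches).
    t = 2.0 * text - 1.0
    q = 2.0 * query - 1.0

    L = 1 << int(np.ceil(np.log2(N + M)))     # zero-pad to avoid circular wrap-around
    corr = np.fft.irfft(np.fft.rfft(t, L) * np.conj(np.fft.rfft(q, L)), L)

    # Valid alignments: shift i compares the query against text[i:i+M].
    corr = corr[:N - M + 1].round().astype(int)
    hamming = (M - corr) // 2
    print("exact-match positions:", np.flatnonzero(hamming == 0))
    ```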

    Deterministic Sparse Fourier Transform with an $\ell_\infty$ Guarantee

    In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat{x}$, the Discrete Fourier Transform (DFT) of $x$. The randomized case has been well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains $O(k^2 \log^{-1} k \cdot \log^{5.5} n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows. 1. We find a deterministic collection of $O(k^2 \log n)$ samples for $\ell_\infty/\ell_1$ recovery in time $O(nk \log^2 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$. 2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM '12), where there was no constraint on the form of the incoherent matrix. Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices.
    Comment: ICALP 2020; presentation improved according to reviewers' comments.
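    For intuition about the incoherent-matrix objective, the sketch below measures the coherence, i.e. the largest inner product between distinct unit-norm columns, of a row-sampled DFT submatrix. The paper's contribution is choosing the rows deterministically; this illustration simply samples them at random and reports the resulting coherence. The dimensions n, m and the seed are illustrative assumptions.

    ```python
    # Coherence of a randomly row-sampled submatrix of the n x n DFT matrix.
    import numpy as np

    n, m = 257, 64                          # ambient dimension, number of sampled rows
    rng = np.random.default_rng(2)
    rows = rng.choice(n, size=m, replace=False)

    # m x n submatrix of the DFT matrix, columns normalized to unit l2 norm.
    F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(m)

    G = np.abs(F.conj().T @ F)              # magnitudes of column inner products
    np.fill_diagonal(G, 0.0)                # ignore the trivial diagonal
    print("coherence of the sampled submatrix:", G.max())
    ```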