Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity
Given an $n$-length input signal $\mbf{x}$, it is well known that its Discrete Fourier Transform (DFT), $\mbf{X}$, can be computed in $O(n \log n)$ complexity using a Fast Fourier Transform (FFT). If the spectrum $\mbf{X}$ is exactly $k$-sparse (where $k \ll n$), can we do better? We show that asymptotically in $k$ and $n$, when $k$ is sub-linear in $n$ (precisely, $k \propto n^{\delta}$ where $0 < \delta < 1$), and the support of the non-zero DFT coefficients is uniformly random, we can exploit this sparsity in two fundamental ways: (i) {\bf sample complexity}: we need only $M = rk$ deterministically chosen samples of the input signal $\mbf{x}$ (where $r < 4$ when $0 < \delta < 0.99$); and (ii) {\bf computational complexity}: we can reliably compute the DFT $\mbf{X}$ using $O(k \log k)$ operations, where the constants in the big Oh are small and are related to the constants involved in computing a small number of DFTs of length approximately equal to the sparsity parameter $k$. Our algorithm succeeds with high probability, with the probability of failure vanishing to zero asymptotically in the number of samples acquired, $M$.
Comment: 36 pages, 15 figures. To be presented at ISIT 2013, Istanbul, Turkey.
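The subsampling-aliasing identity at the heart of this result can be checked numerically: subsampling the signal by a factor $d$ folds the $n$-point DFT into $m = n/d$ bins, so a $k$-sparse spectrum spreads its non-zeros over a small number of short DFTs. A minimal NumPy sketch (illustrative only, not the authors' code; the sizes n = 24 and d = 3 are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 24, 3            # toy sizes (hypothetical); m = n/d aliased bins
m = n // d

x = rng.standard_normal(n)
X = np.fft.fft(x)

# DFT of the subsampled signal equals the folded (aliased) spectrum:
# fft(x[::d])[i] = (1/d) * sum of X[t] over all t with t = i (mod m)
X_sub = np.fft.fft(x[::d])
X_folded = X.reshape(d, m).sum(axis=0) / d

assert np.allclose(X_sub, X_folded)
```

Because each length-$m$ DFT of a subsampled stream costs $O(m \log m)$, and only a few streams with $m$ close to $k$ are needed, the total work stays near $O(k \log k)$.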
Applications of Coding Theory to Sub-Linear Time Sparse Recovery Problems
This dissertation leverages the connection between coding theory and classical sparse recovery problems, like sparse Fourier and Hadamard transform computations, to understand properties of existing recovery algorithms under various signal models, propose improvements, and apply them to interesting applications in theoretical computer science, like pattern matching.
In the first part of the dissertation, we begin by demonstrating the relationship between an extended Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm and iterative hard-decision decoding of product codes. We show that the FFAST algorithm is analogous to an iterative decoder for a carefully defined product code whose thresholds can be computed by an extension of Justesen's analysis to d-dimensional product codes. Interpreting the FFAST algorithm as decoding of a product code also provides insight into its performance when the non-zero coefficients are not randomly placed but bursty, as may be encountered in many practical applications like spectrum sensing. Recoverability results are guaranteed for the finite-length case, and we provide asymptotic thresholds for the one- and two-burst cases. We further observe that the FFAST algorithm performs better for bursty signals than for randomly placed non-zero coefficients.
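The iterative peeling decoding described above hinges on detecting bins of the aliased spectrum that contain exactly one non-zero coefficient ("singletons"), which can be located from the phase shift between two subsampled streams. A minimal sketch of that singleton test (illustrative only; the toy sizes, the single-sample shift, and the tolerances are assumptions here, not the dissertation's parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 32, 4          # toy sizes (hypothetical); m = n/d aliased bins
m = n // d

# A spectrum X with a few non-zero coefficients, and its time-domain signal.
X = np.zeros(n, dtype=complex)
support = rng.choice(n, size=3, replace=False)
X[support] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x = np.fft.ifft(X)

def bins(sig):
    # Aliased m-point DFT of the signal subsampled by d.
    return np.fft.fft(sig[::d])

Y = bins(x)                     # each bin sums the coefficients hashed to it
Y_shift = bins(np.roll(x, -1))  # same bins, coefficient at p rotated by e^{2*pi*i*p/n}

recovered = {}
for i in range(m):
    if abs(Y[i]) < 1e-9:        # "zero-ton": no coefficient in this bin
        continue
    ratio = Y_shift[i] / Y[i]
    p = int(round(np.angle(ratio) * n / (2 * np.pi))) % n
    # Singleton test: a lone coefficient at p gives |ratio| = 1 and p = i (mod m);
    # bins with collisions ("multi-tons") fail and are left for later peeling.
    if p % m == i and abs(abs(ratio) - 1.0) < 1e-9:
        recovered[p] = d * Y[i]

# Every coefficient recovered this way is exact (up to floating point).
for p, v in recovered.items():
    assert p in support and abs(v - X[p]) < 1e-6
```

In the full algorithm, recovered singletons are subtracted from all bins they touch, which turns multi-tons into new singletons, exactly as peeling decoders resolve check nodes.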
We then consider the problem of computing the Walsh-Hadamard Transform (WHT) of an N = 2^n dimensional signal whose WHT is K-sparse, when the sparsity parameter K scales sub-linearly in N. We propose improvements to the algorithm by Scheibler et al. by introducing a two-error-correcting code at each check node. Further, through density evolution analysis and simulations, we show that the proposed modification substantially improves the space and time complexity of the algorithm, sometimes achieving as much as a 70% reduction.
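For reference, the dense WHT above is computable in O(N log N) by the standard butterfly recursion; the sparse algorithms being improved run in sub-linear time, but this sketch fixes the transform's definition (a minimal hypothetical implementation, not the dissertation's code):

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized) of a length-2^n array."""
    a = np.array(a, dtype=float)           # work on a copy
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):  # butterfly over blocks of size 2h
            left = a[i:i + h].copy()
            right = a[i + h:i + 2 * h].copy()
            a[i:i + h] = left + right
            a[i + h:i + 2 * h] = left - right
        h *= 2
    return a

# Sanity checks: the delta at index 0 maps to the all-ones row, and the
# unnormalized WHT is an involution up to a factor of N.
N = 16
sig = np.zeros(N)
sig[3] = 1.0
assert np.allclose(fwht(np.eye(N)[0]), np.ones(N))
assert np.allclose(fwht(fwht(sig)), N * sig)
```

The sparse algorithms replace this full butterfly with a small number of WHTs of subsampled signals, using the check-node structure described above to resolve collisions.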
We conclude by considering the substring/pattern matching problem of querying a string (or a database) of length N bits to determine all the locations where a substring (query) of length M appears, either exactly or within a Hamming distance of K from the query. We analyze the exact pattern matching problem, where M consecutive symbols of the string are presented as the query, and the approximate pattern matching problem, where a noisy version of a substring is presented. Our proposed algorithm is evaluated on its sketching complexity and the computational complexity of answering the query. Using a sparse Fourier transform computation based approach, we show that all such matches can be determined with high probability in sub-linear time and space. Further, we present several extensions, including optimizations for longer query lengths, algorithmic improvements for correlated data sources, and a secure matching algorithm for an outsourced pattern matching setting.
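The link between pattern matching and Fourier computation can be seen in the classical dense baseline: map bits to +/-1 so that a full-length correlation peak of value M certifies an exact match, and the peak deficit equals twice the Hamming distance. A sketch (a hypothetical helper, not the dissertation's sub-linear sketching algorithm, which avoids computing the full correlation):

```python
import numpy as np

def exact_matches(text_bits, pattern_bits):
    """Return all positions where the pattern occurs exactly in the text.

    Inputs are 0/1 sequences; after mapping to +/-1, the correlation at
    shift s equals M - 2 * (Hamming distance at s), so C[s] == M iff the
    pattern matches exactly at position s.
    """
    t = 2 * np.asarray(text_bits, dtype=float) - 1
    p = 2 * np.asarray(pattern_bits, dtype=float) - 1
    N, M = len(t), len(p)
    # Circular cross-correlation via FFT: C[s] = sum_j t[(s+j) mod N] * p[j]
    C = np.fft.ifft(np.fft.fft(t) * np.conj(np.fft.fft(p, N))).real
    return [s for s in range(N - M + 1) if round(C[s]) == M]

print(exact_matches([1, 0, 1, 1, 0, 1, 1, 0], [1, 1, 0]))
```

The same correlation view extends to the K-mismatch version (accept positions with C[s] >= M - 2K); the sparse Fourier approach gets the matching positions from far fewer samples because the informative part of the correlation spectrum is sparse.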
Deterministic Sparse Fourier Transform with an ℓ_∞ Guarantee
In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat{x}$, the Discrete Fourier Transform (DFT) of $x$. The randomized case has been well-understood, while the main work in the deterministic case is that of Merhi et al.\@ (J Fourier Anal Appl 2018), which obtains $O(k^2 \log^{-1}k \cdot \log^{5.5} n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows.
1. We find a deterministic collection of $O(k^2 \log n)$ samples for the $\ell_\infty/\ell_1$ recovery in time $O(k n \log^2 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for the $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$.
2. We give new deterministic constructions of incoherent matrices that are
row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's
inequality and bounds on exponential sums considered in analytic number theory.
Our first construction matches a previous randomized construction of Nelson,
Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of
the incoherent matrix.
Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices.
Comment: ICALP 2020; presentation improved according to reviewers' comments.
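The object in contribution 2 can be illustrated numerically: take a submatrix of the DFT matrix formed by a subset of rows, normalize its columns, and measure the coherence (largest inner product between distinct columns). The sketch below samples the rows at random, whereas the paper's point is a deterministic row choice; sizes are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_rows = 64, 24                 # toy sizes (hypothetical)

F = np.fft.fft(np.eye(n))          # n x n DFT matrix
rows = rng.choice(n, size=m_rows, replace=False)
A = F[rows] / np.sqrt(m_rows)      # row-sampled; every column has unit l2 norm

# Coherence: the largest |<a_i, a_j>| over distinct (unit-norm) columns.
G = np.abs(A.conj().T @ A)
coherence = np.max(G - np.eye(n))
print(coherence)
```

Since 64 columns live in a 24-dimensional space, the coherence cannot be zero; incoherent-matrix constructions aim to make it as small as possible with as few rows as possible, and the lower bound quoted above limits how few rows any such DFT-submatrix (or other) construction can use.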