8 research outputs found

    A Deterministic Sparse FFT for Functions with Structured Fourier Sparsity

    In this paper a deterministic sparse Fourier transform algorithm is presented which breaks the quadratic-in-sparsity runtime bottleneck for a large class of periodic functions exhibiting structured frequency support. These functions include, e.g., the oft-considered set of block frequency sparse functions of the form $f(x) = \sum^{n}_{j=1} \sum^{B-1}_{k=0} c_{\omega_j + k} e^{i(\omega_j + k)x}$, $\{ \omega_1, \dots, \omega_n \} \subset \left(-\left\lceil \frac{N}{2}\right\rceil, \left\lfloor \frac{N}{2}\right\rfloor\right]\cap\mathbb{Z}$, as a simple subclass. Theoretical error bounds in combination with numerical experiments demonstrate that the newly proposed algorithms are both fast and robust to noise. In particular, they outperform standard sparse Fourier transforms in the rapid recovery of block frequency sparse functions of the type above. Comment: 39 pages, 5 figures.
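    The block frequency sparse structure above is easy to see numerically. The sketch below (an illustration only, not the paper's algorithm; the block starts and sizes are assumed values) builds a function with $n = 2$ blocks of length $B = 4$, samples it at $N$ equispaced points, and confirms via a full FFT that the spectrum is supported on exactly $nB$ frequencies.

```python
import numpy as np

# Hypothetical illustration: a block frequency sparse function with
# n = 2 blocks of length B = 4 inside bandwidth N = 256.
N, n, B = 256, 2, 4
rng = np.random.default_rng(0)
block_starts = [-50, 30]                       # the omega_j (assumed values)
freqs = np.concatenate([np.arange(w, w + B) for w in block_starts])
coeffs = rng.standard_normal(n * B) + 1j * rng.standard_normal(n * B)

x = 2 * np.pi * np.arange(N) / N               # equispaced samples on [0, 2*pi)
f = sum(c * np.exp(1j * w * x) for c, w in zip(coeffs, freqs))

fhat = np.fft.fft(f) / N                       # discrete Fourier coefficients
support = np.flatnonzero(np.abs(fhat) > 1e-8)  # recovered frequency support
print(len(support))                            # -> 8 (= n * B nonzero frequencies)
```

    A sparse FFT exploits exactly this structure to avoid touching all $N$ samples; the full FFT here serves only as a ground-truth check.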

    A New Class of Fully Discrete Sparse Fourier Transforms: Faster Stable Implementations with Guarantees

    In this paper we consider Sparse Fourier Transform (SFT) algorithms for approximately computing the best $s$-term approximation of the Discrete Fourier Transform (DFT) $\mathbf{\hat{f}} \in \mathbb{C}^N$ of any given input vector $\mathbf{f} \in \mathbb{C}^N$ in just $(s \log N)^{\mathcal{O}(1)}$-time using only a similarly small number of entries of $\mathbf{f}$. In particular, we present a deterministic SFT algorithm which is guaranteed to always recover a near best $s$-term approximation of the DFT of any given input vector $\mathbf{f} \in \mathbb{C}^N$ in $\mathcal{O}\left( s^2 \log^{\frac{11}{2}}(N) \right)$-time. Unlike previous deterministic results of this kind, our result holds for both arbitrary vectors $\mathbf{f} \in \mathbb{C}^N$ and arbitrary vector lengths $N$. In addition to these deterministic SFT results, we also develop several new publicly available randomized SFT implementations for approximately computing $\mathbf{\hat{f}}$ from $\mathbf{f}$ using the same general techniques. The best of these new implementations is shown to outperform existing discrete sparse Fourier transform methods with respect to both runtime and noise robustness for large vector lengths $N$.
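    To fix ideas, the object an SFT approximates is the best $s$-term approximation of the DFT. A minimal baseline (assumed for illustration, not the paper's SFT) computes it the slow way: take a full $\mathcal{O}(N \log N)$ FFT and keep the $s$ largest-magnitude entries; the SFT computes a near-best version of this in $(s \log N)^{\mathcal{O}(1)}$ time from few samples.

```python
import numpy as np

def best_s_term(f, s):
    """Best s-term approximation of the DFT via a full FFT (baseline)."""
    fhat = np.fft.fft(f)
    keep = np.argsort(np.abs(fhat))[-s:]       # indices of the s largest entries
    out = np.zeros_like(fhat)
    out[keep] = fhat[keep]
    return out

rng = np.random.default_rng(1)
N, s = 1024, 5
fhat_true = np.zeros(N, dtype=complex)
fhat_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s) + 1j
f = np.fft.ifft(fhat_true)                     # signal with exactly s-sparse spectrum

approx = best_s_term(f, s)
print(np.allclose(approx, fhat_true))          # -> True
```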

    Sparse Fast DCT for Vectors with One-block Support

    In this paper we present a new fast and deterministic algorithm for the inverse discrete cosine transform of type II that reconstructs the input vector $\mathbf{x}\in\mathbb{R}^{N}$, $N=2^{J-1}$, with short support of length $m$ from its discrete cosine transform $\mathbf{x}^{\widehat{\mathrm{II}}}=\mathbf{C}_N^{\mathrm{II}}\mathbf{x}$. The resulting algorithm has a runtime of $\mathcal{O}\left(m\log m\log \frac{2N}{m}\right)$ and requires $\mathcal{O}\left(m\log \frac{2N}{m}\right)$ samples of $\mathbf{x}^{\widehat{\mathrm{II}}}$. In order to derive this algorithm we also develop a new fast and deterministic inverse FFT algorithm that constructs the input vector $\mathbf{y}\in\mathbb{R}^{2N}$ with reflected block support of block length $m$ from $\widehat{\mathbf{y}}$ with the same runtime and sampling complexities as our DCT algorithm. Comment: 27 pages, 6 figures.
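    The setup can be made concrete as follows (a sketch with assumed sizes, not the paper's reconstruction algorithm): a vector with one-block support of length $m$ and its orthogonal type-II DCT $\mathbf{C}_N^{\mathrm{II}}\mathbf{x}$, built here as an explicit matrix for checking purposes only.

```python
import numpy as np

# Assumed example sizes: N = 2^(J-1) with J = 8, block length m = 6.
J, m = 8, 6
N = 2 ** (J - 1)                               # N = 128

# Orthogonal DCT-II matrix C_N^II: C[k, j] = sqrt(2/N) cos(pi k (2j+1) / (2N)),
# with the k = 0 row scaled by 1/sqrt(2) so that C is orthogonal.
k = np.arange(N)[:, None]
j = np.arange(N)[None, :]
C = np.sqrt(2.0 / N) * np.cos(k * (2 * j + 1) * np.pi / (2 * N))
C[0, :] /= np.sqrt(2.0)

x = np.zeros(N)
x[40:40 + m] = np.random.default_rng(2).standard_normal(m)  # one-block support
xhat = C @ x                                   # full DCT-II (O(N m) here)

# The paper reconstructs x from only O(m log(2N/m)) samples of xhat in
# O(m log m log(2N/m)) time; as a sanity check, full inversion recovers x.
print(np.allclose(C.T @ xhat, x))              # -> True
```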

    Sparse Harmonic Transforms: A New Class of Sublinear-time Algorithms for Learning Functions of Many Variables

    We develop fast and memory efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications including, e.g., various Uncertainty Quantification (UQ) problems involving the solution of parametric PDEs that are approximately sparse in Chebyshev or Legendre product bases. We expect that our results provide a starting point for a new line of research on sublinear-time solution techniques for UQ applications of the type above which will eventually be able to scale to significantly higher-dimensional problems than are currently computationally feasible. More concretely, let $B$ be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality $|B|=N$. We will develop methods that approximate any function $f$ that is sparse in the BOPB, that is, $f:\mathcal{D}\subset\mathbb{R}^D\rightarrow\mathbb{C}$ of the form $f(\mathbf{x})=\sum_{b\in S}c_b\cdot b(\mathbf{x})$ with $S\subset B$ of cardinality $|S|=s\ll N$. Our method has a runtime of just $(s\log N)^{O(1)}$, uses only $(s\log N)^{O(1)}$ function evaluations on a fixed and nonadaptive grid, and not more than $(s\log N)^{O(1)}$ bits of memory. For $s\ll N$, the runtime $(s\log N)^{O(1)}$ will be less than what is required to simply enumerate the elements of the basis $B$; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as "sublinear-time". This and the similarly reduced sample and memory requirements set our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit, which typically store and utilize full intermediate basis representations of size $\Omega(N)$.
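    A concrete instance of such a BOPB-sparse function, with an assumed basis (tensor products of Chebyshev polynomials $T_k$ on $[-1,1]^D$) and an assumed sparse support $S$ of size $s = 3$ out of $N = 6^4$ possible basis functions, can be written down directly:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Assumed example: D = 4 variables, per-variable degree <= 5, so N = 6^4 = 1296.
D, deg = 4, 5
S = [(0, 2, 0, 1), (3, 0, 0, 0), (1, 1, 4, 0)]  # sparse support S (assumed)
c = {S[0]: 2.0, S[1]: -1.0, S[2]: 0.5}          # coefficients c_b (assumed)

def T(k, t):
    """Evaluate the Chebyshev polynomial T_k at t."""
    return cheb.chebval(t, [0.0] * k + [1.0])

def f(x):
    """f(x) = sum_{b in S} c_b * prod_d T_{b_d}(x_d)."""
    return sum(cb * np.prod([T(k, x[d]) for d, k in enumerate(b)])
               for b, cb in c.items())

x = np.array([0.3, -0.7, 0.1, 0.9])
print(round(float(f(x)), 6))                    # -> 0.659316
```

    The paper's methods recover the support $S$ and coefficients $c_b$ of such a function from $(s\log N)^{O(1)}$ evaluations, without ever enumerating the full basis.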

    Inverting Spectrogram Measurements via Aliased Wigner Distribution Deconvolution and Angular Synchronization

    We propose a two-step approach for reconstructing a signal $\mathbf{x}\in\mathbb{C}^d$ from subsampled short-time Fourier transform magnitude (spectrogram) measurements: First, we use an aliased Wigner distribution deconvolution approach to solve for a portion of the rank-one matrix $\widehat{\mathbf{x}}\widehat{\mathbf{x}}^{*}$. Second, we use angular synchronization to solve for $\widehat{\mathbf{x}}$ (and then for $\mathbf{x}$ by Fourier inversion). Using this method, we produce two new efficient phase retrieval algorithms that perform well numerically in comparison to standard approaches, and also prove two theorems: one which guarantees the recovery of discrete, bandlimited signals $\mathbf{x}\in\mathbb{C}^{d}$ from fewer than $d$ STFT magnitude measurements, and another which establishes a new class of deterministic coded diffraction pattern measurements which are guaranteed to allow efficient and noise robust recovery.
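    The second step can be sketched in isolation. A minimal, simplified version of angular synchronization (assumed here with the full rank-one matrix known; the paper recovers only a banded portion of it from aliased Wigner distribution deconvolution) estimates $\widehat{\mathbf{x}}$ up to a global phase from the leading eigenvector:

```python
import numpy as np

d = 16
rng = np.random.default_rng(3)
xhat = rng.standard_normal(d) + 1j * rng.standard_normal(d)

M = np.outer(xhat, xhat.conj())                # rank-one matrix xhat xhat^*
vals, vecs = np.linalg.eigh(M)                 # Hermitian eigendecomposition
v = vecs[:, -1] * np.sqrt(vals[-1])            # leading eigenpair -> estimate of xhat

# The global phase is unrecoverable from M; align it before comparing.
phase = (v.conj() @ xhat) / abs(v.conj() @ xhat)
print(np.allclose(v * phase, xhat))            # -> True
```

    In the subsampled setting the eigenvector computation is replaced by synchronization over only the measured entries of $\widehat{\mathbf{x}}\widehat{\mathbf{x}}^{*}$.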

    (Nearly) Sample-Optimal Sparse Fourier Transform in Any Dimension; RIPless and Filterless

    In this paper, we consider the extensively studied problem of computing a $k$-sparse approximation to the $d$-dimensional Fourier transform of a length-$n$ signal. Our algorithm uses $O(k \log k \log n)$ samples, is dimension-free, operates for any universe size, and achieves the strongest $\ell_\infty/\ell_2$ guarantee, while running in a time comparable to the Fast Fourier Transform. In contrast to previous algorithms, which proceed either via the Restricted Isometry Property or via filter functions, our approach offers a fresh perspective on the sparse Fourier transform problem.

    Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares

    In this paper new general modewise Johnson-Lindenstrauss (JL) subspace embeddings are proposed that are both considerably faster to generate and easier to store than traditional JL embeddings when working with extremely large vectors and/or tensors. Corresponding embedding results are then proven for two different types of low-dimensional (tensor) subspaces. The first of these new subspace embedding results produces improved space complexity bounds for embeddings of rank-$r$ tensors whose CP decompositions are contained in the span of a fixed (but unknown) set of $r$ rank-one basis tensors. In the traditional vector setting this first result yields new and very general near-optimal oblivious subspace embedding constructions that require fewer random bits to generate than standard JL embeddings when embedding subspaces of $\mathbb{C}^N$ spanned by basis vectors with special Kronecker structure. The second result proven herein provides new fast JL embeddings of arbitrary $r$-dimensional subspaces $\mathcal{S} \subset \mathbb{C}^N$ which also require fewer random bits (and so are easier to store, i.e., require less space) than standard fast JL embedding methods in order to achieve small $\epsilon$-distortions. These new oblivious subspace embedding results work by (i) effectively folding any given vector in $\mathcal{S}$ into a (not necessarily low-rank) tensor, and then (ii) embedding the resulting tensor into $\mathbb{C}^m$ for $m \leq C r \log^c(N) / \epsilon^2$. Applications related to compression and fast compressed least squares solution methods are also considered, including those used for fitting low-rank CP decompositions, and the proposed JL embedding results are shown to work well numerically in both settings.
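    Steps (i) and (ii) can be sketched for a single vector (an illustration with assumed shapes and plain Gaussian maps, not the paper's constructions): fold a length-$N$ vector into a $\sqrt{N} \times \sqrt{N}$ matrix, then compress each mode with its own small random matrix. Storing two $m \times \sqrt{N}$ maps is far cheaper than one dense $m^2 \times N$ JL matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 4096, 16                                # N = 64 * 64; per-mode target dim m
side = int(np.sqrt(N))

v = rng.standard_normal(N)
T = v.reshape(side, side)                      # step (i): fold the vector into a tensor

A1 = rng.standard_normal((m, side)) / np.sqrt(m)   # mode-1 random map
A2 = rng.standard_normal((m, side)) / np.sqrt(m)   # mode-2 random map
E = A1 @ T @ A2.T                              # step (ii): modewise compression

# Random-map storage: 2 * m * side = 2048 entries, versus m^2 * N = 1,048,576
# entries for an equivalent dense JL matrix applied to the unfolded vector.
print(E.shape)                                 # -> (16, 16)
```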

    Sparse Harmonic Transforms II: Best $s$-Term Approximation Guarantees for Bounded Orthonormal Product Bases in Sublinear-Time

    In this paper, we develop a sublinear-time compressive sensing algorithm for approximating functions of many variables which are compressible in a given Bounded Orthonormal Product Basis (BOPB). The resulting algorithm is shown both to have an associated best $s$-term recovery guarantee in the given BOPB, and also to work well numerically for solving sparse approximation problems involving functions contained in the span of fairly general sets of as many as $\sim 10^{230}$ orthonormal basis functions. All code is made publicly available. As part of the proof of the main recovery guarantee, new variants of the well-known CoSaMP algorithm are proposed which can utilize any sufficiently accurate support identification procedure satisfying a Support Identification Property (SIP) in order to obtain strong sparse approximation guarantees. These new CoSaMP variants are then proven to have both runtime and recovery error behavior which are largely determined by the associated runtime and error behavior of the chosen support identification method. The main theoretical results of the paper are then established by developing a sublinear-time support identification algorithm for general BOPB sets which is robust to arbitrary additive errors. Using this new support identification method to create a new CoSaMP variant then yields a new robust sublinear-time compressive sensing algorithm for BOPB-compressible functions of many variables.
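    The structure of such a CoSaMP variant can be sketched as follows (a textbook CoSaMP, assumed for illustration, not the paper's SIP-based variant): the support identification step is a pluggable callable, here the usual proxy of the $2s$ largest entries of $|A^* r|$, which the paper's analysis allows to be replaced by any sufficiently accurate identification procedure.

```python
import numpy as np

def cosamp(A, y, s, identify, iters=20):
    """CoSaMP with a pluggable support identification step `identify`."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x                          # current residual
        omega = identify(A, r, 2 * s)          # support identification step
        T = np.union1d(omega, np.flatnonzero(x))
        z, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # least squares on T
        x = np.zeros(n)
        keep = np.argsort(np.abs(z))[-s:]      # prune back to the s largest terms
        x[T[keep]] = z[keep]
    return x

def proxy_identify(A, r, k):
    """Default identifier: the k largest entries of |A^* r|."""
    return np.argsort(np.abs(A.T @ r))[-k:]

rng = np.random.default_rng(5)
m, n, s = 80, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_rec = cosamp(A, A @ x_true, s, proxy_identify)
print(float(np.linalg.norm(x_rec - x_true)))   # small recovery error
```

    Swapping `proxy_identify` for a sublinear-time SIP-satisfying identifier is what yields the paper's overall sublinear-time algorithm.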