
    CoSaMP: Iterative signal recovery from incomplete and inaccurate samples

    Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For many cases of interest, the running time is just O(N log^2 N), where N is the length of the signal. Comment: 30 pages. Revised. Presented at Information Theory and Applications, 31 January 2008, San Diego.
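
    The iteration the abstract summarizes (signal proxy, support identification, least-squares estimate, pruning) fits in a short sketch. The numpy version below is a minimal illustration only: the Gaussian sampling matrix, sparsity level, and iteration cap are arbitrary choices, and the explicit lstsq call stands in for the iterative least-squares solve that the paper's running-time bound relies on.

```python
import numpy as np

def cosamp(Phi, y, k, n_iter=20):
    """Sketch of the CoSaMP loop: signal proxy, support merge,
    least squares on the merged support, pruning back to k terms."""
    m, n = Phi.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        proxy = Phi.conj().T @ residual               # signal proxy Phi* r
        omega = np.argsort(np.abs(proxy))[-2 * k:]    # 2k largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        # least-squares estimate restricted to the merged support
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        # prune to the k largest entries and update the residual
        x = np.zeros(n)
        top_k = np.argsort(np.abs(b))[-k:]
        x[top_k] = b[top_k]
        residual = y - Phi @ x
        if np.linalg.norm(residual) <= 1e-10 * np.linalg.norm(y):
            break
    return x

# toy usage with an arbitrary Gaussian sampling matrix (illustrative, not from the paper)
rng = np.random.default_rng(0)
n, m, k = 256, 80, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = cosamp(Phi, Phi @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```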

    Sublinear-Time Algorithms for Compressive Phase Retrieval

    In the compressive phase retrieval problem (also known as phaseless compressed sensing, or compressed sensing from intensity-only measurements), the goal is to reconstruct a sparse or approximately k-sparse vector x ∈ R^n given access to y = |Φx|, where |v| denotes the vector obtained by taking the absolute value of v ∈ R^n coordinate-wise. In this paper we present sublinear-time algorithms for different variants of the compressive phase retrieval problem, which are akin to the variants considered for the classical compressive sensing problem in theoretical computer science. Our algorithms use purely combinatorial techniques and a near-optimal number of measurements. Comment: The ℓ_2/ℓ_2 algorithm was substituted by a modification of the ℓ_∞/ℓ_2 algorithm, which strictly subsumes it.
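
    A few lines suffice to state the measurement model. The sketch below uses an illustrative dense Gaussian Φ rather than the structured measurements behind the paper's combinatorial algorithms; it only shows that y = |Φx| discards sign information, which is what separates this setting from standard compressive sensing.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 60, 5

# a k-sparse signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# illustrative dense Gaussian measurement matrix (not the paper's construction)
Phi = rng.standard_normal((m, n))

y = np.abs(Phi @ x)          # intensity-only measurements
y_neg = np.abs(Phi @ (-x))   # the global sign of x is not observable

print(np.allclose(y, y_neg))  # True: x and -x are indistinguishable from y
```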

    Deterministic Sampling of Sparse Trigonometric Polynomials

    One can recover sparse multivariate trigonometric polynomials from few randomly taken samples with high probability (as shown by Kunis and Rauhut). We give a deterministic sampling of multivariate trigonometric polynomials inspired by Weil's exponential sum. Our sampling can produce a deterministic matrix satisfying the statistical restricted isometry property, and also nearly optimal Grassmannian frames. We show that one can exactly reconstruct every M-sparse multivariate trigonometric polynomial with fixed degree and of length D from the deterministic sampling X, using orthogonal matching pursuit, provided #X is a prime number greater than (M log D)^2. This result is almost optimal up to the (log D)^2 factor. The simulations show that the deterministic sampling can offer reconstruction performance similar to the random sampling. Comment: 9 pages.
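
    Since the abstract names orthogonal matching pursuit as the recovery routine, here is a univariate toy instance: a minimal OMP sketch recovering an M-sparse trigonometric polynomial of length D from samples at a prime number of points. The sampling points are drawn at random purely for illustration; the paper's contribution is a deterministic, Weil-sum-based choice.

```python
import numpy as np

def omp(A, y, sparsity):
    """Plain orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then re-fit by least squares."""
    support, residual = [], y.copy()
    for _ in range(sparsity):
        correlations = np.abs(A.conj().T @ residual)
        correlations[support] = 0                    # never pick a column twice
        support.append(int(np.argmax(correlations)))
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ coef
    c = np.zeros(A.shape[1], dtype=complex)
    c[support] = coef
    return c

rng = np.random.default_rng(2)
D, M = 512, 6            # length and sparsity of the polynomial (illustrative values)
num_samples = 257        # an illustrative prime number of sampling points

# sampling points in [0, 1); random here, whereas the paper uses a Weil-sum construction
t = rng.random(num_samples)
A = np.exp(2j * np.pi * np.outer(t, np.arange(D))) / np.sqrt(num_samples)

c_true = np.zeros(D, dtype=complex)
idx = rng.choice(D, M, replace=False)
c_true[idx] = rng.standard_normal(M) + 1j * rng.standard_normal(M)

c_hat = omp(A, A @ c_true, M)
print("recovery error:", np.linalg.norm(c_hat - c_true))
```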

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
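
    A standard textbook example (not specific to this survey) makes "sub-linear time error-correcting algorithms" concrete: the Hadamard code stores every parity of the message, and any single message bit can be recovered from a corrupted codeword with a constant number of queries by XORing the answers at positions r and r ⊕ e_i.

```python
import numpy as np

rng = np.random.default_rng(3)

def hadamard_encode(x):
    """Hadamard code: codeword position a holds the parity <x, bits(a)> mod 2."""
    n = len(x)
    a = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1   # all a in {0,1}^n
    return (a @ x) % 2

def local_decode_bit(codeword, n, i, rng, repeats=15):
    """2-query local decoder for x_i, repeated and majority-voted:
    each trial queries positions r and r xor e_i and XORs the two bits."""
    votes = 0
    for _ in range(repeats):
        r = int(rng.integers(0, 2 ** n))
        votes += int(codeword[r]) ^ int(codeword[r ^ (1 << i)])
    return int(votes > repeats // 2)

n = 10
x = rng.integers(0, 2, n)
cw = hadamard_encode(x)

# corrupt about 5% of the codeword positions
corrupt = rng.choice(len(cw), size=len(cw) // 20, replace=False)
cw[corrupt] ^= 1

# each bit uses a constant number of queries into the (exponentially long) codeword;
# with high probability the two lists below agree
decoded = [local_decode_bit(cw, n, i, rng) for i in range(n)]
print(x.tolist(), decoded)
```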

    Deterministic Sparse Fourier Transform with an ℓ_∞ Guarantee

    In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of x ∈ C^n and design a recovery algorithm whose output approximates x̂, the Discrete Fourier Transform (DFT) of x. The randomized case is well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains O(k^2 log^{-1} k · log^{5.5} n) samples and a similar runtime with the ℓ_2/ℓ_1 guarantee. We focus on the stronger ℓ_∞/ℓ_1 guarantee and the closely related problem of incoherent matrices. Our contributions are as follows.
    1. We find a deterministic collection of O(k^2 log n) samples for ℓ_∞/ℓ_1 recovery in time O(nk log^2 n), and a deterministic collection of O(k^2 log^2 n) samples for ℓ_∞/ℓ_1 sparse recovery in time O(k^2 log^3 n).
    2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of the incoherent matrix.
    Our algorithms are nearly sample-optimal, since a lower bound of Ω(k^2 + k log n) is known even when the sensing matrix can be arbitrarily designed. A similar lower bound of Ω(k^2 log n / log k) is known for incoherent matrices. Comment: ICALP 2020; presentation improved according to reviewers' comments.
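
    The second contribution concerns incoherent matrices built from a few rows of the DFT matrix. The sketch below only measures the relevant quantity, the coherence (the largest inner product between distinct normalized columns), for a row-sampled DFT submatrix; the rows are chosen at random here purely for illustration, whereas the point of the paper is a deterministic choice achieving comparable coherence.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 1024, 128   # signal length and number of sampled rows (illustrative values)

# row-sampled DFT matrix: keep m rows of the n x n DFT matrix, with unit-norm columns
rows = rng.choice(n, size=m, replace=False)   # random rows, for illustration only
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(m)

# coherence: maximum |<F_i, F_j>| over distinct columns i != j
G = np.abs(F.conj().T @ F)
np.fill_diagonal(G, 0)
mu = G.max()

# a matrix with coherence mu supports recovery of O(1/mu)-sparse vectors;
# the paper constructs row subsets achieving small mu deterministically
print(f"measured coherence: {mu:.3f}")
```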