    One-Bit ExpanderSketch for One-Bit Compressed Sensing

    Is it possible to obliviously construct a set of hyperplanes H such that you can approximate a unit vector x when you are given the side on which the vector lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves almost optimal measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
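    The measurement model behind this problem can be sketched in a few lines: each hyperplane h contributes one bit, namely the side of h on which x lies. The toy below uses random Gaussian hyperplanes purely for illustration (this is not the paper's construction, and the dimensions are arbitrary); it shows why one-bit measurements can only recover the direction of x, since positive rescaling leaves every bit unchanged.

```python
import random

def one_bit_measure(H, x):
    """Return, for each hyperplane h in H, the side on which x lies."""
    return [1 if sum(hi * xi for hi, xi in zip(h, x)) >= 0 else -1 for h in H]

random.seed(0)
n, m = 8, 32                        # ambient dimension, number of hyperplanes
H = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x = [0.0] * n
x[2], x[5] = 1.0, -0.5              # a 2-sparse signal

bits = one_bit_measure(H, x)
scaled_bits = one_bit_measure(H, [3.0 * xi for xi in x])
assert bits == scaled_bits          # sign measurements cannot see the norm of x
```

    This is why the problem is stated for a unit vector x: the bits identify x only up to a positive scalar.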

    Sublinear-Time Algorithms for Compressive Phase Retrieval

    In the compressive phase retrieval problem (also called phaseless compressed sensing, or compressed sensing from intensity-only measurements), the goal is to reconstruct a sparse or approximately k-sparse vector x in R^n given access to y = |Phi x|, where |v| denotes the vector obtained by taking the absolute value of v in R^n coordinate-wise. In this paper we present sublinear-time algorithms for different variants of the compressive phase retrieval problem, akin to the variants considered for the classical compressive sensing problem in theoretical computer science. Our algorithms use purely combinatorial techniques and a near-optimal number of measurements. Comment: The ell_2/ell_2 algorithm was substituted by a modification of the ell_infty/ell_2 algorithm which strictly subsumes it.
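    The measurement model y = |Phi x| can be illustrated directly. The sketch below uses a random Gaussian Phi as a stand-in (the paper's schemes are combinatorial, not Gaussian); it demonstrates the defining difficulty of phase retrieval over real vectors: x and -x produce identical measurements, so recovery is only possible up to a global sign.

```python
import random

def phaseless_measure(Phi, x):
    """y = |Phi x|: magnitudes only, signs (the real-valued 'phases') are lost."""
    return [abs(sum(p * xi for p, xi in zip(row, x))) for row in Phi]

random.seed(1)
n, m = 6, 20
Phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x = [0.0, 2.0, 0.0, -1.0, 0.0, 0.0]   # a 2-sparse signal

y_pos = phaseless_measure(Phi, x)
y_neg = phaseless_measure(Phi, [-xi for xi in x])
assert y_pos == y_neg                  # x and -x are indistinguishable from |Phi x|
```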

    On Fast Decoding of High-Dimensional Signals from One-Bit Measurements

    In the problem of one-bit compressed sensing, the goal is to find a delta-close estimate of a k-sparse vector x in R^n given the signs of the entries of y = Phi x, where Phi is called the measurement matrix. For the one-bit compressed sensing problem, previous work [Plan, 2013][Gopi, 2013] achieved Theta(delta^{-2} k log(n/k)) and O~(delta^{-1} k log(n/k)) measurements, respectively, but the decoding time was Omega(nk log(n/k)). In this paper, using tools and techniques developed in the context of two-stage group testing and streaming algorithms, we contribute towards the direction of sublinear decoding time. We give a variety of schemes for the different versions of one-bit compressed sensing, such as the for-each and for-all versions, and for support recovery; all these have at most a log k overhead in the number of measurements and poly(k, log n) decoding time, which is an exponential improvement over previous work in terms of the dependence on n.
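    The flavor of sublinear decoding via group-testing ideas can be shown on a toy case. The sketch below is not the paper's one-bit scheme: it uses plain linear "bit-test" measurements for a 1-sparse nonnegative signal, a classic trick from the combinatorial sparse recovery toolkit. The point is that O(log n) structured measurements let the decoder read off the support in poly(log n) time, with no scan over the n coordinates.

```python
def encode(x, n):
    """O(log n) bit-test measurements for a 1-sparse nonnegative x of length n."""
    b = n.bit_length()
    total = sum(x)  # one measurement: the value of the single nonzero
    # measurement j sums the coordinates whose index has bit j set
    bits = [sum(x[i] for i in range(n) if (i >> j) & 1) for j in range(b)]
    return total, bits

def decode(total, bits):
    """Recover (index, value) in time poly(log n): assemble the index bitwise."""
    idx = sum(1 << j for j, bj in enumerate(bits) if bj > 0)
    return idx, total

n = 1 << 10
x = [0.0] * n
x[437] = 5.0
assert decode(*encode(x, n)) == (437, 5.0)
```

    Two-stage group testing strengthens this idea to k-sparse signals; the paper adapts it to sign-only measurements.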

    Deterministic Sparse Fourier Transform with an ell_infty Guarantee

    In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of x in C^n and design a recovery algorithm such that the output of the algorithm approximates x_hat, the Discrete Fourier Transform (DFT) of x. The randomized case is well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains O(k^2 log^{-1} k * log^{5.5} n) samples and a similar runtime with the ell_2/ell_1 guarantee. We focus on the stronger ell_infty/ell_1 guarantee and the closely related problem of incoherent matrices. We list our contributions as follows. 1. We find a deterministic collection of O(k^2 log n) samples for ell_infty/ell_1 recovery in time O(nk log^2 n), and a deterministic collection of O(k^2 log^2 n) samples for ell_infty/ell_1 sparse recovery in time O(k^2 log^3 n). 2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of the incoherent matrix. Our algorithms are nearly sample-optimal, since a lower bound of Omega(k^2 + k log n) is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of Omega(k^2 log n / log k) is known for incoherent matrices. Comment: ICALP 2020; presentation improved according to reviewers' comments.
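    The notion of an incoherent matrix is concrete enough to compute: normalize the columns and take the largest pairwise inner product in magnitude. The sketch below forms a row-sampled submatrix of the n-by-n DFT matrix (the row sample here is arbitrary and purely illustrative, not the paper's deterministic construction) and evaluates its coherence.

```python
import cmath

def coherence(A):
    """Max |<a_i, a_j>| over distinct unit-normalized columns of A."""
    m, n = len(A), len(A[0])
    cols = [[A[r][c] for r in range(m)] for c in range(n)]
    cols = [[z / sum(abs(w) ** 2 for w in col) ** 0.5 for z in col] for col in cols]
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            ip = abs(sum(a * b.conjugate() for a, b in zip(cols[i], cols[j])))
            best = max(best, ip)
    return best

n = 16
rows = [0, 1, 3, 7, 12]            # an arbitrary row sample, for illustration only
A = [[cmath.exp(-2j * cmath.pi * r * c / n) for c in range(n)] for r in rows]
mu = coherence(A)
assert 0.0 <= mu < 1.0             # distinct columns are never fully aligned here
```

    A good deterministic construction is one where the sampled rows provably drive mu down to roughly 1/k with as few rows as possible.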

    For-all Sparse Recovery in Near-optimal Time

    An approximate sparse recovery system in the ell_1 norm consists of parameters k, epsilon, N, an m-by-N measurement matrix Phi, and a recovery algorithm R. Given a vector x, the system approximates x by x_hat = R(Phi x), which must satisfy ||x_hat - x||_1 <= (1 + epsilon) ||x - x_k||_1. We consider the 'for all' model, in which a single matrix Phi, possibly 'constructed' non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss (SODA'12) uses O(epsilon^{-3} k log(N/k)) measurements and runs in time O(k^{1-alpha} N^alpha) for any constant alpha > 0. In this paper, we improve the number of measurements to O(epsilon^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+beta} poly(log N, 1/epsilon)), with a modest restriction that epsilon <= (log k / log N)^gamma, for any constants beta, gamma > 0. When k <= log^c N for some c > 0, the runtime is reduced to O(k poly(log N, 1/epsilon)). With no restrictions on epsilon, we have an approximate recovery system with m = O(k/epsilon * log(N/k) * ((log N / log k)^gamma + 1/epsilon)) measurements.
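    The ell_1/ell_1 guarantee above compares the recovery error against the tail ||x - x_k||_1, the error of the best k-term approximation. The sketch below (toy values, not tied to any particular recovery algorithm) computes that tail and checks the guarantee for the trivial case where the output happens to be x_k itself, which meets it even with epsilon = 0.

```python
def topk_approx(x, k):
    """Best k-term approximation x_k: keep the k largest-magnitude entries."""
    keep = set(sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

def l1(v):
    return sum(abs(t) for t in v)

x = [10.0, -0.1, 7.0, 0.3, 0.2, -6.0, 0.05, 0.0]
k, eps = 3, 0.5
xk = topk_approx(x, k)
tail = l1([a - b for a, b in zip(x, xk)])    # ||x - x_k||_1, the noise level
# Any valid output x_hat must satisfy ||x_hat - x||_1 <= (1 + eps) * tail.
x_hat = xk                                   # x_k itself satisfies it with eps = 0
assert l1([a - b for a, b in zip(x_hat, x)]) <= (1 + eps) * tail
```

    The hard part, of course, is producing such an x_hat from the m-dimensional sketch Phi x rather than from x itself.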