64 research outputs found

    Lower Bounds for Sparse Recovery

    We consider the following k-sparse recovery problem: design an m x n matrix A, such that for any signal x, given Ax we can efficiently recover x' satisfying ||x - x'||_1 <= C min_{k-sparse x''} ||x - x''||_1. It is known that there exist matrices A with this property that have only O(k log(n/k)) rows. In this paper we show that this bound is tight. Our bound holds even for the more general /randomized/ version of the problem, where A is a random variable and the recovery algorithm is required to work for any fixed x with constant probability (over A). Comment: 11 pages. Appeared at SODA 2010
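
    For concreteness, the guarantee above can be exercised end-to-end with a random Gaussian matrix and l_1 minimization (basis pursuit) cast as a linear program. The following is a minimal Python sketch under assumed parameters and an assumed scipy-based LP solver; the paper's lower bound concerns the number of rows m, not this particular construction.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n, k = 200, 5
        m = int(3 * k * np.log(n / k))                 # m = O(k log(n/k)) rows

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
        b = A @ x

        # Basis pursuit: min ||x'||_1 s.t. A x' = b, as an LP over (x', t) with |x'| <= t.
        c = np.concatenate([np.zeros(n), np.ones(n)])
        I = np.eye(n)
        A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
        A_eq = np.hstack([A, np.zeros((m, n))])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        x_rec = res.x[:n]
        print("l1 error:", np.linalg.norm(x_rec - x, 1))  # ~0 on exactly sparse inputs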

    Lower bounds for sparse recovery

    We consider the following k-sparse recovery problem: design an m x n matrix A, such that for any signal x, given Ax we can efficiently recover x̂ satisfying ||x - x̂||_1 <= C min_{k-sparse x'} ||x - x'||_1. It is known that there exist matrices A with this property that have only O(k log(n/k)) rows. In this paper we show that this bound is tight. Our bound holds even for the more general randomized version of the problem, where A is a random variable, and the recovery algorithm is required to work for any fixed x with constant probability (over A). Funding: David & Lucile Packard Foundation; Danish National Research Foundation; Danish National Research Foundation (MADALGO (Center for Massive Data Algorithmics)); National Science Foundation (U.S.) (grant CCF-0728645); Cisco Community Fellowship Program.

    Algorithms and lower bounds for sparse recovery

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 69-71). We consider the following k-sparse recovery problem: design a distribution over m x n matrices A such that, for any signal x, given Ax we can with high probability efficiently recover x' satisfying ||x - x'||_1 <= C min_{k-sparse x''} ||x - x''||_1. It is known that there exist such distributions with m = O(k log(n/k)) rows; in this thesis, we show that this bound is tight. We also introduce the set query algorithm, a primitive useful for solving special cases of sparse recovery using fewer than Θ(k log(n/k)) rows. The set query algorithm estimates the values of a vector x ∈ R^n over a support S of size k from a randomized sparse binary linear sketch Ax of size O(k). Given Ax and S, we can recover x' with ||x' - x_S||_2 <= θ ||x - x_S||_2 with probability at least 1 - k^{-Ω(1)}. The recovery takes O(k) time. While interesting in its own right, this primitive also has a number of applications. For example, we can:
    * Improve the sparse recovery of Zipfian distributions with O(k log n) measurements from a 1 + ε approximation to a 1 + o(1) approximation, giving the first such approximation when k <= O(n^{1-ε}).
    * Recover block-sparse vectors with O(k) space and a 1 + ε approximation; previous algorithms required either ω(k) space or ω(1) approximation.
    by Eric Price. M.Eng.
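
    To convey the flavor of the set query primitive, here is a hedged Python sketch that replaces the thesis's sparse binary sketch and O(k)-time recovery loop with a single signed hash into O(k) buckets (a count-sketch-style simplification; the constants and names are illustrative, not the thesis's construction):

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 10_000, 50
        B = 8 * k                          # O(k) buckets
        bucket = rng.integers(0, B, n)     # random hash h: [n] -> [B]
        sign = rng.choice([-1.0, 1.0], n)  # random signs

        x = 0.01 * rng.standard_normal(n)        # small tail everywhere
        S = rng.choice(n, k, replace=False)
        x[S] += rng.standard_normal(k)           # large values on the support S

        # Linear sketch Ax of size O(k): one signed sum per bucket.
        y = np.zeros(B)
        np.add.at(y, bucket, sign * x)

        # Set query: given the sketch y and the support S, read each x_i back
        # out of its bucket (error comes from collisions and the tail mass).
        x_hat = sign[S] * y[bucket[S]]
        print("relative l2 error on S:",
              np.linalg.norm(x_hat - x[S]) / np.linalg.norm(x[S]))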

    An Improved Lower Bound for Sparse Reconstruction from Subsampled Hadamard Matrices

    We give a short argument that yields a new lower bound on the number of subsampled rows from a bounded, orthonormal matrix necessary to form a matrix with the restricted isometry property. We show that a matrix formed by uniformly subsampling rows of an N x N Hadamard matrix contains a K-sparse vector in the kernel unless the number of subsampled rows is Ω(K log K log(N/K)); our lower bound applies whenever min(K, N/K) > log^C N. Containing a sparse vector in the kernel precludes not only the restricted isometry property, but more generally the application of those matrices for uniform sparse recovery. Comment: Improved exposition and added an author
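
    Restated in symbols (a paraphrase of the abstract's claim, with A the matrix of m uniformly subsampled Hadamard rows and C an absolute constant):

        \[
          \min(K, N/K) > \log^{C} N
          \quad\text{and}\quad
          m = o\bigl(K \log K \,\log(N/K)\bigr)
          \;\Longrightarrow\;
          \exists\, v \neq 0 \ \text{with}\ \|v\|_0 \le K \ \text{and}\ Av = 0,
        \]

    so such an A fails the RIP of order K and cannot support uniform K-sparse recovery.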

    Recovering Jointly Sparse Signals via Joint Basis Pursuit

    This work considers recovery of signals that are sparse over two bases. For instance, a signal might be sparse in both time and frequency, or a matrix can be low rank and sparse simultaneously. To facilitate recovery, we consider minimizing the sum of the l_1-norms that correspond to each basis, which is a tractable convex approach. We find novel optimality conditions which indicate a gain over traditional approaches where l_1 minimization is done over only one basis. Next, we analyze these optimality conditions for the particular case of time-frequency bases. Denoting sparsity in the first and second bases by k_1, k_2 respectively, we show that, for a general class of signals, this approach requires as few as O(max{k_1, k_2} log log n) measurements for successful recovery, hence overcoming the classical requirement of Θ(min{k_1, k_2} log(n / min{k_1, k_2})) for l_1 minimization when k_1 ≈ k_2. Extensive simulations show that our analysis is approximately tight. Comment: 8 pages, 1 figure, submitted to ISIT 2012
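
    A hedged Python sketch of the approach, using cvxpy (an assumed dependency) and a real Walsh-Hadamard basis as a stand-in for the paper's time-frequency setting; the comb signal below is 8-sparse in both the time and Walsh domains, and the measurement count is an illustrative guess rather than the paper's constant:

        import numpy as np
        import cvxpy as cp
        from scipy.linalg import hadamard

        rng = np.random.default_rng(0)
        n = 64
        Psi = hadamard(n) / np.sqrt(n)   # orthonormal Walsh-Hadamard basis

        x = np.zeros(n)
        x[::8] = 1.0                     # 8-sparse in time AND in the Psi domain

        m = 28                           # ~ max{k_1, k_2} log log n, up to constants
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x

        # Joint basis pursuit: minimize the sum of the two l_1 norms.
        z = cp.Variable(n)
        prob = cp.Problem(cp.Minimize(cp.norm1(z) + cp.norm1(Psi @ z)),
                          [A @ z == y])
        prob.solve()
        print("recovery error:", np.linalg.norm(z.value - x))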

    On Model-Based RIP-1 Matrices

    The Restricted Isometry Property (RIP) is a fundamental property of a matrix enabling sparse recovery. Informally, an m x n matrix satisfies RIP of order k in the l_p norm if ||Ax||_p ≈ ||x||_p for any vector x that is k-sparse, i.e., that has at most k non-zeros. The minimal number of rows m necessary for the property to hold has been extensively investigated, and tight bounds are known. Motivated by signal processing models, a recent work of Baraniuk et al. has generalized this notion to the case where the support of x must belong to a given model, i.e., a given family of supports. This more general notion is much less understood, especially for norms other than l_2. In this paper we present tight bounds for the model-based RIP property in the l_1 norm. Our bounds hold for the two most frequently investigated models: tree-sparsity and block-sparsity. We also show implications of our results for sparse recovery problems. Comment: Version 3 corrects a few errors present in the earlier version. In particular, it states and proves correct upper and lower bounds for the number of rows in RIP-1 matrices for the block-sparse model. The bounds are of the form k log_b n, not k log_k n as stated in the earlier version
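
    In symbols, the model-based RIP-1 property discussed above reads (a standard formulation, quantified over the model M; taking M to be all supports of size at most k recovers the plain order-k RIP-1):

        \[
          (1-\delta)\,\|x\|_1 \;\le\; \|Ax\|_1 \;\le\; (1+\delta)\,\|x\|_1
          \qquad \text{for all } x \text{ with } \operatorname{supp}(x) \in \mathcal{M}.
        \]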

    Approximate Sparse Recovery: Optimizing Time and Measurements

    An approximate sparse recovery system consists of parameters k, N, an m-by-N measurement matrix Φ, and a decoding algorithm D. Given a vector x, the system approximates x by x̂ = D(Φx), which must satisfy ||x̂ - x||_2 <= C ||x - x_k||_2, where x_k denotes the optimal k-term approximation to x. For each vector x, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm D. In this paper, we give a system with m = O(k log(N/k)) measurements, matching a lower bound up to a constant factor, and decoding time O(k log^c N), matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(N) factors.
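
    The benchmark x_k in the guarantee above is simply x restricted to its k largest-magnitude entries; a small Python illustration of this definition (the helper name is hypothetical):

        import numpy as np

        def best_k_term(x: np.ndarray, k: int) -> np.ndarray:
            """Hard-threshold x to its k largest-magnitude entries
            (the optimal k-term approximation x_k)."""
            xk = np.zeros_like(x)
            idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
            xk[idx] = x[idx]
            return xk

        x = np.array([5.0, -0.1, 3.0, 0.2, -4.0, 0.05])
        xk = best_k_term(x, k=3)
        print(xk)                          # [ 5.  0.  3.  0. -4.  0.]
        print(np.linalg.norm(x - xk))      # tail error ||x - x_k||_2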