1,238 research outputs found

    Highly Robust Error Correction by Convex Programming

    This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when, in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors). We show that if one encodes the information as Ax, where A ∈ ℝ^(m×n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or, equivalently, as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
    Comment: 23 pages, 2 figures
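
    The decoding step can be prototyped in a few lines with an off-the-shelf convex solver. The sketch below is a minimal simulation of the setup just described, with hypothetical sizes and noise levels; cvxpy's generic solver stands in for the paper's purpose-built programs, and the ℓ₁ decoder shown is one of the two schemes (the linear-programming one).

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, m = 64, 256                        # block length n, codeword length m >= n
        A = rng.standard_normal((m, n))       # a (hypothetical) random coding matrix

        x = rng.standard_normal(n)            # the block of n pieces of information
        e = np.zeros(m)
        bad = rng.choice(m, size=m // 10, replace=False)
        e[bad] = 10 * rng.standard_normal(bad.size)   # gross errors on 10% of the sites
        z = 0.01 * rng.standard_normal(m)             # small errors on every entry
        y = A @ x + e + z                             # corrupted codeword

        g = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm1(y - A @ g))).solve()
        print(np.linalg.norm(g.value - x))    # error near the small-noise floor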

    Decoding by Linear Programming

    This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ ℝ^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ₁-minimization problem (‖x‖_{ℓ₁} := Σ_i |x_i|)

        min_{g ∈ ℝ^n} ‖y − Ag‖_{ℓ₁}

    provided that the support of the vector of errors is not too large: ‖e‖_{ℓ₀} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
    Comment: 22 pages, 4 figures, submitted
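
    The "recast as a linear program" remark can be made concrete: minimizing ‖y − Ag‖_{ℓ₁} is equivalent to an LP over (g, t) with constraints −t ≤ y − Ag ≤ t. A minimal sketch using scipy (the function name and variable layout are illustrative):

        import numpy as np
        from scipy.optimize import linprog

        def l1_decode(A, y):
            """Solve min_g ||y - A g||_1 as an LP over z = [g, t]."""
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(m)])      # minimize sum(t)
            # |y - A g| <= t  splits into  A g - t <= y  and  -A g - t <= -y
            A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
            b_ub = np.concatenate([y, -y])
            bounds = [(None, None)] * n + [(0, None)] * m      # g free, t >= 0
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
            return res.x[:n]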

    The Restricted Isometry Property of Subsampled Fourier Matrices

    A matrix A ∈ ℂ^(q×N) satisfies the restricted isometry property of order k with constant ε if it preserves the ℓ₂ norm of all k-sparse vectors up to a factor of 1 ± ε. We prove that a matrix A obtained by randomly sampling q = O(k · log^2 k · log N) rows from an N × N Fourier matrix satisfies the restricted isometry property of order k with a fixed ε with high probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math., 2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).
    Comment: 16 pages
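
    The sampling scheme is easy to reproduce numerically. The sketch below (illustrative sizes) builds a subsampled, rescaled unitary DFT matrix and checks norm preservation on random k-sparse vectors; note this is only a sanity check on random inputs, not a certificate of the restricted isometry property, which quantifies over all k-sparse vectors.

        import numpy as np

        rng = np.random.default_rng(0)
        N, k, q = 512, 8, 128                       # illustrative sizes
        F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary N x N Fourier matrix
        rows = rng.choice(N, size=q, replace=False) # random row sample
        A = F[rows] * np.sqrt(N / q)                # rescale so E||Ax||^2 = ||x||^2

        ratios = []
        for _ in range(1000):
            x = np.zeros(N)
            supp = rng.choice(N, size=k, replace=False)
            x[supp] = rng.standard_normal(k)        # a random k-sparse vector
            ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
        print(min(ratios), max(ratios))             # empirically within 1 +/- eps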

    On the Doubly Sparse Compressed Sensing Problem

    A new variant of the compressed sensing problem is investigated in which the number of measurements corrupted by errors is upper bounded by some value l, but no further restrictions are placed on the errors. We prove that in this case it is enough to make 2(t + l) measurements, where t is the sparsity of the original data. Moreover, a rather simple recovery algorithm is proposed for this case. An analog of the Singleton bound from coding theory is derived, which proves the optimality of the corresponding measurement matrices.
    Comment: 6 pages, IMACC 2015 (accepted)
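
    The abstract does not spell out the recovery algorithm, so the following is only a naive, exponential-time illustration of why 2(t + l) noiseless measurements can suffice: guess the at most l corrupted measurements, drop them, and brute-force the t-sparse support on the rest. Names and the tolerance are hypothetical, and uniqueness of the returned solution requires a suitably generic A; this is emphatically not the paper's algorithm.

        from itertools import combinations
        import numpy as np

        def decode(A, y, t, l, tol=1e-8):
            """Exhaustive decoder: guess the corrupted rows, then the support.
            Exponential in l and t; only feasible for tiny problem sizes."""
            m, n = A.shape
            for bad in combinations(range(m), l):        # candidate corrupted rows
                keep = [i for i in range(m) if i not in bad]
                Ak, yk = A[keep], y[keep]
                for supp in combinations(range(n), t):   # candidate t-sparse support
                    supp = list(supp)
                    xs, *_ = np.linalg.lstsq(Ak[:, supp], yk, rcond=None)
                    if np.linalg.norm(Ak[:, supp] @ xs - yk) < tol:
                        x = np.zeros(n)
                        x[supp] = xs
                        return x
            return None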

    Quantization and Compressive Sensing

    Quantization is an essential step in digitizing signals and, therefore, an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, properly accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
    Comment: 35 pages, 20 figures, to appear in the Springer book "Compressed Sensing and Its Applications", 2015
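
    As one concrete instance of "reconstruction algorithms that account for quantization", the sketch below quantizes the measurements with a uniform scalar quantizer of step Δ and enforces quantization consistency ‖Ag − q‖_∞ ≤ Δ/2 during ℓ₁ recovery. This is an illustrative setup under assumed parameters, not a specific scheme from the chapter.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, m, k, Delta = 128, 64, 5, 0.1
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        q = Delta * np.round(A @ x / Delta)          # uniform scalar quantization

        g = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm1(g)),
                   [cp.norm_inf(A @ g - q) <= Delta / 2]).solve()
        print(np.linalg.norm(g.value - x))           # error on the order of Delta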
