Highly Robust Error Correction by Convex Programming
This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors).
We show that if one encodes the information as Ax where A ∈ ℝ^(m × n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
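The oracle benchmark mentioned in the abstract can be made concrete: if the sites of the gross errors were known, one would simply discard those entries of the codeword and solve a least-squares problem on the remaining clean (but slightly noisy) entries. A minimal numpy sketch, where the dimensions, noise level, and random Gaussian coding matrix are illustrative assumptions rather than the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 50                        # block length and codeword length (m >= n)
A = rng.standard_normal((m, n))     # coding matrix (illustrative choice)

x = rng.standard_normal(n)           # information block
noise = rng.uniform(-1e-3, 1e-3, m)  # small errors on every entry (e.g. quantization)
gross_sites = rng.choice(m, size=5, replace=False)
y = A @ x + noise
y[gross_sites] += 10.0 * rng.standard_normal(5)  # arbitrary gross errors

# Oracle decoder: drop the corrupted entries, least-squares on the rest.
keep = np.setdiff1d(np.arange(m), gross_sites)
x_hat, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)

err = np.max(np.abs(x_hat - x))
print(err)  # error at the level of the small per-entry noise
```

The point of the paper is that the convex-programming decoders achieve nearly this oracle accuracy without knowing `gross_sites`.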
Decoding by Linear Programming
This paper considers the classical error correcting problem which is
frequently discussed in coding theory. We wish to recover an input vector f ∈ ℝ^n from corrupted measurements y = Af + e. Here, A is an m × n
(coding) matrix and e is an arbitrary and unknown vector of errors. Is it
possible to recover f exactly from the data y? We prove that under suitable
conditions on the coding matrix A, the input f is the unique solution to
the ℓ1-minimization problem min_g ||y − Ag||_ℓ1 provided that the support of the vector of
errors is not too large, ||e||_ℓ0 ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a
simple convex optimization problem (which one can recast as a linear program).
In addition, numerical experiments suggest that this recovery procedure works
unreasonably well; f is recovered exactly even in situations where a
significant fraction of the output is corrupted.Comment: 22 pages, 4 figures, submitted
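The linear-program recast works by introducing slack variables t ∈ ℝ^m with −t ≤ y − Ag ≤ t and minimizing Σᵢ tᵢ. A hedged sketch using scipy's LP solver; the sizes, corruption level, and Gaussian coding matrix are illustrative assumptions, not the paper's specific construction:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 10, 50                    # input length, number of measurements
A = rng.standard_normal((m, n))  # coding matrix (illustrative choice)
f = rng.standard_normal(n)       # input vector

e = np.zeros(m)                  # sparse, arbitrary error vector
bad = rng.choice(m, size=3, replace=False)
e[bad] = 5.0 * rng.standard_normal(3)
y = A @ f + e                    # corrupted measurements

# min ||y - A g||_1  as an LP over z = [g; t]:
#   minimize sum(t)   s.t.   A g - t <= y   and   -A g - t <= -y
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
g_hat = res.x[:n]

print(np.max(np.abs(g_hat - f)))  # exact recovery up to solver tolerance
```

With this much redundancy (m − n = 40) and only 3 gross errors, the ℓ1 decoder recovers f exactly, matching the abstract's claim.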
The Restricted Isometry Property of Subsampled Fourier Matrices
A matrix A ∈ ℂ^(q × N) satisfies the restricted isometry
property of order k with constant ε if it preserves the ℓ2
norm of all k-sparse vectors up to a factor of 1 ± ε. We prove
that a matrix A obtained by randomly sampling q = O(k · log² k · log N) rows from an N × N Fourier matrix satisfies the restricted
isometry property of order k with a fixed ε with high
probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math.,
2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).Comment: 16 pages
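The restricted isometry property can be spot-checked empirically: sample rows of the unitary DFT matrix, rescale so the expected squared norm is preserved, and measure ||Ax|| / ||x|| on random sparse vectors. The dimensions below are illustrative assumptions, and an empirical check on a few vectors is of course not a proof of RIP (which quantifies over all k-sparse vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
N, q, k = 256, 128, 5            # ambient dimension, sampled rows, sparsity

# Unitary N x N DFT matrix; sample q rows and rescale so that
# E ||A x||^2 = ||x||^2 for any fixed x.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
rows = rng.choice(N, size=q, replace=False)
A = F[rows] * np.sqrt(N / q)

# Measure the norm ratio on random k-sparse vectors.
ratios = []
for _ in range(50):
    x = np.zeros(N)
    supp = rng.choice(N, size=k, replace=False)
    x[supp] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

print(min(ratios), max(ratios))  # concentrated around 1
```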
On the Doubly Sparse Compressed Sensing Problem
A new variant of the compressed sensing problem is investigated in which the
number of measurements corrupted by errors is upper bounded by some value l, but
no further restrictions are placed on the errors. We prove that in this case it is
enough to make 2(t + l) measurements, where t is the sparsity of the original data.
Moreover, a rather simple recovery algorithm is proposed for this case. An
analog of the Singleton bound from coding theory is derived, which proves the
optimality of the corresponding measurement matrices.Comment: 6 pages, IMACC2015 (accepted)
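The paper's own recovery algorithm is not reproduced here; the brute-force sketch below only illustrates why 2(t + l) generic measurements determine the data uniquely: enumerate candidate data supports and error positions, solve least squares on the presumed-clean rows, and keep a consistent (zero-residual) candidate. Sizes are tiny illustrative assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, t, l = 6, 1, 1
m = 2 * (t + l)                   # 4 measurements suffice per the paper's bound
A = rng.standard_normal((m, n))   # generic measurement matrix

x = np.zeros(n)
x[2] = 1.7                        # t-sparse original data
y = A @ x
y[0] += 5.0                       # up to l measurements corrupted arbitrarily

# Brute force: for every (error-set, support) pair, solve least squares on
# the presumed-uncorrupted rows and accept a zero-residual candidate.
x_hat = None
for err in combinations(range(m), l):
    keep = [i for i in range(m) if i not in err]
    for supp in combinations(range(n), t):
        sol, *_ = np.linalg.lstsq(A[np.ix_(keep, supp)], y[keep], rcond=None)
        resid = A[np.ix_(keep, supp)] @ sol - y[keep]
        if np.linalg.norm(resid) < 1e-8:
            x_hat = np.zeros(n)
            x_hat[list(supp)] = sol
            break
    if x_hat is not None:
        break

print(x_hat)
```

For a generic matrix, any 2t columns restricted to any m − 2l rows are full rank when m ≥ 2(t + l), so the consistent candidate is unique; this exhaustive search is exponential and serves only to illustrate the counting argument.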
Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
(ΣΔ) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system.Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing
and Its Applications", 201
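The most basic object the chapter analyzes, the uniform scalar quantizer, can be sketched in a few lines: rounding each measurement to the nearest lattice point with step Δ guarantees a per-entry error of at most Δ/2. The step size and input signal below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 0.1                      # quantization step (illustrative)
y = rng.standard_normal(1000)    # measurements to be quantized

# Uniform (mid-tread) scalar quantizer: round to the nearest multiple of delta.
y_q = delta * np.round(y / delta)

err = np.abs(y_q - y)
print(err.max())                 # never exceeds delta / 2
```

Finer bounds, non-uniform and 1-bit quantizers, and ΣΔ schemes trade off this worst-case per-entry guarantee against rate and reconstruction accuracy, which is the subject of the chapter.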