Frame expansions with erasures: an approach through the non-commutative operator theory
In modern communication systems such as the Internet, random losses of
information can be mitigated by oversampling the source. This is equivalent to
expanding the source using overcomplete systems of vectors (frames), as opposed
to the traditional basis expansions. Dependencies among the coefficients in
frame expansions often allow for better performance compared to bases under
random losses of coefficients. We show that for any n-dimensional frame, any
source can be linearly reconstructed from only (n log n) randomly chosen frame
coefficients, with a small error and with high probability. Thus every frame
expansion withstands random losses better (for worst case sources) than the
orthogonal basis expansion, for which the (n log n) bound is attained. The
proof reduces to M. Rudelson's selection theorem on random vectors in the
isotropic position, which is based on the non-commutative Khinchine
inequality. Comment: 12 pages
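The reconstruction described above can be sketched numerically. This is a minimal illustration, not the paper's construction: the frame size, normalization, and the constant in front of n log n are all assumptions, and the decoder is plain least squares on the surviving coefficients.

```python
# Sketch: expand a source x in R^n with an overcomplete frame of M vectors,
# keep ~n log n randomly chosen coefficients, and reconstruct linearly.
import numpy as np

rng = np.random.default_rng(0)
n, M = 16, 256                                  # dimension and frame size (illustrative)
F = rng.standard_normal((M, n)) / np.sqrt(n)    # rows are the frame vectors

x = rng.standard_normal(n)                      # the source
y = F @ x                                       # frame expansion coefficients

k = int(4 * n * np.log(n))                      # keep on the order of n log n coefficients
keep = rng.choice(M, size=min(k, M), replace=False)

# Linear reconstruction from the surviving coefficients (least squares).
x_hat, *_ = np.linalg.lstsq(F[keep], y[keep], rcond=None)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

In this noiseless toy setting the surviving coefficients determine x exactly, so the least-squares error is at machine precision; the theorem's content is that the error stays small with high probability even in less favorable regimes.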
Geometric approach to error correcting codes and reconstruction of signals
We develop an approach through geometric functional analysis to error
correcting codes and to reconstruction of signals from few linear measurements.
An error correcting code encodes an n-letter word x into an m-letter word y in
such a way that x can be decoded correctly when any r letters of y are
corrupted. We prove that most linear orthogonal transformations Q from R^n into
R^m form efficient and robust error correcting codes over the reals. The
decoder (which corrects the corrupted components of y) is the metric projection
onto the range of Q in the L_1 norm. An equivalent problem arises in signal
processing: how to reconstruct a signal that belongs to a small class from few
linear measurements? We prove that for most sets of Gaussian measurements, all
signals of small support can be exactly reconstructed by the L_1 norm
minimization. This is a substantial improvement of recent results of Donoho and
of Candes and Tao. An equivalent problem in combinatorial geometry is the
existence of a polytope with fixed number of facets and maximal number of
lower-dimensional facets. We prove that most sections of the cube form such
polytopes. Comment: 17 pages, 3 figures
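The signal-processing side of this result can be sketched directly: recover a sparse vector from a few Gaussian measurements by L_1 minimization, written here as a linear program. The dimensions, sparsity level, and LP formulation are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch: recover sparse x in R^n from m < n Gaussian measurements y = Q x
# by basis pursuit:  min ||z||_1  subject to  Q z = y,
# posed as an LP over (z, t):  min sum(t)  s.t.  -t <= z <= t,  Q z = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s = 60, 30, 4                     # ambient dim, measurements, sparsity (illustrative)
Q = rng.standard_normal((m, n))

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Q @ x

c = np.concatenate([np.zeros(n), np.ones(n)])              # minimize sum(t)
A_ub = np.block([[np.eye(n), -np.eye(n)],                  #  z - t <= 0
                 [-np.eye(n), -np.eye(n)]])                # -z - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Q, np.zeros((m, n))])                    #  Q z = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
```

For these dimensions exact recovery of x is the typical outcome for Gaussian Q; the abstract's theorem makes the "most sets of measurements" statement precise.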
Message-Passing Estimation from Quantized Samples
Estimation of a vector from quantized linear measurements is a common problem
for which simple linear techniques are suboptimal -- sometimes greatly so. This
paper develops generalized approximate message passing (GAMP) algorithms for
minimum mean-squared error estimation of a random vector from quantized linear
measurements, notably allowing the linear expansion to be overcomplete or
undercomplete and the scalar quantization to be regular or non-regular. GAMP is
a recently-developed class of algorithms that uses Gaussian approximations in
belief propagation and allows arbitrary separable input and output channels.
Scalar quantization of measurements is incorporated into the output channel
formalism, leading to the first tractable and effective method for
high-dimensional estimation problems involving non-regular scalar quantization.
Non-regular quantization is empirically demonstrated to greatly improve
rate-distortion performance in some problems with oversampling or with
undersampling combined with a sparsity-inducing prior. Under the assumption of
a Gaussian measurement matrix with i.i.d. entries, the asymptotic error
performance of GAMP can be accurately predicted and tracked through the state
evolution formalism. We additionally use state evolution to design MSE-optimal
scalar quantizers for GAMP signal reconstruction and empirically demonstrate
the superior error performance of the resulting quantizers. Comment: 12 pages, 8 figures
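The distinction between regular and non-regular scalar quantization that the abstract relies on can be shown in a few lines. This is an illustrative sketch, not the paper's GAMP code; the step size and label count are arbitrary choices.

```python
# A regular uniform quantizer maps each input to a single contiguous cell,
# while a non-regular (here, modulo-folded) quantizer reuses the same label
# for disjoint intervals, so a decoder must use a prior or other measurements
# to disambiguate which cell was meant.
import numpy as np

def quantize_regular(y, step=0.5):
    """Uniform quantizer: one contiguous cell per integer label."""
    return np.floor(y / step).astype(int)

def quantize_nonregular(y, step=0.5, num_labels=4):
    """Modulo quantizer: labels wrap, so each label covers many disjoint cells."""
    return np.floor(y / step).astype(int) % num_labels

y = np.array([-1.2, -0.3, 0.1, 0.9, 2.6])
print(quantize_regular(y))       # distinct cells: [-3 -1  0  1  5]
print(quantize_nonregular(y))    # labels reused across distant cells: [1 3 0 1 1]
```

The non-regular map is many-to-one across the real line, which is exactly why linear decoders fail on it and why a belief-propagation decoder with an informative prior can still succeed.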
Frame Permutation Quantization
Frame permutation quantization (FPQ) is a new vector quantization technique
using finite frames. In FPQ, a vector is encoded using a permutation source
code to quantize its frame expansion. This means that the encoding is a partial
ordering of the frame expansion coefficients. Compared to ordinary permutation
source coding, FPQ produces a greater number of possible quantization rates and
a higher maximum rate. Various representations for the partitions induced by
FPQ are presented, and reconstruction algorithms based on linear programming,
quadratic programming, and recursive orthogonal projection are derived.
Implementations of the linear and quadratic programming algorithms for uniform
and Gaussian sources show performance improvements over entropy-constrained
scalar quantization for certain combinations of vector dimension and coding
rate. Monte Carlo evaluation of the recursive algorithm shows that mean-squared
error (MSE) decays as 1/M^4 for an M-element frame, which is consistent with
previous results on optimal decay of MSE. Reconstruction using the canonical
dual frame is also studied, and several results relate properties of the
analysis frame to whether linear reconstruction techniques provide consistent
reconstructions. Comment: 29 pages, 5 figures; details added to the proof of Theorem 4.3 and a few
minor corrections
Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing
This paper focuses on the estimation of low-complexity signals when they are
observed through uniformly quantized compressive observations. Among such
signals, we consider 1-D sparse vectors, low-rank matrices, or compressible
signals that are well approximated by one of these two models. In this context,
we prove the estimation efficiency of a variant of Basis Pursuit Denoise,
called Consistent Basis Pursuit (CoBP), enforcing consistency between the
observations and the re-observed estimate, while promoting its low-complexity
nature. We show that the reconstruction error of CoBP decays as the number of
measurements increases, all other parameters being fixed. Our proof is connected to recent bounds
on the proximity of vectors or matrices when (i) those belong to a set of small
intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they
share the same quantized (dithered) random projections. By solving CoBP with a
proximal algorithm, we provide extensive numerical observations that confirm
the theoretical bound as the number of measurements increases, displaying even faster error
decay than predicted. The same phenomenon is observed in the special, yet
important case of 1-bit CS. Comment: Keywords: Quantized compressed sensing, quantization, consistency,
error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this
version: title change, typo corrections, clarification of the context, adding
a comparison with BPD
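The consistency notion at the heart of CoBP can be illustrated concretely. This is an assumed setup with hypothetical helper names, not the paper's solver: measurements are uniformly quantized with a dither, and an estimate is "consistent" when re-observing and re-quantizing it reproduces the recorded codes exactly.

```python
# Sketch of quantization consistency for dithered uniform quantization of
# compressive measurements q = Q_dither(A x).
import numpy as np

rng = np.random.default_rng(3)
m, n, delta = 40, 20, 0.25               # measurements, dimension, step (illustrative)
A = rng.standard_normal((m, n)) / np.sqrt(m)
dither = rng.uniform(0, delta, m)        # per-measurement dither

def quantize(z):
    """Dithered uniform quantizer returning integer codes."""
    return np.floor((z + dither) / delta).astype(int)

x = rng.standard_normal(n)
q = quantize(A @ x)                      # stored quantized observations

def is_consistent(x_est):
    """True iff re-quantizing the re-observed estimate reproduces the codes."""
    return np.array_equal(quantize(A @ x_est), q)

assert is_consistent(x)                  # the true signal is always consistent
```

CoBP then minimizes a low-complexity-promoting norm (L_1 for sparse vectors, nuclear norm for low-rank matrices) subject to this consistency constraint, which is a set of box constraints on A x in each quantization cell.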
Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
(ΣΔ) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system. Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing
and Its Applications", 201
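The first-order Sigma-Delta scheme the chapter surveys can be sketched in a few lines. This is a standard textbook recursion, shown here under the assumption of a 1-bit quantizer and an input bounded by 1; it is not code from the chapter.

```python
# First-order Sigma-Delta quantization: each 1-bit output feeds back through
# a running state u, so the quantization error is noise-shaped:
#   q_i = sign(u_{i-1} + y_i),   u_i = u_{i-1} + y_i - q_i.
import numpy as np

def sigma_delta_1bit(y):
    """1-bit first-order Sigma-Delta quantizer; state stays bounded for |y| <= 1."""
    u, q = 0.0, np.empty(len(y))
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0
        u = u + yi - q[i]
    return q

y = 0.4 * np.ones(64)                    # heavily oversampled constant input
q = sigma_delta_1bit(y)
print(q.mean())                          # the bit average tracks the input value
```

Because the state satisfies |u_i| <= 1, the average of N output bits differs from the input by at most 1/N, which is the error decay with oversampling that makes ΣΔ attractive in the compressed sensing context.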