Frame expansions with erasures: an approach through the non-commutative operator theory
In modern communication systems such as the Internet, random losses of
information can be mitigated by oversampling the source. This is equivalent to
expanding the source using overcomplete systems of vectors (frames), as opposed
to the traditional basis expansions. Dependencies among the coefficients in
frame expansions often allow for better performance compared to bases under
random losses of coefficients. We show that for any n-dimensional frame, any
source can be linearly reconstructed from only O(n log n) randomly chosen frame
coefficients, with a small error and with high probability. Thus every frame
expansion withstands random losses better (for worst-case sources) than the
orthogonal basis expansion, for which the O(n log n) bound is attained. The
proof reduces to M. Rudelson's selection theorem on random vectors in
isotropic position, which is based on the non-commutative Khinchine
inequality.Comment: 12 pages
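A minimal numerical sketch of the setup above, with illustrative dimensions and a random Gaussian frame (neither taken from the paper): keep roughly n log n randomly chosen frame coefficients and reconstruct linearly by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 64                         # signal dimension, frame size

# A random Gaussian frame for R^n: rows of F are the m frame vectors.
F = rng.standard_normal((m, n))
x = rng.standard_normal(n)            # source vector
c = F @ x                             # overcomplete frame coefficients

# Erase all but roughly n*log(n) randomly chosen coefficients.
k = int(np.ceil(n * np.log(n)))       # 45 of the 64 coefficients survive
keep = rng.choice(m, size=k, replace=False)

# Linear reconstruction from the surviving coefficients by least squares.
x_hat, *_ = np.linalg.lstsq(F[keep], c[keep], rcond=None)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

In this noiseless toy example the surviving submatrix is full rank, so least squares recovers the source up to numerical precision; the theorem's content is that this works with high probability for any frame.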
Coding overcomplete representations of audio using the MCLT
We propose a system for audio coding using the modulated complex
lapped transform (MCLT). In general, it is difficult to encode signals using
overcomplete representations without incurring a penalty in rate-distortion
performance. We show that the penalty can be significantly reduced for
MCLT-based representations, without the need for iterative methods of
sparsity reduction. We achieve that via a magnitude-phase polar quantization
and the use of magnitude and phase prediction. Compared to systems based
on quantization of orthogonal representations such as the modulated lapped
transform (MLT), the new system allows for reduced warbling artifacts and
more precise computation of frequency-domain auditory masking functions.
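The magnitude-phase polar quantization step can be sketched as follows; the step sizes, the stand-in complex coefficients, and the omission of magnitude/phase prediction are all simplifying assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a block of complex MCLT coefficients.
coeffs = rng.standard_normal(8) + 1j * rng.standard_normal(8)

def polar_quantize(c, mag_step=0.25, n_phase=16):
    """Quantize magnitude uniformly and phase to n_phase uniform levels."""
    mag = np.round(np.abs(c) / mag_step) * mag_step
    phase = np.round(np.angle(c) * n_phase / (2 * np.pi)) * (2 * np.pi / n_phase)
    return mag * np.exp(1j * phase)

q = polar_quantize(coeffs)
max_err = np.max(np.abs(coeffs - q))
```

Quantizing in polar form keeps the phase error angular, which is what makes phase prediction across blocks effective in the system described above.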
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.Comment: Revised, restructured, significantly shortened, and minor typos have
been fixed. Accepted for publication in the IEEE Transactions on Information
Theory
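The linear-additive noise model from dithered quantization mentioned above can be checked numerically for a scalar quantizer (a one-dimensional stand-in for the dithered lattice quantizer, with no noise shaping):

```python
import numpy as np

rng = np.random.default_rng(2)
step = 0.5                                # quantizer step size Delta

def dithered_quantize(x, dither, step):
    """Subtractively dithered uniform quantizer: Q(x + z) - z."""
    return np.round((x + dither) / step) * step - dither

x = rng.standard_normal(100_000)                 # arbitrary source
z = rng.uniform(-step / 2, step / 2, x.shape)    # dither, uniform on a cell
noise = dithered_quantize(x, z, step) - x

# With dither satisfying Schuchman's condition, the error is uniform on
# [-Delta/2, Delta/2] and independent of the source: the additive model.
```

The empirical noise variance matches Delta^2/12, which is the additive-noise abstraction the Delta-Sigma construction above builds on.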
Geometric approach to error correcting codes and reconstruction of signals
We develop an approach through geometric functional analysis to error
correcting codes and to reconstruction of signals from few linear measurements.
An error correcting code encodes an n-letter word x into an m-letter word y in
such a way that x can be decoded correctly when any r letters of y are
corrupted. We prove that most linear orthogonal transformations Q from R^n into
R^m form efficient and robust error correcting codes over the reals. The
decoder (which corrects the corrupted components of y) is the metric projection
onto the range of Q in the L_1 norm. An equivalent problem arises in signal
processing: how to reconstruct a signal that belongs to a small class from few
linear measurements? We prove that for most sets of Gaussian measurements, all
signals of small support can be exactly reconstructed by the L_1 norm
minimization. This is a substantial improvement of recent results of Donoho and
of Candes and Tao. An equivalent problem in combinatorial geometry is the
existence of a polytope with a fixed number of facets and maximal number of
lower-dimensional faces. We prove that most sections of the cube form such
polytopes.Comment: 17 pages, 3 figures
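The L_1-minimization reconstruction described above can be sketched with a random Gaussian measurement matrix, recast as a linear program via the standard split x = u - v with u, v >= 0 (the sizes are illustrative, not the paper's experiment):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, s = 40, 20, 3               # ambient dimension, measurements, sparsity

A = rng.standard_normal((m, n))   # Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x                         # the few linear measurements

# min ||x||_1 subject to Ax = b, as an LP via the split x = u - v, u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
```

The solution is always feasible and has L_1 norm no larger than the true signal's; exact recovery of the sparse vector is what the paper proves holds for most Gaussian measurement sets.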
Message-Passing Estimation from Quantized Samples
Estimation of a vector from quantized linear measurements is a common problem
for which simple linear techniques are suboptimal -- sometimes greatly so. This
paper develops generalized approximate message passing (GAMP) algorithms for
minimum mean-squared error estimation of a random vector from quantized linear
measurements, notably allowing the linear expansion to be overcomplete or
undercomplete and the scalar quantization to be regular or non-regular. GAMP is
a recently-developed class of algorithms that uses Gaussian approximations in
belief propagation and allows arbitrary separable input and output channels.
Scalar quantization of measurements is incorporated into the output channel
formalism, leading to the first tractable and effective method for
high-dimensional estimation problems involving non-regular scalar quantization.
Non-regular quantization is empirically demonstrated to greatly improve
rate-distortion performance in some problems with oversampling or with
undersampling combined with a sparsity-inducing prior. Under the assumption of
a Gaussian measurement matrix with i.i.d. entries, the asymptotic error
performance of GAMP can be accurately predicted and tracked through the state
evolution formalism. We additionally use state evolution to design MSE-optimal
scalar quantizers for GAMP signal reconstruction and empirically demonstrate
the superior error performance of the resulting quantizers.Comment: 12 pages, 8 figures
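GAMP itself is involved, but the core idea of putting the quantizer in the output channel (estimating from the quantization cell rather than treating the quantized value as a linear measurement) can be illustrated in the scalar Gaussian case, where the conditional mean has a closed form. This toy comparison is ours, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
step = 1.0
x = rng.standard_normal(200_000)        # Gaussian scalar source
lo = np.floor(x / step) * step          # lower edge of each quantizer cell
hi = lo + step                          # upper edge

# MMSE reconstruction given the cell, from truncated-normal formulas:
# E[x | lo <= x < hi] = (phi(lo) - phi(hi)) / (Phi(hi) - Phi(lo)).
x_mmse = (norm.pdf(lo) - norm.pdf(hi)) / (norm.cdf(hi) - norm.cdf(lo))
x_mid = lo + step / 2                   # naive midpoint reconstruction

mse_mmse = np.mean((x - x_mmse) ** 2)
mse_mid = np.mean((x - x_mid) ** 2)
```

The conditional-mean decoder strictly beats the midpoint decoder; GAMP extends this cell-aware estimation to high-dimensional linear models and non-regular quantizers.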
Linearized Quantum Gravity Using the Projection Operator Formalism
The theory of canonical linearized gravity is quantized using the Projection
Operator formalism, in which no gauge or coordinate choices are made. The ADM
Hamiltonian is used and the canonical variables and constraints are expanded
around a flat background. As a result of the coordinate independence and linear
truncation of the perturbation series, the constraint algebra surprisingly
becomes partially second-class in both the classical and quantum pictures after
all secondary constraints are considered. While new features emerge in the
constraint structure, the end result is the same as previously reported: the
(separable) physical Hilbert space still only depends on the
transverse-traceless degrees of freedom.Comment: 30 pages, no figures, enlarged and corrected version
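For reference, the linearized setting expands the metric around a flat background in the standard textbook conventions (these formulas are generic, not quoted from the paper):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1 ,
```

with the physical, transverse-traceless part of the perturbation satisfying

```latex
\partial^{\mu} h^{\mathrm{TT}}_{\mu\nu} = 0, \qquad \eta^{\mu\nu} h^{\mathrm{TT}}_{\mu\nu} = 0 .
```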
Precision Enhancement of 3D Surfaces from Multiple Compressed Depth Maps
In texture-plus-depth representation of a 3D scene, depth maps from different
camera viewpoints are typically lossily compressed via the classical transform
coding / coefficient quantization paradigm. In this paper we propose to reduce
distortion of the decoded depth maps due to quantization. The key observation
is that depth maps from different viewpoints constitute multiple descriptions
(MD) of the same 3D scene. Considering the MD jointly, we perform a POCS-like
iterative procedure to project a reconstructed signal from one depth map to the
other and back, so that the converged depth maps have higher precision than the
original quantized versions.Comment: This work was accepted as an ongoing-work paper at IEEE MMSP'201
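A scalar stand-in for the multiple-description view above: two offset quantizers play the role of the two depth maps, and the POCS-style projections converge into the intersection of the two quantization cells, whose midpoint we take directly here (the offsets and step size are illustrative assumptions, not the paper's codec).

```python
import numpy as np

rng = np.random.default_rng(5)
step = 1.0
z = rng.uniform(0.0, 10.0, 1000)     # stand-in for per-pixel depth values

# Two "descriptions": the same values through quantizers offset by step/2.
q1 = np.floor(z / step) * step                           # cell [q1, q1 + step)
q2 = np.floor((z - step / 2) / step) * step + step / 2   # cell [q2, q2 + step)

# Consistent joint reconstruction: any point in the intersection of the two
# cells agrees with both descriptions; take the intersection midpoint.
lo = np.maximum(q1, q2)
hi = np.minimum(q1 + step, q2 + step)
x = (lo + hi) / 2

err_joint = np.mean(np.abs(x - z))
err_single = np.mean(np.abs(q1 + step / 2 - z))          # one description alone
```

Because the intersection cell is half the width of either description's cell, the joint reconstruction is strictly more precise, which is the effect the iterative projection procedure exploits across viewpoints.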
Frame Permutation Quantization
Frame permutation quantization (FPQ) is a new vector quantization technique
using finite frames. In FPQ, a vector is encoded using a permutation source
code to quantize its frame expansion. This means that the encoding is a partial
ordering of the frame expansion coefficients. Compared to ordinary permutation
source coding, FPQ produces a greater number of possible quantization rates and
a higher maximum rate. Various representations for the partitions induced by
FPQ are presented, and reconstruction algorithms based on linear programming,
quadratic programming, and recursive orthogonal projection are derived.
Implementations of the linear and quadratic programming algorithms for uniform
and Gaussian sources show performance improvements over entropy-constrained
scalar quantization for certain combinations of vector dimension and coding
rate. Monte Carlo evaluation of the recursive algorithm shows that mean-squared
error (MSE) decays as 1/M^4 for an M-element frame, which is consistent with
previous results on optimal decay of MSE. Reconstruction using the canonical
dual frame is also studied, and several results relate properties of the
analysis frame to whether linear reconstruction techniques provide consistent
reconstructions.Comment: 29 pages, 5 figures; details added to proof of Theorem 4.3 and a few
minor corrections
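The encoding step, a permutation source code applied to the frame expansion, can be sketched as follows; the frame, the codeword levels, and plain canonical-dual (pseudoinverse) decoding are simplifying assumptions rather than the paper's optimized reconstruction algorithms.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 4, 8                           # signal dimension, frame size
F = rng.standard_normal((m, n))       # analysis frame; rows are frame vectors

x = rng.standard_normal(n)
y = F @ x                             # frame expansion coefficients

# Permutation source code: the encoder transmits only the ordering of the
# coefficients; the decoder places a fixed sorted codeword in that order.
perm = np.argsort(y)
codeword = np.linspace(-1.5, 1.5, m)  # fixed (unoptimized) decoder levels
y_hat = np.empty(m)
y_hat[perm] = codeword                # same ordering as y

# Linear reconstruction with the canonical dual frame (pseudoinverse of F).
x_hat = np.linalg.pinv(F) @ y_hat
```

Only the permutation is transmitted, so the rate is log2(m!) bits per vector here; the consistent-reconstruction algorithms in the paper improve on this plain dual-frame decoding.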