Erasure Multiple Descriptions
We consider a binary erasure version of the n-channel multiple descriptions
problem with symmetric descriptions, i.e., the rates of the n descriptions are
the same and the distortion constraint depends only on the number of messages
received. We consider the case where there is no excess rate for every k out of
n descriptions. Our goal is to characterize the achievable distortions D_1,
D_2,...,D_n. We measure the fidelity of reconstruction using two distortion
criteria: an average-case distortion criterion, under which distortion is
measured by taking the average of the per-letter distortion over all source
sequences, and a worst-case distortion criterion, under which distortion is
measured by taking the maximum of the per-letter distortion over all source
sequences. We present achievability schemes, based on random binning for
average-case distortion and systematic MDS (maximum distance separable) codes
for worst-case distortion, and prove optimality results for the corresponding
achievable distortion regions. We then use the binary erasure multiple
descriptions setup to propose a layered coding framework for multiple
descriptions, which we then apply to vector Gaussian multiple descriptions and
prove its optimality for symmetric scalar Gaussian multiple descriptions with
two levels of receivers and no excess rate for the central receiver. We also
prove a new outer bound for the general multi-terminal source coding problem
and use it to prove an optimality result for the robust binary erasure CEO
problem. For the latter, we provide a tight lower bound on the distortion for
\ell messages for any coding scheme that achieves the minimum achievable
distortion for k messages where k is less than or equal to \ell.
Comment: 48 pages, 2 figures, submitted to IEEE Trans. Inf. Theory
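As a heavily simplified illustration of the MDS-based achievability idea for worst-case distortion, the sketch below builds a toy systematic (3, 2) MDS erasure code over GF(2): any two of the three descriptions recover the source block exactly. The function names and the XOR construction are illustrative assumptions for this small example, not the scheme from the paper.

```python
# Toy (n, k) = (3, 2) systematic MDS erasure code over GF(2).
# Descriptions 0 and 1 carry the systematic source bits; description 2 carries
# their XOR, so any 2 of the 3 descriptions reconstruct the source exactly.

def encode(s1: int, s2: int) -> list[int]:
    """Split a 2-bit source block into 3 single-bit descriptions."""
    return [s1, s2, s1 ^ s2]

def decode(received: dict[int, int]) -> tuple[int, int] | None:
    """Reconstruct (s1, s2) from any subset of descriptions indexed 0..2.

    Returns None when fewer than k = 2 descriptions arrive (only a lossy
    reconstruction would be possible in that regime)."""
    if len(received) < 2:
        return None
    if 0 in received and 1 in received:
        return received[0], received[1]
    if 0 in received and 2 in received:
        return received[0], received[0] ^ received[2]
    return received[1] ^ received[2], received[1]

# Every 2-of-3 reception pattern recovers the source block exactly.
for lost in range(3):
    rx = {i: b for i, b in enumerate(encode(1, 0)) if i != lost}
    assert decode(rx) == (1, 0)
```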
Multiuser Successive Refinement and Multiple Description Coding
We consider the multiuser successive refinement (MSR) problem, where the
users are connected to a central server via links with different noiseless
capacities, and each user wishes to reconstruct in a successive-refinement
fashion. An achievable region is given for the two-user two-layer case and it
provides the complete rate-distortion region for the Gaussian source under the
MSE distortion measure. The key observation is that this problem includes the
multiple description (MD) problem (with two descriptions) as a subsystem, and
the techniques useful in the MD problem can be extended to this case. We show
that the coding scheme based on the universality of random binning is
sub-optimal, because multiple Gaussian side informations only at the decoders
do incur performance loss, in contrast to the case of single side information
at the decoder. We further show that unlike the single user case, when there
are multiple users, the loss of performance by a multistage coding approach can
be unbounded for the Gaussian source. The result suggests that in such a
setting, the benefit of using successive refinement is not likely to justify
the accompanying performance loss. The MSR problem is also related to the
source coding problem where each decoder has its individual side information,
while the encoder has the complete set of the side informations. The MSR
problem further includes several variations of the MD problem, for which the
specialization of the general result is investigated and the implication is
discussed.
Comment: 10 pages, 5 figures. To appear in IEEE Transactions on Information Theory. References updated and typos corrected
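For context on why the multiuser loss is striking, it helps to recall the single-user Gaussian baseline, a standard fact rather than a result of this paper: the Gaussian source under MSE is successively refinable, so layered coding costs no rate.

```latex
% Single-user Gaussian baseline under MSE (standard fact, shown for contrast):
% the Gaussian source is successively refinable, so layering costs no rate.
\[
  R(D) = \tfrac{1}{2}\log\frac{\sigma^2}{D}, \qquad
  \underbrace{\tfrac{1}{2}\log\frac{\sigma^2}{D_1}}_{\text{coarse layer}}
  + \underbrace{\tfrac{1}{2}\log\frac{D_1}{D_2}}_{\text{refinement}}
  = \tfrac{1}{2}\log\frac{\sigma^2}{D_2} = R(D_2).
\]
% The MSR result above shows that, with multiple users, the analogous
% multistage approach can instead incur an unbounded penalty.
```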
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers in the proposed general framework is also constructed and illustrated geometrically; its performance is analyzed in the high-resolution regime and exhibits a noticeable improvement over existing MD scalar quantization schemes.
Comment: 48 pages; submitted to IEEE Transactions on Information Theory
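Since the Gaussian achievability argument here is organized around Gram-Schmidt orthogonalization, a minimal NumPy sketch of that linear-algebra primitive is shown below; it illustrates only the orthogonalization step itself, not the successive quantization scheme built on top of it.

```python
import numpy as np

def gram_schmidt(vectors: np.ndarray) -> np.ndarray:
    """Classical Gram-Schmidt: orthonormalize the rows of `vectors`.

    Each row is projected off the span of the previously produced rows and
    then normalized, mirroring (as an analogy only) how the MD scheme peels
    off one 'innovation' component per description."""
    basis: list[np.ndarray] = []
    for v in vectors.astype(float):
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:           # skip vectors already in the current span
            basis.append(w / norm)
    return np.array(basis)

# The rows of Q are orthonormal, so Q @ Q.T is (numerically) the identity.
Q = gram_schmidt(np.array([[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]))
assert np.allclose(Q @ Q.T, np.eye(3))
```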
Source-Channel Diversity for Parallel Channels
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion
exponent which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
Finally, we consider a third decoding architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, and maintains the advantages of multiple description systems for on-off channels. Thus, the multiple description system with joint decoding achieves the best performance, among the three architectures considered, on both continuous-state and on-off channels.
Comment: 48 pages, 14 figures
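The distortion exponent mentioned above is, in standard usage, the high-SNR decay rate of the expected end-to-end distortion; one common way to write it is given below as background (the notation may differ slightly from the paper's).

```latex
% Distortion exponent: high-SNR slope of the expected distortion on a
% log-log scale, so E[D] decays roughly as SNR^{-\Delta}.
\[
  \Delta \;=\; -\lim_{\mathrm{SNR}\to\infty}
  \frac{\log \mathbb{E}\!\left[D(\mathrm{SNR})\right]}{\log \mathrm{SNR}},
  \qquad\text{i.e.,}\quad
  \mathbb{E}\!\left[D(\mathrm{SNR})\right] \doteq \mathrm{SNR}^{-\Delta}.
\]
```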
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.
Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory
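The scheme above rests on two ingredients that are easy to prototype in isolation: subtractively dithered uniform quantization (which yields the linear additive-noise model) and an error-feedback noise-shaping loop. The sketch below combines them in a first-order scalar form purely as a toy; the paper's two-description construction with oversampling, lattice quantizers, and optimal higher-order filters is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_noise_shaping_quantizer(x: np.ndarray, step: float = 0.25,
                                     c: float = 1.0) -> np.ndarray:
    """First-order error-feedback quantizer with subtractive dither (toy)."""
    y = np.empty_like(x)
    e_prev = 0.0                                    # previous quantization error
    for n, xn in enumerate(x):
        v = xn - c * e_prev                         # noise-shaping feedback
        d = rng.uniform(-step / 2, step / 2)        # subtractive dither
        y[n] = step * np.round((v + d) / step) - d  # dithered uniform quantizer
        e_prev = y[n] - v                           # error fed back next step
    return y

x = rng.standard_normal(10_000)
y = dithered_noise_shaping_quantizer(x)
# Overall error y - x equals e[n] - e[n-1]: white quantization noise passed
# through the first-order (1 - z^{-1}) shaping filter, i.e. high-pass shaped.
print(np.var(y - x))
```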
On the rate loss of multiple description source codes
The rate loss of a multiresolution source code (MRSC) describes the difference between the rate needed to achieve distortion D_i in resolution i and the rate-distortion function R(D_i). This paper generalizes the rate loss definition to multiple description source codes (MDSCs) and bounds the MDSC rate loss for arbitrary memoryless sources. For a two-description MDSC (2DSC), the rate loss of description i with distortion D_i is defined as L_i = R_i - R(D_i), i = 1, 2, where R_i is the rate of the ith description; the joint rate loss associated with decoding the two descriptions together to achieve central distortion D_0 is measured either as L_0 = R_1 + R_2 - R(D_0) or as L_{12} = L_1 + L_2. We show that for any memoryless source with variance \sigma^2, there exists a 2DSC for that source with L_1 \le 1/2 or L_2 \le 1/2 and a) L_0 \le 1 if D_0 \le D_1 + D_2 - \sigma^2, b) L_{12} \le 1 if 1/D_0 \le 1/D_1 + 1/D_2 - 1/\sigma^2, c) L_0 \le L_{G0} + 1.5 and L_{12} \le L_{G12} + 1 otherwise, where L_{G0} and L_{G12} are the joint rate losses of a Gaussian source with variance \sigma^2.
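To make the case distinctions easier to read, the snippet below simply transcribes the three conditions from the abstract into a small checker. The helper name and the handling of L_G0 and L_G12 (left as caller-supplied inputs) are my own additions, since the Gaussian joint rate losses are not given in closed form here.

```python
def mdsc_rate_loss_bound(sigma2: float, d0: float, d1: float, d2: float,
                         lg0: float | None = None,
                         lg12: float | None = None) -> str:
    """Report which of cases a)-c) applies and the rate-loss bound it gives.

    Conditions are transcribed directly from the abstract; lg0 and lg12 stand
    for the Gaussian joint rate losses L_G0 and L_G12 needed in case c)."""
    if d0 <= d1 + d2 - sigma2:
        return "case a): L_0 <= 1"
    if 1.0 / d0 <= 1.0 / d1 + 1.0 / d2 - 1.0 / sigma2:
        return "case b): L_12 <= 1"
    if lg0 is None or lg12 is None:
        return "case c): L_0 <= L_G0 + 1.5 and L_12 <= L_G12 + 1 (supply L_G0, L_G12)"
    return f"case c): L_0 <= {lg0 + 1.5:.3f} and L_12 <= {lg12 + 1:.3f}"

# Example: a unit-variance source with fairly large side distortions falls in a).
print(mdsc_rate_loss_bound(sigma2=1.0, d0=0.3, d1=0.7, d2=0.7))
```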