On the rate loss of multiple description source codes
The rate loss of a multiresolution source code (MRSC) describes the difference between the rate needed to achieve distortion D_i in resolution i and the rate-distortion function R(D_i). This paper generalizes the rate loss definition to multiple description source codes (MDSCs) and bounds the MDSC rate loss for arbitrary memoryless sources. For a two-description MDSC (2DSC), the rate loss of description i with distortion D_i is defined as L_i = R_i - R(D_i), i = 1, 2, where R_i is the rate of the ith description; the joint rate loss associated with decoding the two descriptions together to achieve central distortion D_0 is measured either as L_0 = R_1 + R_2 - R(D_0) or as L_12 = L_1 + L_2. We show that for any memoryless source with variance σ², there exists a 2DSC for that source with L_1 ≤ 1/2 or L_2 ≤ 1/2 and a) L_0 ≤ 1 if D_0 ≤ D_1 + D_2 - σ², b) L_12 ≤ 1 if 1/D_0 ≤ 1/D_1 + 1/D_2 - 1/σ², c) L_0 ≤ L_G0 + 1.5 and L_12 ≤ L_G12 + 1 otherwise, where L_G0 and L_G12 are the joint rate losses of a Gaussian source with variance σ².
On the rate loss of multiple description source codes and additive successive refinement codes
The rate loss of a multi-resolution source code (MRSC) describes the difference between the rate needed to achieve distortion D_i in resolution i and the rate-distortion function R(D_i). We generalize the rate loss definition and bound the rate losses of multiple description source codes (MDSCs) and additive MRSCs (AMRSCs). For a 2-description MDSC (2DSC), the rate loss of description i with distortion D_i is defined as L_i = R_i - R(D_i), i = 1, 2, where R_i is the rate of the ith description; the rate loss associated with decoding the two descriptions together to achieve central distortion D_0 is measured as L_0 = R_1 + R_2 - R(D_0) or as L_12 = L_1 + L_2. We show that given an arbitrary source with variance σ², there exists a 2DSC with L_1 ≤ 0.5 and (a) L_0 ≤ 1 if D_0 ≤ D_1 + D_2 - σ², (b) L_12 ≤ 1 if 1/D_0 ≤ 1/D_1 + 1/D_2 - 1/σ², (c) L_0 ≤ L_G0 + 1.5 and L_12 ≤ L_G12 + 1 otherwise, where L_G0 and L_G12 are the joint rate losses of a normal (0, σ²) source. An AMRSC is an MRSC with the kth-resolution reconstruction equal to the sum of the first k side reproductions of an MDSC. We obtain one bound on the rate loss of an AMRSC.
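The rate-loss definitions used in both abstracts are easy to evaluate numerically once R(D) is known. The sketch below (my own illustration, not code from either paper) plugs in the memoryless Gaussian rate-distortion function R(D) = max(0, 0.5·log2(σ²/D)) and a made-up 2DSC operating point, so the specific numbers are purely illustrative:

```python
import math

def r_gauss(sigma2, d):
    """Shannon rate-distortion function of a memoryless Gaussian source,
    R(D) = max(0, 0.5*log2(sigma^2/D)), in bits per sample."""
    return max(0.0, 0.5 * math.log2(sigma2 / d))

def rate_losses(r1, r2, d1, d2, d0, sigma2=1.0):
    """Rate losses from the definitions above, evaluated here for a
    Gaussian source (for other sources R(D) would differ)."""
    l1 = r1 - r_gauss(sigma2, d1)           # L_1 = R_1 - R(D_1)
    l2 = r2 - r_gauss(sigma2, d2)           # L_2 = R_2 - R(D_2)
    l0 = r1 + r2 - r_gauss(sigma2, d0)      # L_0 = R_1 + R_2 - R(D_0)
    return l1, l2, l0, l1 + l2              # last entry: L_12 = L_1 + L_2

# hypothetical 2DSC operating point for a unit-variance source
l1, l2, l0, l12 = rate_losses(r1=1.2, r2=1.2, d1=0.5, d2=0.5, d0=0.2)
```

Here R(0.5) = 0.5 bit, so L_1 = L_2 = 0.7 and L_12 = 1.4 bits per sample.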
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.
Comment: Revised, restructured, significantly shortened, and minor typos have
been fixed. Accepted for publication in the IEEE Transactions on Information
Theory.
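The oversampling-plus-noise-shaping idea can be illustrated with a deliberately simplified scalar sketch: first-order shaping, 2x oversampling, and a dithered uniform scalar quantizer. This is nothing like the high-dimensional lattice scheme analyzed in the paper, but it shows the mechanism by which averaging the two descriptions attenuates the shaped quantization noise:

```python
import math
import random

def delta_sigma_md(x, step=0.25, seed=0):
    """Toy first-order dithered delta-sigma MD encoder (illustrative only).

    Each source sample is held for two oversampled instants; the
    quantization error is fed back one instant later (first-order noise
    shaping). Description k keeps every other quantized sample, so either
    description alone gives a coarse side reconstruction, while their
    average cancels most of the shaped noise (central reconstruction).
    """
    rng = random.Random(seed)
    err = 0.0                                   # fed-back quantization error
    desc = ([], [])
    for s in x:
        for k in range(2):                      # 2x oversampling
            z = rng.uniform(-step / 2, step / 2)   # subtractive dither
            u = s - err + z
            q = step * round(u / step)          # uniform scalar quantizer
            err = q - u
            desc[k].append(q - z)               # remove the dither again
    return desc

x = [math.sin(0.3 * n) for n in range(400)]
d0, d1 = delta_sigma_md(x)
side_mse = sum((a - s) ** 2 for a, s in zip(d0, x)) / len(x)
central_mse = sum(((a + b) / 2 - s) ** 2
                  for a, b, s in zip(d0, d1, x)) / len(x)
```

With these toy parameters the central MSE comes out well below the side MSE (analytically, roughly a quarter of it), mirroring the central/side trade-off that the noise shaping filter controls in the actual scheme.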
Source-Channel Diversity for Parallel Channels
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion
exponent which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
Finally, we consider a third decoding architecture: multiple description
encoding with a joint source-channel decoding. We show that this architecture
achieves the same distortion exponent as systems with optimal channel coding
diversity for continuous-state channels, and maintains the advantages of
multiple description systems for on-off channels. Thus, the multiple
description system with joint decoding achieves the best performance, from
among the three architectures considered, on both continuous-state and on-off
channels.
Comment: 48 pages, 14 figures.
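The distortion exponent defined above is simply the log-log slope of average distortion versus SNR. The following sketch (an illustration of the definition, not of any result from the paper) estimates it by least squares from a synthetic distortion curve with a known SNR^(-2) decay:

```python
import math

def distortion_exponent(snrs_db, distortions):
    """Estimate the distortion exponent from D(SNR) ~ c * SNR^(-delta)
    via a least-squares slope fit in log-log coordinates."""
    xs = [s / 10.0 for s in snrs_db]           # log10 of linear SNR
    ys = [math.log10(d) for d in distortions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope                               # delta = negative slope

# synthetic curve D = 3 * SNR^(-2), so the true exponent is 2
snrs = [10, 15, 20, 25, 30]
ds = [3.0 * (10 ** (s / 10)) ** -2 for s in snrs]
delta = distortion_exponent(snrs, ds)           # ~2.0
```

The faster-decaying scheme (larger exponent) dominates at high SNR regardless of the multiplying constant, which is why the exponent is a natural figure of merit here.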
Colored-Gaussian Multiple Descriptions: Spectral and Time-Domain Forms
It is well known that Shannon's rate-distortion function (RDF) in the colored
quadratic Gaussian (QG) case can be parametrized via a single Lagrangian
variable (the "water level" in the reverse water filling solution). In this
work, we show that the symmetric colored QG multiple-description (MD) RDF in
the case of two descriptions can be parametrized in the spectral domain via two
Lagrangian variables, which control the trade-off between the side distortion,
the central distortion, and the coding rate. This spectral-domain analysis is
complemented by a time-domain scheme-design approach: we show that the
symmetric colored QG MD RDF can be achieved by combining ideas of delta-sigma
modulation and differential pulse-code modulation. Specifically, two source
prediction loops, one for each description, are embedded within a common noise
shaping loop, whose parameters are explicitly found from the spectral-domain
characterization.
Comment: Accepted for publication in the IEEE Transactions on Information
Theory. Title has been shortened, abstract clarified, and paper
significantly restructured.
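The single-Lagrangian baseline this abstract starts from, Shannon's colored QG RDF parametrized by a water level, is easy to state concretely. This sketch (my own illustration, not code from the paper) discretizes the reverse water-filling integrals on a uniform frequency grid:

```python
import math

def reverse_waterfill(spectrum, theta):
    """Classical reverse water filling for a colored Gaussian source.

    `spectrum` holds samples S(f_k) of the source power spectral density
    on a uniform frequency grid; `theta` is the water level. Returns
    (R, D): rate in bits per sample and MSE distortion, approximating
    the spectral integrals by grid averages.
    """
    n = len(spectrum)
    # each band contributes min(theta, S(f)) distortion ...
    d = sum(min(theta, s) for s in spectrum) / n
    # ... and rate only where the spectrum sticks out above the water level
    r = sum(0.5 * math.log2(s / theta) for s in spectrum if s > theta) / n
    return r, d

# sanity check: a flat (white) unit-variance spectrum recovers the
# memoryless Gaussian RDF, R(D) = 0.5*log2(1/D)
r, d = reverse_waterfill([1.0] * 64, 0.25)
```

Sweeping theta traces out the whole R(D) curve; the MD result in the abstract generalizes exactly this picture to two Lagrangian variables controlling side distortion, central distortion, and rate.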
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers in the
proposed general framework is also constructed and illustrated geometrically;
its performance is analyzed in the high-resolution regime and exhibits a
noticeable improvement over existing MD scalar quantization schemes.
Comment: 48 pages; submitted to IEEE Transactions on Information Theory.
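For readers unfamiliar with the linear-algebra primitive named in the title, here is plain Gram-Schmidt orthogonalization. The successive "subtract the part predictable from earlier components" step is what the paper's sequential quantization scheme connects to; the quantization machinery itself is not reproduced here:

```python
def gram_schmidt(vectors):
    """Orthogonalize a list of vectors (plain lists of floats).

    Each vector is replaced by its innovation: the component orthogonal
    to the span of the previously processed vectors.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            c = dot(w, u) / dot(u, u)           # projection coefficient
            w = [wi - c * ui for wi, ui in zip(w, u)]
        if dot(w, w) > 1e-12:                   # skip (near-)dependent vectors
            basis.append(w)
    return basis

b = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
# b[0] is [1, 1] unchanged; b[1] is [0.5, -0.5], orthogonal to b[0]
```

In the successive-quantization view, each description quantizes such an innovation, which is why the scheme walks through the EGC region one orthogonalized component at a time.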