Optimal Design of Multiple Description Lattice Vector Quantizers
In the design of multiple description lattice vector quantizers (MDLVQ),
index assignment plays a critical role. In addition, one also needs to choose
the Voronoi cell size of the central lattice v, the sublattice index N, and the
number of side descriptions K to minimize the expected MDLVQ distortion, given
the total entropy rate of all side descriptions Rt and description loss
probability p. In this paper we propose a linear-time MDLVQ index assignment
algorithm for any K >= 2 balanced descriptions in any dimension, based on a
new construction of a so-called K-fraction lattice. The algorithm is greedy
in nature but is proven to be asymptotically (N -> infinity) optimal for any
K >= 2 balanced descriptions in any dimension, given Rt and p. The result is
stronger when K = 2: the optimality holds for finite N as well, under some mild
conditions. For K > 2, a local adjustment algorithm is developed to augment the
greedy index assignment, and conjectured to be optimal for finite N.
Our algorithmic study also leads to better understanding of v, N and K in
optimal MDLVQ design. For K = 2 we derive, for the first time, a
non-asymptotic closed-form expression for the expected distortion of optimal
MDLVQ in terms of p, Rt, and N. For K > 2, we tighten the current asymptotic
formula of the expected distortion, relating the optimal values of N and K to
p and Rt more precisely.
Comment: Submitted to IEEE Trans. on Information Theory, Sep 2006 (30 pages,
7 figures)
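
To make the role of index assignment concrete, here is a minimal Python
sketch of a greedy assignment for K = 2 balanced descriptions on scalar
nested lattices (central lattice Z, sublattice N*Z). The nearest-midpoint
pairing rule and all names are illustrative simplifications chosen for
clarity, not the paper's K-fraction-lattice construction, and carry none of
its optimality guarantees.

    # Greedy index assignment sketch for K = 2 descriptions, scalar case.
    # Central lattice: the integers; sublattice: multiples of N.
    import itertools

    def greedy_index_assignment(N, spread=2):
        # Map each central point c in {0, ..., N-1} (one coset period of
        # the sublattice) to a pair (a, b) of sublattice points, preferring
        # pairs whose midpoint is close to c and whose spread |a - b| is
        # small (a proxy for side distortion).
        sub = [N * k for k in range(-spread, spread + 1)]
        assignment, used = {}, set()
        for c in range(N):
            candidates = sorted(
                itertools.product(sub, sub),
                key=lambda ab: (abs((ab[0] + ab[1]) / 2 - c),
                                abs(ab[0] - ab[1])),
            )
            for pair in candidates:
                if pair not in used:   # the assignment must be injective
                    assignment[c] = pair
                    used.add(pair)
                    break
        return assignment

    print(greedy_index_assignment(N=5))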
n-Channel Asymmetric Multiple-Description Lattice Vector Quantization
We present analytical expressions for optimal entropy-constrained
multiple-description lattice vector quantizers which, under high-resolution
assumptions, minimize the expected distortion for given packet-loss
probabilities. We consider the asymmetric case where packet-loss probabilities
and side entropies are allowed to be unequal and find optimal quantizers for
any number of descriptions in any dimension. We show that the normalized second
moments of the side quantizers are given by that of an L-dimensional sphere
independent of the choice of lattices. Furthermore, we show that the optimal
bit-distribution among the descriptions is not unique. In fact, within certain
limits, bits can be arbitrarily distributed.
Comment: To appear in the proceedings of the 2005 IEEE International
Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005
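
For context, the normalized second moment of an L-dimensional sphere (ball)
referred to above is the standard quantity, independent of the radius,

    G(S_L) = \frac{\Gamma(L/2 + 1)^{2/L}}{(L + 2)\pi},

which equals 1/12 for L = 1 and tends to 1/(2 pi e) as L -> infinity.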
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers within
the proposed general framework is also constructed and illustrated
geometrically; its performance, analyzed in the high-resolution regime,
exhibits a noticeable improvement over existing MD scalar quantization
schemes.
Comment: 48 pages; submitted to IEEE Transactions on Information Theory
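
The linear-additive-noise behavior of dithered quantization that the scheme
relies on can be checked numerically. The following numpy sketch is a
scalar, illustrative stand-in for the lattice case; all parameter choices
are arbitrary.

    # Subtractive dithered uniform quantization: the error behaves like
    # additive noise, uniform on [-delta/2, delta/2] and uncorrelated with
    # the source -- the model behind the sequential quantization scheme.
    import numpy as np

    rng = np.random.default_rng(0)
    delta = 0.5                                      # quantizer step size
    x = rng.normal(size=100_000)                     # i.i.d. Gaussian source
    d = rng.uniform(-delta / 2, delta / 2, x.shape)  # dither, known to decoder

    y = delta * np.round((x + d) / delta) - d        # dithered quantizer
    err = y - x

    print("error variance:", err.var(), "vs delta^2/12 =", delta**2 / 12)
    print("correlation with source:", np.corrcoef(x, err)[0, 1])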
n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice Vector Quantization
This paper is about the design and analysis of an index-assignment (IA) based
multiple-description coding scheme for the n-channel asymmetric case. We use
entropy constrained lattice vector quantization and restrict attention to
simple reconstruction functions, which are given by the inverse IA function
when all descriptions are received or otherwise by a weighted average of the
received descriptions. We consider smooth sources with finite differential
entropy rate and MSE fidelity criterion. As in previous designs, our
construction is based on nested lattices which are combined through a single IA
function. The results are exact under high-resolution conditions and
asymptotically as the nesting ratios of the lattices approach infinity. For any
n, the design is asymptotically optimal within the class of IA-based schemes.
Moreover, in the case of two descriptions and finite lattice vector dimensions
greater than one, the performance is strictly better than that of existing
designs. In the case of three descriptions, we show that in the limit of large
lattice vector dimensions, points on the inner bound of Pradhan et al. can be
achieved. Furthermore, for three descriptions and finite lattice vector
dimensions, we show that the IA-based approach yields, in the symmetric case, a
smaller rate loss than the recently proposed source-splitting approach.
Comment: 49 pages, 4 figures. Accepted for publication in IEEE Transactions
on Information Theory, 2010
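
A minimal Python sketch of the reconstruction rule described above, with a
toy lookup table standing in for the inverse IA function and placeholder
equal weights (the paper derives the optimal weights):

    def reconstruct(received, inverse_ia, weights):
        # received: {description index: side-lattice point} for the
        # descriptions that arrived; inverse_ia maps the full tuple of
        # side points back to the central-lattice point.
        if len(received) == len(weights):        # all descriptions arrived
            key = tuple(received[i] for i in sorted(received))
            return inverse_ia[key]
        wsum = sum(weights[i] for i in received) # weighted average otherwise
        return sum(weights[i] * received[i] for i in received) / wsum

    # Toy example: two descriptions, central point 3 labeled as (0, 5).
    inverse_ia = {(0, 5): 3}
    weights = {0: 0.5, 1: 0.5}
    print(reconstruct({0: 0, 1: 5}, inverse_ia, weights))  # both arrive -> 3
    print(reconstruct({1: 5}, inverse_ia, weights))        # one arrives -> 5.0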
Deep Multiple Description Coding by Learning Scalar Quantization
In this paper, we propose a deep multiple description coding framework whose
quantizers are adaptively learned by minimizing a multiple description
compressive loss. Firstly, our framework is built upon auto-encoder
networks, consisting of a multiple description multi-scale dilated encoder
network and multiple description decoder networks. Secondly, two entropy
estimation networks are learned to estimate the information content of the
quantized tensors, which further supervises the learning of the multiple
description encoder network so that it represents the input image in fine
detail. Thirdly, a pair of scalar quantizers, accompanied by two
importance-indicator maps, is automatically learned in an end-to-end
self-supervised way. Finally, in addition to a multiple description
reconstruction loss, a multiple description structural dissimilarity
distance loss is imposed on the decoded images in the pixel domain, rather
than on feature tensors in the feature domain, to generate diverse
descriptions. Testing on two commonly used datasets verifies that our method
outperforms several state-of-the-art multiple description coding approaches
in terms of coding efficiency.
Comment: 8 pages, 4 figures. (DCC 2019: Data Compression Conference).
Testing datasets for "Deep Optimized Multiple Description Image Coding via
Scalar Quantization Learning" can be found at
https://github.com/mdcnn/Deep-Multiple-Description-Coding
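
As a rough illustration of how a scalar quantizer can be learned end to end
inside an auto-encoder, here is a minimal PyTorch sketch using a
straight-through gradient through rounding. The module and its single
learned step size are simplifications of our own, not the paper's exact
design.

    import torch

    class StraightThroughQuantizer(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.scale = torch.nn.Parameter(torch.ones(1))  # learned step

        def forward(self, z):
            zs = z / self.scale
            # forward pass: hard rounding; backward pass: identity
            zq = zs + (torch.round(zs) - zs).detach()
            return zq * self.scale

    q = StraightThroughQuantizer()
    z = torch.randn(4, 8, requires_grad=True)
    loss = (q(z) - z).pow(2).mean()
    loss.backward()   # gradients reach both the input and the step size
    print(z.grad.shape, q.scale.grad)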
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.
Comment: Revised, restructured, significantly shortened, and minor typos
have been fixed. Accepted for publication in the IEEE Transactions on
Information Theory
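
A toy numpy sketch of the basic mechanism, using a first-order scalar
Delta-Sigma loop on a 2x-oversampled source. The paper's scheme uses
high-order optimal noise-shaping filters and lattice quantizers; the filter
order, step size, and the even/odd split below are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    delta = 0.25
    x = np.repeat(rng.normal(size=512), 2)   # 2x oversampling -> redundancy
    d = rng.uniform(-delta / 2, delta / 2, x.size)   # dither

    y = np.empty_like(x)
    e_prev = 0.0                             # noise-shaping state
    for n in range(x.size):
        u = x[n] - e_prev                    # feed back the previous error
        y[n] = delta * np.round((u + d[n]) / delta) - d[n]
        e_prev = y[n] - u                    # this step's quantization error

    # Even/odd samples form the two descriptions; either one alone gives a
    # side reconstruction, their average the central reconstruction.
    desc0, desc1 = y[0::2], y[1::2]
    central = (desc0 + desc1) / 2
    print("side MSE:   ", np.mean((desc0 - x[0::2]) ** 2))
    print("central MSE:", np.mean((central - x[0::2]) ** 2))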
Network vector quantization
We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, we here present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
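
For context, the classical generalized Lloyd iteration that such network VQ
design extends can be sketched in a few lines of numpy; this is the ordinary
single-link special case, not the network algorithm itself.

    import numpy as np

    def lloyd_vq(train, num_cells, iters=50, seed=0):
        # Alternate the two locally optimal steps: nearest-neighbor
        # partition for the current codebook, then centroid update for
        # the current partition.
        rng = np.random.default_rng(seed)
        codebook = train[rng.choice(len(train), num_cells, replace=False)]
        for _ in range(iters):
            dists = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(axis=1)
            for j in range(num_cells):
                cell = train[labels == j]
                if len(cell):
                    codebook[j] = cell.mean(axis=0)
        return codebook

    train = np.random.default_rng(2).normal(size=(5000, 2))
    print(lloyd_vq(train, num_cells=4))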