Quantization as histogram segmentation: globally optimal scalar quantizer design in network systems
We propose a polynomial-time algorithm for optimal scalar quantizer design on discrete-alphabet sources. Special cases of the proposed approach yield optimal design algorithms for fixed-rate and entropy-constrained scalar quantizers, multi-resolution scalar quantizers, multiple description scalar quantizers, and Wyner-Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for fixed-rate and entropy-constrained scalar quantizers and constrained optima for the other coding scenarios. We derive the algorithm by demonstrating the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph
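The shortest-path view admits a compact dynamic-programming sketch. In the following illustrative Python (hypothetical names `cell_cost` and `optimal_quantizer`; not the authors' implementation), the DAG nodes are the n+1 boundaries of the sorted source alphabet, each edge (i, j) carries the squared-error cost of one contiguous cell, and a shortest path of exactly K edges yields a globally optimal fixed-rate quantizer:

```python
# Sketch: fixed-rate optimal scalar quantizer design as a K-edge shortest
# path / dynamic program over histogram boundaries (illustrative, O(K n^3)
# as written; the names and structure are assumptions, not the paper's code).

def cell_cost(x, p, i, j):
    """Expected squared error of mapping points x[i:j] to their centroid."""
    mass = sum(p[i:j])
    if mass == 0:
        return 0.0
    centroid = sum(pk * xk for pk, xk in zip(p[i:j], x[i:j])) / mass
    return sum(pk * (xk - centroid) ** 2 for pk, xk in zip(p[i:j], x[i:j]))

def optimal_quantizer(x, p, K):
    """Globally optimal K-cell scalar quantizer for a sorted discrete source.

    D[k][j] is the best distortion covering x[0:j] with k contiguous cells;
    this is a shortest path of exactly K edges through the boundary DAG.
    """
    n = len(x)
    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(K + 1)]
    back = [[0] * (n + 1) for _ in range(K + 1)]
    D[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(1, n + 1):
            for i in range(k - 1, j):
                c = D[k - 1][i] + cell_cost(x, p, i, j)
                if c < D[k][j]:
                    D[k][j], back[k][j] = c, i
    # Walk the back-pointers to recover the cell boundaries.
    bounds, j = [n], n
    for k in range(K, 0, -1):
        j = back[k][j]
        bounds.append(j)
    return D[K][n], bounds[::-1]
```

On a well-separated toy source such as {0, 1, 10, 11} with uniform probabilities and K = 2, the recovered boundaries split the alphabet between 1 and 10, as expected.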
Optimal multiple description and multiresolution scalar quantizer design
The author presents new algorithms for fixed-rate multiple description and multiresolution scalar quantizer design. The algorithms both run in time polynomial in the size of the source alphabet and guarantee globally optimal solutions. To the author's knowledge, these are the first globally optimal design algorithms for multiple description and multiresolution quantizers
Generalized multiple description vector quantization
Packet-based data communication systems suffer from packet loss under high network traffic conditions. As a result, the receiver is often left with an incomplete description of the requested data. Multiple description source coding addresses the problem of minimizing the expected distortion caused by packet loss. An equivalent problem is that of source coding for data transmission over multiple channels where each channel has some probability of breaking down. Recent work in practical multiple description coding explores the design of multiple description scalar and vector quantizers for the case of two channels or packets. This paper presents a new practical algorithm, based on a ternary tree structure, for the design of both fixed- and variable-rate multiple description vector quantizers for an arbitrary number of channels. Experimental results achieved by codes designed with this algorithm show that they perform well under a wide range of packet loss scenarios
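The expected-distortion objective can be made concrete for the two-channel case. Assuming independent per-channel loss probability p (a toy model for illustration, not the paper's design procedure), the receiver sees both descriptions, one of them, or neither:

```python
# Toy expected distortion for a two-description code under independent
# per-channel loss probability p. d0: central distortion (both received),
# d1/d2: side distortions, sigma2: source variance (nothing received).
# Illustrative model only; not taken from the paper.

def expected_distortion(p, d0, d1, d2, sigma2):
    both = (1 - p) ** 2
    only1 = (1 - p) * p   # channel 1 arrives, channel 2 lost
    only2 = p * (1 - p)   # channel 2 arrives, channel 1 lost
    none = p ** 2
    return both * d0 + only1 * d1 + only2 * d2 + none * sigma2
```

Quantizer design then trades central against side distortion to minimize this weighted sum for the anticipated loss rates.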
Deep Multiple Description Coding by Learning Scalar Quantization
In this paper, we propose a deep multiple description coding framework whose quantizers are adaptively learned by minimizing a multiple description compressive loss. First, the framework is built upon auto-encoder networks, comprising a multiple description multi-scale dilated encoder network and multiple description decoder networks. Second, two entropy estimation networks are learned to estimate the information content of the quantized tensors, which further supervises the learning of the multiple description encoder network so that it represents the input image faithfully. Third, a pair of scalar quantizers, accompanied by two importance-indicator maps, is learned automatically in an end-to-end self-supervised way. Finally, in addition to a multiple description reconstruction loss, a multiple description structural dissimilarity distance loss is imposed on the decoded images in the pixel domain, rather than on feature tensors in the feature domain, to diversify the generated descriptions. Tests on two commonly used datasets verify that our method outperforms several state-of-the-art multiple description coding approaches in terms of coding efficiency.
Comment: 8 pages, 4 figures (DCC 2019: Data Compression Conference). Testing datasets for "Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning" can be found at https://github.com/mdcnn/Deep-Multiple-Description-Codin
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as a model of information transmission over unreliable channels. This paper proposes a general framework for designing efficient multiple description quantization schemes. We provide a systematic treatment of the El Gamal-Cover (EGC) achievable MD rate-distortion region and show that any point in the EGC region can be achieved via a successive quantization scheme combined with quantization splitting. For the quadratic Gaussian case, the proposed scheme has an intrinsic connection with Gram-Schmidt orthogonalization, which implies that the whole Gaussian MD rate-distortion region is achievable with a sequential dithered lattice-based quantization scheme as the dimension of the (optimal) lattice quantizers becomes large. Moreover, this scheme is shown to be universal for all i.i.d. smooth sources, with performance no worse than that for an i.i.d. Gaussian source of the same variance, and asymptotically optimal at high resolution. A class of low-complexity MD scalar quantizers within the proposed general framework is also constructed and illustrated geometrically; its performance, analyzed in the high-resolution regime, exhibits a noticeable improvement over existing MD scalar quantization schemes.
Comment: 48 pages; submitted to IEEE Transactions on Information Theory
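As a reminder of the construction that the successive quantization scheme parallels, here is a minimal modified Gram-Schmidt sketch; this is a standard textbook routine, not code from the paper:

```python
# Modified Gram-Schmidt orthogonalization: subtract, in sequence, the
# projection of the working vector onto each earlier basis vector.
# Standard algorithm, shown only to recall the construction.

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space as `vectors`."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coef = sum(wi * bi for wi, bi in zip(w, b)) / sum(bi * bi for bi in b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        # Keep w only if it is not (numerically) the zero vector.
        if any(abs(wi) > 1e-12 for wi in w):
            basis.append(w)
    return basis
```

In the paper's setting, the analogous step is a successive quantization stage that removes the component already described by earlier descriptions.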
Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes
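The embedded property described above can be illustrated with a toy bisection quantizer on [0, 1): each additional bit halves the current cell, so decoding any prefix of the bit stream yields a valid, coarser reproduction. This is a didactic sketch, not a multiresolution vector quantizer design:

```python
# Toy embedded scalar code on [0, 1): the bit stream is a prefix-decodable
# description, and longer prefixes give strictly finer reproductions.

def encode(x, bits):
    """Embedded binary description of x in [0, 1)."""
    out, lo, hi = [], 0.0, 1.0
    for _ in range(bits):
        mid = (lo + hi) / 2
        if x < mid:
            out.append(0); hi = mid
        else:
            out.append(1); lo = mid
    return out

def decode(prefix):
    """Decode any prefix of the bit stream to the midpoint of its cell."""
    lo, hi = 0.0, 1.0
    for b in prefix:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if b == 0 else (mid, hi)
    return (lo + hi) / 2
```

Decoding two bits gives a reproduction within 1/8 of the source value; decoding all eight bits tightens this to within 2^-9, mirroring the low-to-high resolution behavior described above.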
n-Channel Asymmetric Multiple-Description Lattice Vector Quantization
We present analytical expressions for optimal entropy-constrained multiple-description lattice vector quantizers which, under high-resolution assumptions, minimize the expected distortion for given packet-loss probabilities. We consider the asymmetric case, where packet-loss probabilities and side entropies are allowed to be unequal, and find optimal quantizers for any number of descriptions in any dimension. We show that the normalized second moments of the side quantizers are given by that of a sphere of matching dimension, independent of the choice of lattices. Furthermore, we show that the optimal bit distribution among the descriptions is not unique; in fact, within certain limits, bits can be arbitrarily distributed.
Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005
Multiple Description Vector Quantization with Lattice Codebooks: Design and Analysis
The problem of designing a multiple description vector quantizer with lattice codebook Lambda is considered. A general solution is given to a labeling problem which plays a crucial role in the design of such quantizers. Numerical performance results are obtained for quantizers based on the lattices A_2 and Z^i, i = 1, 2, 4, 8, that make use of this labeling algorithm. The high-rate squared-error distortions for this family of L-dimensional vector quantizers are then analyzed for a memoryless source with probability density function p and differential entropy h(p) < infty. For any a in (0,1) and rate pair (R,R), it is shown that the two-channel distortion d_0 and the channel 1 (or channel 2) distortions d_s satisfy lim_{R -> infty} d_0 2^{2R(1+a)} = (1/4) G(Lambda) 2^{2h(p)} and lim_{R -> infty} d_s 2^{2R(1-a)} = G(S_L) 2^{2h(p)}, where G(Lambda) is the normalized second moment of a Voronoi cell of the lattice Lambda and G(S_L) is the normalized second moment of a sphere in L dimensions.
Comment: 46 pages, 14 figures