n-Channel Asymmetric Multiple-Description Lattice Vector Quantization
We present analytical expressions for optimal entropy-constrained
multiple-description lattice vector quantizers which, under high-resolution
assumptions, minimize the expected distortion for given packet-loss
probabilities. We consider the asymmetric case where packet-loss probabilities
and side entropies are allowed to be unequal and find optimal quantizers for
any number of descriptions in any dimension. We show that the normalized second
moments of the side quantizers are given by that of a sphere of the same dimension,
independent of the choice of lattices. Furthermore, we show that the optimal
bit-distribution among the descriptions is not unique. In fact, within certain
limits, bits can be arbitrarily distributed.

Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005
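The sphere result above can be illustrated numerically. In two dimensions, a Monte Carlo estimate of the normalized second moment of the Voronoi cell of the square lattice Z^2 (a unit square, G = 1/12) can be compared against a disk of equal volume (G = 1/(4*pi)). This is an illustrative sketch, not the paper's derivation; the lattice choice and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2          # dimension
N = 200_000    # Monte Carlo samples

# Voronoi cell of the square lattice Z^2: the unit square centred at 0 (volume 1).
u = rng.uniform(-0.5, 0.5, size=(N, n))
G_square = np.mean(np.sum(u**2, axis=1)) / n   # normalized second moment, V = 1

# A 2-D "sphere" (disk) of the same volume: radius r with pi*r^2 = 1.
r = 1.0 / np.sqrt(np.pi)
theta = rng.uniform(0.0, 2.0 * np.pi, N)
rad = r * np.sqrt(rng.uniform(0.0, 1.0, N))    # uniform sampling over the disk
x = np.stack([rad * np.cos(theta), rad * np.sin(theta)], axis=1)
G_disk = np.mean(np.sum(x**2, axis=1)) / n

print(G_square, G_disk)   # ~0.0833 (= 1/12) vs ~0.0796 (= 1/(4*pi))
```

The disk's smaller G is the usual space-filling gain of a sphere over a cube; the abstract's claim is that the side quantizers behave as if their cells were such spheres, whatever lattice is used.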
n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice Vector Quantization
This paper is about the design and analysis of an index-assignment (IA) based
multiple-description coding scheme for the n-channel asymmetric case. We use
entropy constrained lattice vector quantization and restrict attention to
simple reconstruction functions, which are given by the inverse IA function
when all descriptions are received or otherwise by a weighted average of the
received descriptions. We consider smooth sources with finite differential
entropy rate and MSE fidelity criterion. As in previous designs, our
construction is based on nested lattices which are combined through a single IA
function. The results are exact under high-resolution conditions and
asymptotically as the nesting ratios of the lattices approach infinity. For any
n, the design is asymptotically optimal within the class of IA-based schemes.
Moreover, in the case of two descriptions and finite lattice vector dimensions
greater than one, the performance is strictly better than that of existing
designs. In the case of three descriptions, we show that in the limit of large
lattice vector dimensions, points on the inner bound of Pradhan et al. can be
achieved. Furthermore, for three descriptions and finite lattice vector
dimensions, we show that the IA-based approach yields, in the symmetric case, a
smaller rate loss than the recently proposed source-splitting approach.

Comment: 49 pages, 4 figures. Accepted for publication in IEEE Transactions on Information Theory, 201
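The reconstruction rule described above (the inverse IA function when all descriptions arrive, otherwise a weighted average of the received descriptions) can be sketched with a toy two-description example. The index-assignment table below is a made-up injective mapping for illustration only, not the paper's nested-lattice construction, and equal weights stand in for the optimized ones.

```python
# Toy two-description setup: a fine (central) lattice point is labeled with a
# pair of coarse (side) lattice points via a hypothetical injective index
# assignment. Side lattice step is 4 here; all values are illustrative.
ia = {
    0.0: (0.0, 0.0),
    1.0: (0.0, 4.0),
    2.0: (4.0, 0.0),
    3.0: (4.0, 4.0),
    4.0: (4.0, 8.0),
}
inv_ia = {pair: c for c, pair in ia.items()}   # inverse IA for the central decoder

def decode(received):
    """received maps channel index (0 or 1) -> side lattice point."""
    if len(received) == 2:
        # All descriptions present: invert the index assignment exactly.
        return inv_ia[(received[0], received[1])]
    # Otherwise: a weighted average of the received side points
    # (equal weights in this sketch).
    vals = list(received.values())
    return sum(vals) / len(vals)
```

For example, receiving both side points (4.0, 0.0) recovers the central point 2.0 exactly, while receiving only channel 0's value 4.0 falls back to the coarser estimate 4.0.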
Optimal Design of Multiple Description Lattice Vector Quantizers
In the design of multiple description lattice vector quantizers (MDLVQ),
index assignment plays a critical role. In addition, one also needs to choose
the Voronoi cell size of the central lattice v, the sublattice index N, and the
number of side descriptions K to minimize the expected MDLVQ distortion, given
the total entropy rate of all side descriptions Rt and description loss
probability p. In this paper we propose a linear-time MDLVQ index assignment
algorithm for any K >= 2 balanced descriptions in any dimension, based on a
new construction of a so-called K-fraction lattice. The algorithm is greedy in
nature but is proven to be asymptotically (N -> infinity) optimal for any K >=
2 balanced descriptions in any dimension, given Rt and p. The result is
stronger when K = 2: the optimality holds for finite N as well, under some mild
conditions. For K > 2, a local adjustment algorithm is developed to augment the
greedy index assignment, and conjectured to be optimal for finite N.
Our algorithmic study also leads to better understanding of v, N and K in
optimal MDLVQ design. For K = 2 we derive, for the first time, a
non-asymptotic closed-form expression for the expected distortion of optimal
MDLVQ in terms of p, Rt, and N. For K > 2, we tighten the current asymptotic formula of the
expected distortion, relating the optimal values of N and K to p and Rt more
precisely.

Comment: Submitted to IEEE Trans. on Information Theory, Sep 2006 (30 pages, 7 figures)
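The expected distortion that such a design minimizes can be sketched, for K = 2 balanced descriptions with independent losses of probability p, as a probability-weighted combination of central, side, and outage distortions. Here d_c, d_s, and var_x are placeholders that a concrete quantizer design would supply; this is the generic decomposition, not the paper's closed-form expression in p, Rt, and N.

```python
def expected_distortion(p, d_c, d_s, var_x):
    """Two balanced descriptions, each lost independently with probability p:
    - both arrive (prob (1-p)^2):   central distortion d_c
    - exactly one arrives (2p(1-p)): side distortion d_s
    - none arrive (prob p^2):       decoder outputs the mean, so the
                                     distortion equals the source variance."""
    return (1 - p)**2 * d_c + 2 * p * (1 - p) * d_s + p**2 * var_x
```

The design trade-off in the abstract (choosing v, N, K given Rt and p) amounts to shaping d_c and d_s, which move in opposite directions, to minimize this weighted sum.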
Multiple-Description Lattice Vector Quantization For Image And Video Coding Based On Coinciding Similar Sublattices Of An
Applications of multimedia communication are now found everywhere. Digital communication systems deal with the representation of digital data for either storage or transmission. The size of the digital data is a crucial factor for storage, and its error resiliency is a crucial factor for transmission systems. More efficient encoding algorithms are therefore required, in terms of both compression and error resiliency. Multiple-description (MD) coding has been a popular choice for robust data transmission over unreliable network channels
Multiple Description Coding Using A New Bitplane-LVQ Scheme.
In this paper, a novel Bitplane-LVQ technique to compress subband bitplane coefficients is proposed for a multiple description coding (MDC) system
Three-Description Scalar And Lattice Vector Quantization Techniques For Efficient Data Transmission
The twenty-first century has witnessed tremendous growth in communication technology, which has had a profound impact on our daily lives. Throughout history, advances in technology and communication have gone hand in hand, and the most recent developments, such as the Internet and mobile devices, have carried communication into a new phase. Most researchers working on Multiple Description Coding (MDC) are interested only in two-description coding. However, most practical applications require more than two transmitted packets to achieve acceptable quality. The goals of this work are to develop a three-description coding system of scalar quantizers using a modified nested index assignment technique with two diagonals used in the index assignment, and to develop three-description lattice vector quantizers using a designed labeling function in the four-dimensional lattice 4, since it offers more lattice points as neighbours, allowing the central decoder to achieve better reconstruction quality. This thesis emphasises a three-description MDC system using scalar quantizers and lattice vector quantizers. The proposed three-description system consists of three encoders and seven decoders (including one central decoder). A three-dimensional modified nested index assignment is implemented in the proposed three-description scalar quantization scheme; the index assignment algorithm utilises a matrix to indicate the mapping process. The thesis also suggests a new labeling algorithm that uses the lattice 4 for the three-description MDC system. Projection of a tesseract in the four-dimensional space of the lattice 4 yields four outputs, and the data are transmitted via three channels, with one of the outputs defined as time.
The three-description quantization system is efficient, providing low distortion and good peak signal-to-noise ratio (PSNR) reconstruction quality. The greater the number of diagonals k used in the index assignment of the MDSQ scheme, the higher the achievable quality of the central reconstruction. Simulation results show that the central PSNR improves to 34.53 dB at a rate of 0.1051 bpp and to 38.07 dB at 0.9346 bpp for the proposed three-description Multiple Description Scalar Quantization (MDSQ) scheme with 2k=. The percentage gain in central reconstruction quality ranges from 6.36 % to 18.97 % for the proposed three-description scalar quantizer at 2k= compared to the renowned MDSQ schemes. Moreover, the proposed three-description lattice vector quantization (3DLVQ-4) scheme outperforms the renowned MDC schemes by 4.4 % to 11.43 %. The central reconstruction quality reaches 42.63 dB and the average side reconstruction quality reaches 32.13 dB, both at a bit rate of 1.0 bpp for the proposed 3DLVQ-4 scheme
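The diagonal-based index assignment underlying MDSQ schemes like the one above can be sketched for the classic two-description case: central indices are assigned to the cells of an L x L matrix lying within a band of diagonals, so that each side index (row or column) on its own localizes the central value to a short run. The function below is a generic illustration with made-up parameters, not the thesis's three-description algorithm.

```python
def mdsq_index_assignment(L, num_diag):
    """Two-description MDSQ index assignment sketch.
    Fill an L x L matrix along anti-diagonals, keeping only cells within
    the band |i - j| < num_diag, and assign consecutive central indices.
    Returns a dict: central index -> (row index i, column index j),
    i.e. the pair of side-description indices."""
    cells = [(i, j)
             for s in range(2 * L - 1)          # anti-diagonal sum i + j
             for i in range(L) for j in range(L)
             if i + j == s and abs(i - j) < num_diag]
    return {idx: cell for idx, cell in enumerate(cells)}

# Widening the band (larger num_diag) admits more central cells per side
# index: finer central resolution at the cost of coarser side reconstructions,
# matching the rate/quality trade-off described in the abstract.
mapping = mdsq_index_assignment(3, 2)
print(mapping)
```

With L = 3 and a two-diagonal band this yields 7 usable central cells; each row or column intersects at most two of them, so a single received description still pins the value down well.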
Deep Multiple Description Coding by Learning Scalar Quantization
In this paper, we propose a deep multiple description coding framework, whose
quantizers are adaptively learned via the minimization of multiple description
compressive loss. Firstly, our framework is built upon auto-encoder networks,
which comprise a multiple description multi-scale dilated encoder network and
multiple description decoder networks. Secondly, two entropy estimation
networks are learned to estimate the informative amounts of the quantized
tensors, which can further supervise the learning of the multiple description
encoder network to represent the input image faithfully. Thirdly, a pair of
scalar quantizers accompanied by two importance-indicator maps is automatically
learned in an end-to-end self-supervised way. Finally, multiple description
structural dissimilarity distance loss is imposed on multiple description
decoded images in pixel domain for diversified multiple description generations
rather than on feature tensors in feature domain, in addition to multiple
description reconstruction loss. Through testing on two commonly used datasets,
it is verified that our method outperforms several state-of-the-art multiple
description coding approaches in terms of coding efficiency.

Comment: 8 pages, 4 figures (DCC 2019: Data Compression Conference). Testing datasets for "Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning" can be found at https://github.com/mdcnn/Deep-Multiple-Description-Codin
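The composite objective described above can be sketched in miniature: per-description reconstruction terms plus a term pushing the two decoded images apart to diversify the descriptions. Note that the paper uses a structural dissimilarity (DSSIM-style) distance in the pixel domain together with learned entropy terms; the plain-MSE stand-in and the weight alpha below are assumptions made only for illustration.

```python
import numpy as np

def md_loss(x, x1_hat, x2_hat, alpha=0.1):
    """Sketch of a multiple-description training objective.
    x:       input image (array)
    x1_hat:  image decoded from description 1
    x2_hat:  image decoded from description 2
    alpha:   made-up weight on the diversity term.
    Each decoding is pulled toward the input (reconstruction terms) while the
    two decodings are pushed apart in pixel space (dissimilarity term)."""
    rec = np.mean((x - x1_hat) ** 2) + np.mean((x - x2_hat) ** 2)
    dissim = np.mean((x1_hat - x2_hat) ** 2)
    return rec - alpha * dissim
```

Minimizing this encourages both decoders to reconstruct the input while keeping their outputs distinct, so that losing one description still leaves a useful, complementary reconstruction.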