n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice Vector Quantization
This paper presents the design and analysis of an index-assignment (IA) based
multiple-description coding scheme for the n-channel asymmetric case. We use
entropy-constrained lattice vector quantization and restrict attention to
simple reconstruction functions, which are given by the inverse IA function
when all descriptions are received, or otherwise by a weighted average of the
received descriptions. We consider smooth sources with finite differential
entropy rate and the MSE fidelity criterion. As in previous designs, our
construction is based on nested lattices which are combined through a single IA
function. The results are exact under high-resolution conditions and
asymptotically as the nesting ratios of the lattices approach infinity. For any
n, the design is asymptotically optimal within the class of IA-based schemes.
Moreover, in the case of two descriptions and finite lattice vector dimensions
greater than one, the performance is strictly better than that of existing
designs. In the case of three descriptions, we show that in the limit of large
lattice vector dimensions, points on the inner bound of Pradhan et al. can be
achieved. Furthermore, for three descriptions and finite lattice vector
dimensions, we show that the IA-based approach yields, in the symmetric case, a
smaller rate loss than the recently proposed source-splitting approach.

Comment: 49 pages, 4 figures. Accepted for publication in IEEE Transactions on Information Theory, 201
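
As a rough illustration of the reconstruction rule sketched in this abstract (the notation below, including the IA map \alpha, the lattice points \lambda_i, the weights w_i, and the received subset \mathcal{K}, is ours, not the paper's):

\[
\hat{X} =
\begin{cases}
\alpha^{-1}(\lambda_1,\dots,\lambda_n), & \text{if all } n \text{ descriptions are received},\\[4pt]
\left(\sum_{i\in\mathcal{K}} w_i\right)^{-1} \sum_{i\in\mathcal{K}} w_i \lambda_i, & \text{if only } \mathcal{K}\subsetneq\{1,\dots,n\} \text{ is received},
\end{cases}
\]

where the weights would typically be chosen to minimize the MSE of the partial reconstruction.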
Asymmetric Gaussian Multiple Descriptions and Asymmetric Multilevel Diversity Coding
We consider the multiple description source coding problem with a Gaussian source and mean squared error distortion. We obtain an outer bound on the rate region of the problem. We also derive an inner bound based on the successive refinability of the Gaussian source and on multilevel diversity coding. Our gap analysis shows that, in the worst case, the difference between the two bounds is less than a constant number of bits.
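
For context, the successive refinability invoked here is the classical Equitz-Cover property that a Gaussian source under MSE can be refined without rate loss:

\[
R(D) = \frac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad
R(D_2) - R(D_1) = \frac{1}{2}\log_2\frac{D_1}{D_2} \quad \text{for } D_2 < D_1 \le \sigma^2,
\]

so refining a description from distortion D_1 down to D_2 costs exactly the rate increment \frac{1}{2}\log_2(D_1/D_2), which is what makes a layered construction of the inner bound via multilevel diversity coding natural.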
The Explicit Coding Rate Region of Symmetric Multilevel Diversity Coding
It is well known that {\em superposition coding}, namely separately encoding
the independent sources, is optimal for symmetric multilevel diversity coding
(SMDC) (Yeung-Zhang 1999). However, the characterization of the coding rate
region therein involves uncountably many linear inequalities and the constant
term (i.e., the lower bound) in each inequality is given in terms of the
solution of a linear optimization problem. Thus this implicit characterization
of the coding rate region does not enable the determination of the
achievability of a given rate tuple. In this paper, we first obtain closed-form
expressions of these uncountably many inequalities. Then we identify a finite
subset of inequalities that is sufficient for characterizing the coding rate
region. This gives an explicit characterization of the coding rate region. We
further show by the symmetry of the problem that only a much smaller subset of
this finite set of inequalities needs to be verified in determining the
achievability of a given rate tuple. Yet, the cardinality of this smaller set
grows at least exponentially fast with the number of levels. We also present a subset entropy
inequality which, together with our explicit characterization of the coding
rate region, is sufficient for proving the optimality of superposition coding.
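
As a concrete sketch of superposition coding in this symmetric setting (our illustration, not the paper's notation: L independent sources X_1,\dots,X_L, L encoders, and any \alpha encoders together must recover X_1,\dots,X_\alpha): encode each X_\alpha separately with an (L,\alpha) MDS erasure code, so that every encoder carries H(X_\alpha)/\alpha bits for level \alpha. The resulting per-encoder rate is

\[
R = \sum_{\alpha=1}^{L} \frac{H(X_\alpha)}{\alpha},
\]

and the explicit characterization in the paper makes it possible to check directly whether a general rate tuple, not just this symmetric point, is achievable.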