A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage code can be regarded as a vector quantizer that "quantizes" the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^(-1) log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^(-1)) when the universe of sources is countable, and as O(n^(-1+ε)) when the universe of sources is infinite-dimensional, under appropriate conditions.
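The generalized Lloyd algorithm mentioned above alternates a nearest-neighbor step and a centroid step until the design converges. A minimal sketch for an ordinary (one-stage) vector quantizer under squared-error distortion follows; in the paper's two-stage setting the "codewords" would instead be candidate block codes and the distortion would be the induced rate-distortion cost, but the iteration has the same shape. All names here are illustrative, not from the paper.

```python
import numpy as np

def lloyd_vq(data, k, iters=50, seed=0):
    """Generalized Lloyd (k-means style) design of a k-codeword
    vector quantizer under squared-error distortion."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook from k distinct training vectors.
    codebook = data[rng.choice(len(data), size=k, replace=False)]
    assign = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        # Nearest-neighbor condition: map each vector to its closest codeword.
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Centroid condition: move each codeword to the mean of its cell.
        for j in range(k):
            cell = data[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook, assign

# Toy source: two well-separated 2-D clusters.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.1, (100, 2)),
                       rng.normal(5.0, 0.1, (100, 2))])
codebook, assign = lloyd_vq(data, k=2)
```

Each iteration can only lower the average distortion, which is why the procedure yields locally (not globally) optimal designs.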
Rates of convergence in adaptive universal vector quantization
We consider the problem of adaptive universal quantization. By adaptive quantization we mean quantization for which the delay associated with encoding the jth sample in a sequence of length n is bounded for all n>j. We demonstrate the existence of an adaptive universal quantization algorithm for which any weighted sum of the rate and the expected mean square error converges almost surely and in expectation as O(√(log log n/log n)) to the corresponding weighted sum of the rate and the distortion-rate function at that rate.
One-pass adaptive universal vector quantization
The authors introduce a one-pass adaptive universal quantization technique for real, bounded-alphabet, stationary sources. The algorithm runs online, without any prior knowledge of the statistics of the sources it might encounter, and asymptotically achieves ideal performance on all sources that it sees. The system consists of an encoder and a decoder. At increasing intervals, the encoder refines its codebook using knowledge of the incoming data symbols, and this codebook is then described to the decoder in the form of updates to the previous codebook. The accuracy with which the codebook is described increases as the number of symbols seen grows, and with it the accuracy to which the decoder knows the codebook.
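The key mechanism above is that the encoder and decoder stay synchronized by exchanging codebook updates rather than full codebooks. A hypothetical sketch of one update round, assuming squared-error distortion and per-codeword deltas rounded to a description precision (the function and parameter names are illustrative; the precision would be tightened as more symbols are seen):

```python
import numpy as np

def codebook_update(old_codebook, samples, precision):
    """One refinement round: run a single Lloyd step on the recent
    samples, then return the update message as per-codeword deltas
    rounded to `precision`, so the description cost stays bounded."""
    d2 = ((samples[:, None, :] - old_codebook[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    new = old_codebook.copy()
    for j in range(len(old_codebook)):
        cell = samples[assign == j]
        if len(cell):
            new[j] = cell.mean(axis=0)
    # Only the rounded deltas are transmitted to the decoder.
    return np.round((new - old_codebook) / precision) * precision

# Encoder and decoder start from the same coarse codebook.
enc_cb = np.array([[0.0, 0.0], [1.0, 1.0]])
dec_cb = enc_cb.copy()
rng = np.random.default_rng(0)
samples = rng.normal(0.5, 0.05, (50, 2))

msg = codebook_update(enc_cb, samples, precision=0.01)
dec_cb += msg   # decoder applies the received update
enc_cb += msg   # encoder tracks the decoder's codebook exactly
```

Because the encoder applies the same rounded deltas it transmits, both sides hold identical codebooks after every round, which is what makes the one-pass scheme decodable.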
Joint universal lossy coding and identification of stationary mixing sources with general alphabets
We consider the problem of joint universal variable-rate lossy coding and identification for parametric classes of stationary β-mixing sources with general (Polish) alphabets. Compression performance is measured in terms of Lagrangians, while identification performance is measured by the variational distance between the true source and the estimated source. Provided that the sources are mixing at a sufficiently fast rate and satisfy certain smoothness and Vapnik-Chervonenkis learnability conditions, it is shown that, for bounded metric distortions, there exist universal schemes for joint lossy compression and identification whose Lagrangian redundancies converge to zero as O(√(V_n log n/n)) as the block length n tends to infinity, where V_n is the Vapnik-Chervonenkis dimension of a certain class of decision regions defined by the n-dimensional marginal distributions of the sources; furthermore, for each n, the decoder can identify the n-dimensional marginal of the active source up to a ball of radius O(√(V_n log n/n)) in variational distance, eventually with probability one. The results are supplemented by several examples of parametric sources satisfying the regularity conditions.
Comment: 16 pages, 1 figure; accepted to IEEE Transactions on Information Theory
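The identification criterion used here, variational (total variation) distance, has a simple concrete form on a finite alphabet: half the L1 distance between the two probability mass functions. A minimal illustration (the distributions are made up for the example):

```python
def variational_distance(p, q):
    """Total variation distance between two distributions given as
    dicts over a common (finite) alphabet: half the L1 distance."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0))
                     for x in set(p) | set(q))

true_marginal = {"a": 0.50, "b": 0.30, "c": 0.20}
estimated     = {"a": 0.45, "b": 0.35, "c": 0.20}
d = variational_distance(true_marginal, estimated)
# d is 0.05: the estimate misallocates 5% of probability mass.
```

Saying the decoder identifies the marginal "up to a ball of radius r in variational distance" means exactly that this quantity is at most r, eventually with probability one.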
Randomized Quantization and Source Coding with Constrained Output Distribution
This paper studies fixed-rate randomized vector quantization under the
constraint that the quantizer's output has a given fixed probability
distribution. A general representation of randomized quantizers that includes
the common models in the literature is introduced via appropriate mixtures of
joint probability measures on the product of the source and reproduction
alphabets. Using this representation and results from optimal transport theory,
the existence of an optimal (minimum distortion) randomized quantizer having a
given output distribution is shown under various conditions. For sources with
densities and the mean square distortion measure, it is shown that this optimum
can be attained by randomizing quantizers having convex codecells. For
stationary and memoryless source and output distributions a rate-distortion
theorem is proved, providing a single-letter expression for the optimum
distortion in the limit of large block lengths.
Comment: To appear in the IEEE Transactions on Information Theory
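One of the common randomized-quantizer models that such a mixture representation covers is the subtractive dithered uniform quantizer, where encoder and decoder share a random dither. A minimal scalar sketch (my own illustration, not code from the paper): with dither uniform on [-Δ/2, Δ/2), the reconstruction error is uniform on that interval regardless of the source, so the error distribution is pinned down by the construction.

```python
import numpy as np

def dithered_quantizer(x, step, dither):
    """Subtractive dithered uniform quantizer with cell width `step`.
    Encoder quantizes x + dither; decoder subtracts the shared dither."""
    return step * np.round((x + dither) / step) - dither

rng = np.random.default_rng(0)
step = 0.5
x = rng.normal(size=100_000)                              # source samples
dither = rng.uniform(-step / 2, step / 2, size=x.shape)   # shared randomness

err = x - dithered_quantizer(x, step, dither)
# err is uniform on [-step/2, step/2), independent of the source,
# so its mean is ~0 and its variance is ~step**2 / 12.
```

This is only one point in the design space the paper considers; its general representation allows arbitrary mixtures of joint measures, subject to the fixed output-distribution constraint.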