A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^{-1} log n when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ϵ}) when the universe of sources is infinite-dimensional, under appropriate conditions.
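The two-stage structure described above can be illustrated with a toy sketch: scalar samples, squared-error distortion, and a first stage that simply picks the codebook with least distortion per block, followed by one generalized-Lloyd (centroid) refit. All function names are hypothetical; this is not the paper's implementation.

```python
import numpy as np

def quantize(block, codebook):
    """Second stage: map each sample of `block` to its nearest codeword."""
    idx = np.argmin(np.abs(block[:, None] - codebook[None, :]), axis=1)
    return codebook[idx]

def two_stage_encode(blocks, codebooks):
    """First stage: pick, per block, the codebook with least squared error."""
    choices, recons = [], []
    for b in blocks:
        errs = [float(np.sum((b - quantize(b, cb)) ** 2)) for cb in codebooks]
        k = int(np.argmin(errs))
        choices.append(k)
        recons.append(quantize(b, codebooks[k]))
    return choices, recons

def lloyd_update(blocks, choices, codebooks):
    """One generalized-Lloyd step: refit each codebook, by the centroid
    rule, on the samples of the blocks that selected it."""
    new = []
    for k, cb in enumerate(codebooks):
        assigned = [b for b, c in zip(blocks, choices) if c == k]
        if not assigned:
            new.append(cb)
            continue
        data = np.concatenate(assigned)
        idx = np.argmin(np.abs(data[:, None] - cb[None, :]), axis=1)
        new.append(np.array([data[idx == j].mean() if np.any(idx == j) else cb[j]
                             for j in range(len(cb))]))
    return new
```

Because the centroid step never increases distortion under the current assignment, alternating the two steps gives the locally optimal design loop the abstract refers to.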
Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
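The embedded-description property can be illustrated with a successive-approximation scalar quantizer, where every prefix of the bitstream is itself a valid, coarser description. This is a minimal sketch of the property, not the paper's multiresolution vector quantizer design.

```python
def sa_encode(x, lo=0.0, hi=1.0, nbits=8):
    """Successive-approximation encoder: each bit halves the current cell,
    so any prefix of the output is itself a (coarser) description of x."""
    bits = []
    for _ in range(nbits):
        mid = (lo + hi) / 2
        if x >= mid:
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits

def sa_decode(bits, lo=0.0, hi=1.0):
    """Decode any prefix of the bitstream; longer prefixes give
    higher-resolution reproductions (midpoint of the final cell)."""
    for b in bits:
        mid = (lo + hi) / 2
        if b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

After k bits the cell has width 2^{-k}, so the reproduction error is at most 2^{-(k+1)}: resolution improves monotonically with the decoded prefix length, exactly the embedded behavior the abstract describes.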
Variable dimension weighted universal vector quantization and noiseless coding
A new algorithm for variable dimension weighted universal coding is introduced. Combining the multi-codebook system of weighted universal vector quantization (WUVQ), the partitioning technique of variable dimension vector quantization, and the optimal design strategy common to both, variable dimension WUVQ allows mixture sources to be effectively carved into their component subsources, each of which can then be encoded with the codebook best matched to that subsource. Application of variable dimension WUVQ to a sequence of medical images provides up to 4.8 dB improvement in signal-to-quantization-noise ratio over WUVQ and up to 11 dB improvement over a standard full-search vector quantizer followed by an entropy code. The optimal partitioning technique can likewise be applied with a collection of noiseless codes, as found in weighted universal noiseless coding (WUNC). The resulting algorithm for variable dimension WUNC is also described.
Convergence-Optimal Quantizer Design of Distributed Contraction-based Iterative Algorithms with Quantized Message Passing
In this paper, we study the convergence behavior of distributed iterative algorithms with quantized message passing. We first introduce general iterative function evaluation algorithms for solving fixed-point problems distributively. We then analyze the convergence of the distributed algorithms, e.g., the Jacobi and Gauss-Seidel schemes, under quantized message passing. Based on the derived closed-form convergence performance, we propose two quantizer designs, namely the time-invariant convergence-optimal quantizer (TICOQ) and the time-varying convergence-optimal quantizer (TVCOQ), to minimize the effect of the quantization error on the convergence. We also study the tradeoff between the convergence error and message passing overhead for both TICOQ and TVCOQ. As an example, we apply the TICOQ and TVCOQ designs to the iterative waterfilling algorithm for the MIMO interference game. Comment: 17 pages, 9 figures; accepted to IEEE Transactions on Signal Processing.
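A minimal sketch of the setting: a Jacobi-style iteration for a linear contraction x ↦ Ax + b in which each node only sees uniformly quantized copies of the other nodes' values. The uniform mid-tread quantizer here is an assumption for illustration; the TICOQ/TVCOQ designs themselves are not reproduced.

```python
import numpy as np

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer applied to the passed messages."""
    return step * np.round(x / step)

def jacobi_quantized(A, b, step, iters=100):
    """Jacobi-style fixed-point iteration x <- A x + b, where the update
    uses only the quantized messages rather than the exact values."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x = A @ uniform_quantize(x, step) + b
    return x
```

For a contraction (||A|| < 1) the iterates stay within an O(step) neighborhood of the true fixed point, which is why the quantizer step directly controls the convergence error the abstract analyzes.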
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
Adaptive Quantizers for Estimation
In this paper, adaptive estimation based on noisy quantized observations is studied. A low-complexity adaptive algorithm using a quantizer with adjustable input gain and offset is presented. Three possible scalar models for the parameter to be estimated are considered: constant, Wiener process, and Wiener process with deterministic drift. After showing that the algorithm is asymptotically unbiased for estimating a constant, it is shown, in all three cases, that the asymptotic mean squared error depends on the Fisher information for the quantized measurements. It is also shown that the loss of performance due to quantization depends approximately on the ratio of the Fisher information for quantized and continuous measurements. At the end of the paper, the theoretical results are validated through simulation under two different classes of noise: generalized Gaussian noise and Student's t noise.
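A toy version of the adaptive idea, assuming a 1-bit quantizer whose offset (threshold) is nudged by a fixed step mu according to the sign of each quantized observation, so that it tracks the unknown constant. The paper's actual algorithm, which also adapts the input gain, is more elaborate; everything here is illustrative.

```python
import random

def adaptive_offset_estimator(samples, mu=0.05):
    """1-bit adaptive estimator (sketch): the quantizer threshold is moved
    by +/- mu per sample, so it settles around the noise median, i.e. the
    parameter when the noise is symmetric and zero-median."""
    offset = 0.0
    for y in samples:
        bit = 1.0 if y >= offset else -1.0  # the only information retained
        offset += mu * bit                  # adapt the threshold toward y
    return offset
```

The final offset hovers within an O(mu) band around the parameter, which is the intuition behind the asymptotic mean-squared-error results in the abstract.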
Weighted universal bit allocation: optimal multiple quantization matrix coding
We introduce a two-stage bit allocation algorithm analogous to the algorithm for weighted universal vector quantization (WUVQ). The encoder uses a collection of possible bit allocations (typically in the form of a collection of quantization matrices) rather than a single bit allocation (or single quantization matrix). We describe both an encoding algorithm for achieving optimal compression using a collection of bit allocations and a technique for designing locally optimal collections of bit allocations. We demonstrate performance on a JPEG-style coder using the mean squared error (MSE) distortion measure. On a sequence of medical brain scans, the algorithm achieves up to 2.5 dB improvement over a single bit allocation system, up to 5 dB improvement over a WUVQ with first- and second-stage vector dimensions equal to 16 and 4 respectively, and up to 12 dB improvement over an entropy constrained vector quantizer (ECVQ) using 4 dimensional vectors.
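The encoding side can be sketched as a per-block Lagrangian selection over a collection of quantization matrices. This toy uses the nonzero-coefficient count as a crude rate proxy and hypothetical names throughout; it shows the selection principle, not the paper's optimal encoder.

```python
import numpy as np

def encode_with_matrix_collection(blocks, qmats, lam):
    """First stage (sketch): for each coefficient block, pick the
    quantization matrix minimizing the Lagrangian cost D + lam * R,
    where R is approximated by the nonzero quantized-coefficient count.
    The chosen index is the side information sent to the decoder."""
    out = []
    for b in blocks:
        best = None
        for k, q in enumerate(qmats):
            qb = np.round(b / q)                     # quantize
            d = float(np.sum((b - qb * q) ** 2))     # distortion (MSE sum)
            r = int(np.count_nonzero(qb))            # crude rate proxy
            cost = d + lam * r
            if best is None or cost < best[0]:
                best = (cost, k, qb * q)
        out.append((best[1], best[2]))               # (matrix index, recon)
    return out
```

With a nonzero Lagrange multiplier, low-activity blocks choose a coarse matrix (cheap, little extra distortion) while busy blocks choose a fine one, which is the benefit of carrying a collection of bit allocations.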
Quadratic optimal functional quantization of stochastic processes and numerical applications
In this paper, we present an overview of the recent developments in functional quantization of stochastic processes, with an emphasis on the quadratic case. Functional quantization is a way to approximate a process, viewed as a Hilbert-valued random variable, using a nearest-neighbour projection on a finite codebook. Special emphasis is placed on computational aspects and numerical applications, in particular the pricing of some path-dependent European options. Comment: 41 pages.
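The nearest-neighbour projection at the heart of functional quantization can be sketched on time-discretized paths, with the L^2([0,T]) distance approximated by a Riemann sum. The hand-made codebook below is purely illustrative, not an optimal quadratic codebook.

```python
import numpy as np

def functional_quantize(path, codebook, dt):
    """Nearest-neighbour projection in (discretized) L^2: return the index
    of the codebook path closest to `path` in squared L^2 distance."""
    dists = [float(np.sum((path - c) ** 2)) * dt for c in codebook]
    return int(np.argmin(dists))
```

In the pricing applications mentioned above, an expectation over paths is then replaced by a weighted sum of the payoff evaluated at the codebook paths.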
A user's guide for the signal processing software for image and speech compression developed in the Communications and Signal Processing Laboratory (CSPL), version 1
A complete documentation of the software developed in the Communications and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image- and speech-type signals are included. Also, programs for zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.