
    Results on lattice vector quantization with dithering

    The statistical properties of the error in uniform scalar quantization have been analyzed by a number of authors in the past and are well understood today. The analysis has also been extended to dithered quantizers, and the advantages and limitations of dithering have been studied and well documented in the literature. Lattice vector quantization is the natural extension of uniform scalar quantization to multiple dimensions, and the analysis of the quantization error extends accordingly. The purpose of this paper is to present this extension and to elaborate on some of the new aspects that come with multiple dimensions. We show that, analogous to the one-dimensional case, the quantization error vector can be rendered independent of the input by subtractive vector dithering. In this case, the total mean square error is a function of only the underlying lattice, and there are lattices that minimize this error; we give a necessary condition on such lattices. In nonsubtractive vector dithering, we show how to render moments of the error vector independent of the input by using appropriate dither random vectors. These results readily apply to wide-sense stationary (WSS) vector random processes through the use of i.i.d. dither sequences. We also consider the problem of pre- and post-filtering around a dithered lattice quantizer and show how these filters should be designed to minimize the overall mean square quantization error. For the special case where the WSS vector process is obtained by blocking a WSS scalar process, the optimum prefilter matrix reduces to the blocked version of the well-known scalar half-whitening filter.
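    A minimal numerical sketch of the subtractive-dither result above (illustrative code, not from the paper): the cubic lattice delta * Z^n is used only because its nearest-point map is coordinate-wise rounding, and the dither is drawn uniformly over its Voronoi cell. Under these assumptions the error is uniform over the cell, with per-coordinate variance delta^2/12 regardless of the input distribution, and is uncorrelated with the input.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.25  # step size of the scaled cubic lattice delta * Z^n

def lattice_quantize(x, delta):
    """Nearest-point quantizer for the cubic lattice delta * Z^n.
    (For this lattice, coordinate-wise rounding is exact; other
    lattices need their own nearest-neighbor search.)"""
    return delta * np.round(x / delta)

# Subtractive dithering: the dither u is uniform over the fundamental
# (Voronoi) cell of the lattice and is subtracted after quantization.
n, num_samples = 4, 100_000
x = rng.laplace(size=(num_samples, n))                 # arbitrary input process
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)   # dither in the Voronoi cell

y = lattice_quantize(x + u, delta) - u                 # subtractively dithered output
e = y - x                                              # quantization error vector

# The error should be uniform over the Voronoi cell and independent of the
# input: per-coordinate variance delta^2/12 for any input law.
print("empirical error variance:", e.var(axis=0))
print("theoretical delta^2/12:  ", delta**2 / 12)
print("|corr(x, e)| (should be ~0):",
      abs(np.corrcoef(x[:, 0], e[:, 0])[0, 1]))
```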

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
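    The following toy encoder/decoder pair sketches the block-based affine representation described above (a minimal Jacquin-style scheme under assumed block sizes, not the dissertation's code); domain-block isometries and all entropy coding are omitted.

```python
import numpy as np

def downsample2(block):
    """Average disjoint 2x2 neighborhoods (e.g. an 8x8 domain block -> 4x4)."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def encode(img, r=4):
    """Exhaustive fractal encoder: for each r x r range block, find the
    2r x 2r domain block and affine gray-level map (s, o) minimizing MSE."""
    h, w = img.shape
    domains = [(i, j, downsample2(img[i:i+2*r, j:j+2*r]))
               for i in range(0, h - 2*r + 1, 2*r)
               for j in range(0, w - 2*r + 1, 2*r)]
    code = []
    for i in range(0, h, r):
        for j in range(0, w, r):
            rng_blk = img[i:i+r, j:j+r]
            best = None
            for di, dj, d in domains:
                # Least-squares affine fit rng_blk ~ s*d + o, with |s|
                # clamped below 1 so the decoding iteration stays contractive.
                dm, rm = d.mean(), rng_blk.mean()
                var = ((d - dm) ** 2).sum()
                s = 0.0 if var == 0 else ((d - dm) * (rng_blk - rm)).sum() / var
                s = float(np.clip(s, -0.9, 0.9))
                o = rm - s * dm
                err = ((s * d + o - rng_blk) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, di, dj, s, o)
            code.append((i, j) + best[1:])
    return code

def decode(code, shape, r=4, iters=10):
    """Iterate the encoded affine maps from an arbitrary start image;
    contractivity drives convergence to the fixed-point approximation."""
    img = np.zeros(shape)
    for _ in range(iters):
        nxt = np.empty_like(img)
        for i, j, di, dj, s, o in code:
            d = downsample2(img[di:di+2*r, dj:dj+2*r])
            nxt[i:i+r, j:j+r] = s * d + o
        img = nxt
    return img

# Toy usage on a 16x16 gradient "image".
img = np.indices((16, 16)).sum(axis=0).astype(float)
print("MSE:", ((decode(encode(img), img.shape) - img) ** 2).mean())
```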

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
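    A minimal sketch of the two-stage structure described above (illustrative only; the paper's contribution is the rate-distortion-optimal design of the code family, which is not reproduced here): the first-stage index selects a code from the family, and the second stage codes the block with it.

```python
import numpy as np

def two_stage_encode(blocks, codebooks):
    """First stage: choose, per block, which codebook in the family to use.
    Second stage: quantize the block with the chosen codebook."""
    encoded = []
    for blk in blocks:
        best = None
        for cb_idx, cb in enumerate(codebooks):
            # Nearest code vector in this codebook (squared-error distortion).
            d2 = ((cb - blk) ** 2).sum(axis=1)
            k = int(d2.argmin())
            if best is None or d2[k] < best[0]:
                best = (d2[k], cb_idx, k)
        encoded.append(best[1:])  # transmit (codebook index, code index)
    return encoded

def two_stage_decode(encoded, codebooks):
    return np.array([codebooks[cb][k] for cb, k in encoded])

# Toy usage: two 8-vector codebooks covering two different source modes.
rng = np.random.default_rng(1)
codebooks = [rng.normal(0, 1, (8, 4)), rng.normal(5, 1, (8, 4))]
blocks = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(5, 1, (10, 4))])
rec = two_stage_decode(two_stage_encode(blocks, codebooks), codebooks)
print("MSE:", ((rec - blocks) ** 2).mean())
```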

    A Study of trellis coded quantization for image compression

    Trellis coded quantization has recently evolved as a powerful quantization technique in the world of lossy image compression. The aim of this thesis is to investigate the potential of trellis coded quantization in conjunction with two of the most popular image transforms today: the discrete cosine transform and the discrete wavelet transform. Trellis coded quantization is compared with traditional scalar quantization, and the 4-state and 8-state trellis coded quantizers are compared in an attempt to quantify the difference in their performance. The use of pdf-optimized quantizers for trellis coded quantization is also studied. Results for the simulations performed on two gray-scale images at an uncoded bit rate of 0.48 bits/pixel are presented by way of reconstructed images and the respective peak signal-to-noise ratios. The results show that trellis coded quantization outperforms scalar quantization in both the discrete cosine transform and the discrete wavelet transform domains. The reconstructed images suggest that there is no considerable gain in going from a 4-state to an 8-state trellis coded quantizer. Results also suggest that considerable gain can be had by employing pdf-optimized quantizers for trellis coded quantization instead of uniform quantizers.
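    A compact sketch of a 4-state trellis coded quantizer in the style of Marcellin and Fischer (illustrative: the trellis tables, subset labeling, and uniform union codebook here are assumptions, not taken from the thesis). The union codebook is dealt round-robin into four subsets, and a Viterbi search finds the minimum-MSE path; each sample costs one trellis bit plus the bits for the within-subset index.

```python
import numpy as np

# 4-state trellis: from state s on input bit b, move to next_state[s][b]
# and quantize with subset subset_of[s][b] of the union codebook.
next_state = [[0, 1], [2, 3], [0, 1], [2, 3]]
subset_of  = [[0, 2], [1, 3], [2, 0], [3, 1]]

def make_subsets(delta=0.5, levels_per_subset=4):
    """Uniform union codebook with step delta, dealt round-robin into four
    subsets D0..D3 (each subset alone is a coarse quantizer of step 4*delta)."""
    n = 4 * levels_per_subset
    levels = delta * (np.arange(n) - (n - 1) / 2)
    return [levels[j::4] for j in range(4)]

def tcq_encode(x, subsets):
    """Viterbi search for the minimum-MSE path; returns the reconstruction."""
    T = len(x)
    cost = np.full(4, np.inf)
    cost[0] = 0.0                            # start in state 0
    back = np.zeros((T, 4, 2), dtype=int)    # (previous state, codeword index)
    for t in range(T):
        new_cost = np.full(4, np.inf)
        for s in range(4):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                d = subsets[subset_of[s][b]]
                k = int(np.argmin((d - x[t]) ** 2))
                c = cost[s] + (d[k] - x[t]) ** 2
                ns = next_state[s][b]
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    back[t, ns] = (s, k)
        cost = new_cost
    # Trace back the winning path; the branch bit b is determined by (ps, s).
    s = int(np.argmin(cost))
    recon = np.empty(T)
    for t in range(T - 1, -1, -1):
        ps, k = back[t, s]
        b = 0 if next_state[ps][0] == s else 1
        recon[t] = subsets[subset_of[ps][b]][k]
        s = ps
    return recon

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
print("TCQ MSE:", ((tcq_encode(x, make_subsets()) - x) ** 2).mean())
```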

    Iterative greedy algorithm for solving the FIR paraunitary approximation problem

    In this paper, a method for approximating a multi-input multi-output (MIMO) transfer function by a causal finite-impulse response (FIR) paraunitary (PU) system in a weighted least-squares sense is presented. Using a complete parameterization of FIR PU systems in terms of Householder-like building blocks, an iterative algorithm is proposed that is greedy in the sense that the observed mean-squared error is guaranteed not to increase at any iteration. For certain design problems in which there is a phase-type ambiguity in the desired response, which is formally defined in the paper, a phase feedback modification is proposed in which the phase of the FIR approximant is fed back to the desired response. With this modification in effect, it is shown that the resulting iterative algorithm not only remains greedy, but also offers a better magnitude-type fit to the desired response. Simulation results show the usefulness and versatility of the proposed algorithm with respect to the design of principal component filter bank (PCFB)-like filter banks and the FIR PU interpolation problem. Concerning the PCFB design problem, it is shown that as the McMillan degree of the FIR PU approximant increases, the resulting filter bank behaves more and more like the infinite-order PCFB, consistent with intuition. In particular, this PCFB-like behavior is shown in terms of filter response shape, multiresolution, coding gain, noise reduction with zeroth-order Wiener filtering in the subbands, and power minimization for discrete multitone (DMT)-type transmultiplexers.
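    The following sketch illustrates the Householder-like degree-one building blocks underlying such a parameterization (illustrative code under assumed conventions; the paper's greedy fitting loop itself is not reproduced): cascading blocks V(z) = (I - v v^H) + z^-1 v v^H with a unitary constant matrix yields an FIR PU system, verified here numerically via sum_n E[n+k]^H E[n] = delta[k] I.

```python
import numpy as np

def householder_block(v):
    """Degree-one FIR PU block V(z) = (I - v v^H) + z^{-1} v v^H,
    returned as its two matrix taps [V0, V1]; v must be a unit vector."""
    P = np.outer(v, v.conj())
    return [np.eye(len(v)) - P, P]

def cascade(taps_a, taps_b):
    """Matrix-polynomial convolution: the taps of A(z) B(z)."""
    out = [np.zeros_like(taps_a[0]) for _ in range(len(taps_a) + len(taps_b) - 1)]
    for i, A in enumerate(taps_a):
        for j, B in enumerate(taps_b):
            out[i + j] = out[i + j] + A @ B
    return out

def random_fir_pu(m, degree, rng):
    """E(z) = V_degree(z) ... V_1(z) U0 with random unit vectors v_k and a
    unitary constant matrix U0: an m x m FIR PU system of the given degree."""
    U0, _ = np.linalg.qr(rng.normal(size=(m, m)))
    taps = [U0]
    for _ in range(degree):
        v = rng.normal(size=m)
        v /= np.linalg.norm(v)
        taps = cascade(householder_block(v), taps)
    return taps

def is_paraunitary(taps, tol=1e-10):
    """Paraunitarity check: sum_n E[n+k]^H E[n] = I for k = 0, else 0."""
    m = taps[0].shape[0]
    for k in range(len(taps)):
        acc = sum(taps[n + k].conj().T @ taps[n] for n in range(len(taps) - k))
        target = np.eye(m) if k == 0 else np.zeros((m, m))
        if not np.allclose(acc, target, atol=tol):
            return False
    return True

rng = np.random.default_rng(3)
print(is_paraunitary(random_fir_pu(3, 4, rng)))   # True
```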

    Conjoint probabilistic subband modeling

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 125-133). By Ashok Chhabedia Popat.