
    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
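
    A minimal sketch of the embedded-decoding idea in Python (a toy tree-structured scalar quantizer under invented names, not the fixed- and variable-rate design algorithms the paper introduces): every tree node stores a centroid, the encoder emits the path bits, and decoding any prefix of those bits yields a progressively finer reproduction.

        import numpy as np

        def build_tree(samples, depth):
            """Recursively split the samples at the median and record
            each cell's centroid (its mean)."""
            node = {"centroid": float(np.mean(samples))}
            if depth > 0:
                thresh = float(np.median(samples))
                left = samples[samples <= thresh]
                right = samples[samples > thresh]
                if len(left) and len(right):
                    node["thresh"] = thresh
                    node["children"] = (build_tree(left, depth - 1),
                                        build_tree(right, depth - 1))
            return node

        def encode(x, node):
            """Emit one bit per tree level: the full embedded description."""
            bits = []
            while "children" in node:
                b = int(x > node["thresh"])
                bits.append(b)
                node = node["children"][b]
            return bits

        def decode(bits, node):
            """Decode any prefix of the bit stream; more bits, finer output."""
            for b in bits:
                if "children" not in node:
                    break
                node = node["children"][b]
            return node["centroid"]

        rng = np.random.default_rng(0)
        tree = build_tree(rng.normal(size=10_000), depth=4)
        bits = encode(1.3, tree)
        for k in range(len(bits) + 1):
            print(k, "bits ->", round(decode(bits[:k], tree), 3))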

    Vector quantization

    During the past ten years, vector quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched, and some comments are made on the state of the art and current research efforts.
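
    As a concrete instance of the basic design ideas sketched in such surveys, here is a hedged Python rendering of the generalized Lloyd (LBG) iteration, the standard codebook training loop; the function and variable names are my own.

        import numpy as np

        def lloyd_vq(train, codebook_size, iters=20, seed=0):
            """Alternate the two optimality conditions: nearest-neighbor
            partition of the training set, then centroid update."""
            rng = np.random.default_rng(seed)
            codebook = train[rng.choice(len(train), codebook_size, replace=False)]
            for _ in range(iters):
                # Nearest-neighbor condition: map each vector to its closest codeword.
                dists = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                assign = dists.argmin(axis=1)
                # Centroid condition: each codeword becomes the mean of its cell.
                for j in range(codebook_size):
                    cell = train[assign == j]
                    if len(cell):
                        codebook[j] = cell.mean(axis=0)
            return codebook

        rng = np.random.default_rng(1)
        codebook = lloyd_vq(rng.normal(size=(5000, 2)), codebook_size=16)

    Each pass can only lower the average distortion, so the iteration settles into a locally (not globally) optimal codebook, which is why initialization and structured variants such as tree-structured VQ matter in practice.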

    Multi-resolution VQ: parameter meaning and choice

    In multi-resolution source coding, a single code is used to give an embedded data description that may be decoded at a variety of rates. Recent work in practical multi-resolution coding treats the optimal design of fixed- and variable-rate tree-structured vector quantizers. In that work the codes are optimized for a designer-specified priority schedule over the system rates, distortions, or slopes. The method relies on a collection of parameters, which may be difficult to choose. This paper explores the meaning and choice of the multi-resolution source coding parameters.
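
    For orientation, one standard way to express such a priority schedule (an illustrative form; the paper's exact parameterization may differ) is a weighted Lagrangian over the $M$ resolutions,

        $J = \sum_{i=1}^{M} \alpha_i D_i + \sum_{i=1}^{M} \lambda_i R_i,$

    where each weight $\alpha_i \ge 0$ sets the priority of resolution $i$'s distortion $D_i$ and each multiplier $\lambda_i \ge 0$ fixes an operating slope trading rate $R_i$ against distortion; these are exactly the kinds of parameters whose meaning and choice the paper examines.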

    Coding gain in paraunitary analysis/synthesis systems

    A formal proof that bit allocation results hold for the entire class of paraunitary subband coders is presented. The problem of finding an optimal paraunitary subband coder, so as to maximize the coding gain of the system, is discussed. The bit allocation problem is analyzed for the case of the paraunitary tree-structured filter banks, such as those used for generating orthonormal wavelets. The even more general case of nonuniform filter banks is also considered. In all cases it is shown that under optimal bit allocation, the variances of the errors introduced by each of the quantizers have to be equal. Expressions for coding gains for these systems are derived.
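
    A hedged sketch of the classical high-resolution bit-allocation result underlying these claims: model the quantizer error variance in subband $k$ as $\sigma_{q,k}^2 = c \, 2^{-2 b_k} \sigma_k^2$ and minimize the total error subject to an average budget $\bar{b} = \frac{1}{N} \sum_k b_k$. The optimum is

        $b_k = \bar{b} + \frac{1}{2} \log_2 \frac{\sigma_k^2}{\left( \prod_{j=1}^{N} \sigma_j^2 \right)^{1/N}},$

    which makes every $\sigma_{q,k}^2$ equal (the equal-error-variance condition above) and gives the coding gain

        $G = \frac{\frac{1}{N} \sum_k \sigma_k^2}{\left( \prod_k \sigma_k^2 \right)^{1/N}},$

    the ratio of the arithmetic to the geometric mean of the subband variances. This is the uniform orthonormal case; the paper extends such results to tree-structured and nonuniform paraunitary banks.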

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
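
    For concreteness, the textbook benchmark such designs are measured against: a memoryless Gaussian source with variance $\sigma^2$ under squared-error distortion has rate-distortion function

        $R(D) = \max\{0, \tfrac{1}{2} \log_2 (\sigma^2 / D)\},$

    the minimum rate in bits per sample that any code, however complex, can achieve at distortion $D$.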

    Quantization as Histogram Segmentation: Optimal Scalar Quantizer Design in Network Systems

    An algorithm for scalar quantizer design on discrete-alphabet sources is proposed. The proposed algorithm can be used to design fixed-rate and entropy-constrained conventional scalar quantizers, multiresolution scalar quantizers, multiple description scalar quantizers, and Wyner–Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for conventional fixed-rate scalar quantizers and entropy-constrained scalar quantizers. For the other coding scenarios, the algorithm yields the best code among all codes that meet a given convexity constraint. In all cases, the algorithm run-time is polynomial in the size of the source alphabet. The algorithm derivation arises from a demonstration of the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph.
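
    A minimal Python instance of the shortest-path idea for the fixed-rate conventional case (a sketch under my own naming; the paper's graph construction also covers the other coding scenarios): cell boundaries are the nodes of a directed acyclic graph, an edge (i, j) costs the squared error of a single cell covering values[i:j], and the cheapest path with exactly K edges is the optimal K-level quantizer.

        import numpy as np

        def optimal_scalar_quantizer(values, probs, levels):
            """Globally optimal fixed-rate scalar quantizer for a sorted
            discrete source, by dynamic programming over cell boundaries."""
            x = np.asarray(values, float)   # sorted source alphabet
            p = np.asarray(probs, float)    # probability of each value
            n = len(x)
            # Prefix sums give each cell's mass, mean, and MSE in O(1).
            P  = np.concatenate([[0.0], np.cumsum(p)])
            M1 = np.concatenate([[0.0], np.cumsum(p * x)])
            M2 = np.concatenate([[0.0], np.cumsum(p * x * x)])

            def cell_cost(i, j):   # MSE of one cell covering values[i:j]
                mass = P[j] - P[i]
                if mass == 0.0:
                    return 0.0
                mean = (M1[j] - M1[i]) / mass
                return (M2[j] - M2[i]) - mass * mean * mean

            INF = float("inf")
            # cost[k][j]: best cost covering values[:j] with k cells.
            cost = [[INF] * (n + 1) for _ in range(levels + 1)]
            back = [[0] * (n + 1) for _ in range(levels + 1)]
            cost[0][0] = 0.0
            for k in range(1, levels + 1):
                for j in range(1, n + 1):
                    for i in range(k - 1, j):
                        c = cost[k - 1][i] + cell_cost(i, j)
                        if c < cost[k][j]:
                            cost[k][j], back[k][j] = c, i
            # Recover the optimal cell boundaries.
            cells, j = [], n
            for k in range(levels, 0, -1):
                cells.append((back[k][j], j))
                j = back[k][j]
            return cost[levels][n], cells[::-1]

    The run time is O(K n^2) for K levels and an alphabet of size n, polynomial in the alphabet size as the abstract states.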

    Nearly Optimal Vector Quantization via Linear Programming

    We present new vector quantization algorithms based on the theory developed in [LiV]. The new approach is to formulate a vector quantization problem as a 0-1 integer linear program. We first solve its relaxed linear program by linear programming techniques. Then we transform the linear program solution into a provably good solution for the vector quantization problem. These methods lead to the first known polynomial-time full-search vector quantization codebook design algorithm and tree pruning algorithm with provable worst-case performance guarantees. We also introduce the notion of pseudorandom pruned tree-structured vector quantizers. Initial experimental results on image compression are very encouraging.
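
    For orientation, one standard 0-1 formulation of codebook design (shown as an assumed shape; the paper's exact program is developed in [LiV] and may differ): with $d_{ij}$ the distortion of coding training vector $i$ with candidate codeword $j$, choose

        $\min \sum_{i,j} d_{ij} x_{ij}$ subject to $\sum_j x_{ij} = 1$ for all $i$, $\quad x_{ij} \le y_j$, $\quad \sum_j y_j \le K$, $\quad x_{ij}, y_j \in \{0, 1\}.$

    Relaxing the last constraint to $x_{ij}, y_j \in [0, 1]$ gives the linear program solved first; the rounding step then maps the fractional optimum back to a size-$K$ codebook while provably bounding the distortion given up.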

    Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archiving or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
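
    One common way to write such a constrained design (an illustrative formulation; the paper's two-stage construction is not spelled out in this abstract): with $c$ the true terrain class and $\hat{c}$ the class obtained after compression, choose the codebook to

        $\min E\|X - \hat{X}\|^2$ subject to $\Pr[\hat{c}(X) = c(X)] \ge P_0,$

    or equivalently, through a Lagrange multiplier $\lambda \ge 0$, minimize $E\|X - \hat{X}\|^2 + \lambda \Pr[\hat{c}(X) \ne c(X)]$.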

    Improved bounds for the rate loss of multiresolution source codes

    We present new bounds for the rate loss of multiresolution source codes (MRSCs). Considering an $M$-resolution code, the rate loss at the $i$th resolution with distortion $D_i$ is defined as $L_i = R_i - R(D_i)$, where $R_i$ is the rate achievable by the MRSC at stage $i$. This rate loss describes the performance degradation of the MRSC compared to the best single-resolution code with the same distortion. For two-resolution source codes, there are three scenarios of particular interest: (i) when both resolutions are equally important; (ii) when the rate loss at the first resolution is zero ($L_1 = 0$); (iii) when the rate loss at the second resolution is zero ($L_2 = 0$). The work of Lastras and Berger (IEEE Trans. Inform. Theory, vol. 47, pp. 918-926, Mar. 2001) gives constant upper bounds for the rate loss of an arbitrary memoryless source in scenarios (i) and (ii) and an asymptotic bound for scenario (iii) as $D_2$ approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii) $L_1 < 1.1610$ for all $D_2 < 0.7250$; (c) tighten the Lastras-Berger bound for scenario (i) from $L_i \le 1/2$ to $L_i < 0.3802$, $i \in \{1, 2\}$; and (d) generalize the bounds for scenarios (ii) and (iii) to $M$-resolution codes with $M \ge 2$. We also present upper bounds for the rate losses of additive MRSCs (AMRSCs). An AMRSC is a special MRSC where each resolution describes an incremental reproduction and the $k$th-resolution reconstruction equals the sum of the first $k$ incremental reproductions. We obtain two bounds on the rate loss of AMRSCs: one primarily good for low-rate coding and another which depends on the source entropy.
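
    For a sense of scale: a memoryless Gaussian source under squared error has $R(D) = \frac{1}{2} \log_2(\sigma^2 / D)$ and is successively refinable (Equitz and Cover, 1991), so an optimal MRSC attains $L_i = 0$ at every resolution; the bounds above quantify how little must be lost for sources without this property.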

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
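
    A minimal Python sketch of the two-stage structure (illustrative only; the names and the Lagrangian cost are my own, and the paper's families also include bit allocations and transforms): the first stage names which code in the family fits the current block, and the second stage codes the block with the chosen code.

        import numpy as np

        def encode_block(block, codebooks, lam=0.1):
            """Pick the (codebook, codeword) pair minimizing
            distortion + lam * rate, where rate counts both the
            first-stage index and the second-stage codeword index."""
            side_bits = np.log2(len(codebooks))   # cost of naming the code
            best = None
            for idx, cb in enumerate(codebooks):
                d = ((block[None, :] - cb) ** 2).sum(axis=1)
                j = int(d.argmin())
                cost = d[j] + lam * (side_bits + np.log2(len(cb)))
                if best is None or cost < best[0]:
                    best = (cost, idx, j)
            return best[1], best[2]

        def decode_block(idx, j, codebooks):
            return codebooks[idx][j]

    Letting the codebooks in the family differ in size (and hence rate) is what allows the Lagrangian to trade side information against fit, the same tension the two-stage codes above are designed around.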