12,183 research outputs found

    Parental finite state vector quantizer and vector wavelet transform-linear predictive coding.

    by Lam Chi Wah. Thesis submitted in December 1997. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 89-91). Abstract also in Chinese.
    Contents:
    Chapter 1: Introduction to Data Compression and Image Coding
        1.1 Introduction
        1.2 Fundamental Principle of Data Compression
        1.3 Some Data Compression Algorithms
        1.4 Image Coding Overview
        1.5 Image Transformation
        1.6 Quantization
        1.7 Lossless Coding
    Chapter 2: Subband Coding and Wavelet Transform
        2.1 Subband Coding Principle
        2.2 Perfect Reconstruction
        2.3 Multi-Channel System
        2.4 Discrete Wavelet Transform
    Chapter 3: Vector Quantization (VQ)
        3.1 Introduction
        3.2 Basic Vector Quantization Procedure
        3.3 Codebook Searching and the LBG Algorithm
            3.3.1 Codebook
            3.3.2 LBG Algorithm
        3.4 Problem of VQ and Variations of VQ
            3.4.1 Classified VQ (CVQ)
            3.4.2 Finite State VQ (FSVQ)
        3.5 Vector Quantization on Wavelet Coefficients
    Chapter 4: Vector Wavelet Transform - Linear Predictive Coding
        4.1 Image Coding Using Wavelet Transform with Vector Quantization
            4.1.1 Future Standard
            4.1.2 Drawback of DCT
            4.1.3 Wavelet Coding and VQ, the Future Trend
        4.2 Mismatch between Scalar Transformation and VQ
        4.3 Vector Wavelet Transform (VWT)
        4.4 Example of Vector Wavelet Transform
        4.5 Vector Wavelet Transform - Linear Predictive Coding (VWT-LPC)
        4.6 An Example of VWT-LPC
    Chapter 5: Vector Quantization with Inter-band Bit Allocation (IBBA)
        5.1 Bit Allocation Problem
        5.2 Bit Allocation for Wavelet Subband Vector Quantizer
            5.2.1 Multiple Codebooks
            5.2.2 Inter-band Bit Allocation (IBBA)
    Chapter 6: Parental Finite State Vector Quantizers (PFSVQ)
        6.1 Introduction
        6.2 Parent-Child Relationship Between Subbands
        6.3 Wavelet Subband Vector Structures for VQ
            6.3.1 VQ on Separate Bands
            6.3.2 Interband Information for Intraband Vectors
            6.3.3 Cross-band Vector Methods
        6.4 Parental Finite State Vector Quantization Algorithms
            6.4.1 Scheme I: Parental Finite State VQ with Parent Index Equal to Child Class Number
            6.4.2 Scheme II: Parental Finite State VQ with Parent Index Larger than Child Class Number
    Chapter 7: Simulation Results
        7.1 Introduction
        7.2 Simulation Results of Vector Wavelet Transform (VWT)
        7.3 Simulation Results of Vector Wavelet Transform - Linear Predictive Coding (VWT-LPC)
            7.3.1 First Test
            7.3.2 Second Test
            7.3.3 Third Test
        7.4 Simulation Results of Vector Quantization Using Inter-band Bit Allocation (IBBA)
        7.5 Simulation Results of Parental Finite State Vector Quantizers (PFSVQ)
    Chapter 8: Conclusion
    References
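
    The outline above names the LBG algorithm (Section 3.3.2) as the codebook design method underlying the vector quantizers studied in the thesis. As a point of reference, the following is a minimal NumPy sketch of generic LBG codebook training by codeword splitting; the function name, the splitting perturbation, and the stopping rule are illustrative choices, and none of this reproduces the thesis's parental finite state or vector wavelet schemes.

```python
import numpy as np

def lbg_codebook(training_vectors, codebook_size, epsilon=1e-3, perturb=1e-2):
    """Design a VQ codebook with LBG-style codeword splitting (illustrative)."""
    X = np.asarray(training_vectors, dtype=float)
    codebook = X.mean(axis=0, keepdims=True)             # start from the global centroid

    while codebook.shape[0] < codebook_size:
        # Split every codeword into two perturbed copies, doubling the codebook.
        codebook = np.vstack([codebook + perturb, codebook - perturb])

        prev_distortion = np.inf
        while True:
            # Nearest-neighbour assignment under squared Euclidean distance.
            d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            distortion = d2[np.arange(len(X)), labels].mean()

            # Centroid update; empty cells keep their current codeword.
            for k in range(codebook.shape[0]):
                members = X[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)

            if prev_distortion - distortion < epsilon * max(distortion, 1e-12):
                break
            prev_distortion = distortion

    return codebook

# Example: a 16-codeword codebook for 4-dimensional training vectors.
rng = np.random.default_rng(0)
cb = lbg_codebook(rng.normal(size=(2000, 4)), codebook_size=16)
print(cb.shape)                                           # (16, 4)
```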

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
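
    The two-stage structure described above keeps a family of codes and, for each block, transmits the index of the chosen first-stage code followed by that code's description of the block. The sketch below illustrates only the per-block selection step under simplifying assumptions: the first-stage family is a set of pre-designed fixed-rate VQ codebooks, selection minimizes a Lagrangian rate-distortion cost, and entropy coding, parsing, and codebook design are all omitted.

```python
import numpy as np

def encode_block_two_stage(block, codebooks, lam):
    """Choose the best (codebook, codeword) pair for one block (illustrative).

    block:     flattened image block, shape (d,)
    codebooks: list of (K_i, d) arrays, the first-stage family of VQ codebooks
    lam:       Lagrange multiplier trading distortion against rate in bits
    """
    header_bits = np.log2(len(codebooks))      # cost of naming the first-stage code
    best = None
    for cb_index, cb in enumerate(codebooks):
        d2 = ((cb - block) ** 2).sum(axis=1)   # distortion of every codeword
        cw_index = int(d2.argmin())
        rate = header_bits + np.log2(len(cb))  # fixed-rate second stage
        cost = d2[cw_index] + lam * rate       # Lagrangian rate-distortion cost
        if best is None or cost < best[0]:
            best = (cost, cb_index, cw_index)
    return best[1], best[2]

# Example: a two-codebook family for 4x4 blocks, one "smooth" and one "busy".
rng = np.random.default_rng(1)
family = [rng.normal(scale=s, size=(64, 16)) for s in (0.1, 1.0)]
print(encode_block_two_stage(rng.normal(size=16), family, lam=0.05))
```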

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
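
    For readers who want the single criterion that most of these design algorithms optimize, the standard operational rate-distortion Lagrangian is written out below; this is textbook background rather than a formula quoted from the article.

```latex
% Operational rate-distortion design: among the candidate codes or models C,
% choose the one minimizing distortion plus lambda times rate, for lambda >= 0.
\[
  C^{*} \;=\; \arg\min_{C}\ \bigl[\, D(C) + \lambda\, R(C) \,\bigr].
\]
```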

    Hashing for Similarity Search: A Survey

    Similarity search (nearest neighbor search) is the problem of finding, in a large database, the data items whose distances to a query item are the smallest. Various methods have been developed to address this problem, and recently a lot of effort has been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work on locality sensitive hashing. We divide the hashing algorithms into two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, and the distance measure and search scheme in the hash coding space.
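
    As a concrete illustration of the first category, the sketch below implements sign-random-projection (random hyperplane) hashing for cosine similarity, one classic locality sensitive hashing family; the class name and bucket-table lookup are illustrative and do not correspond to any particular method reviewed in the survey.

```python
import numpy as np

class HyperplaneLSH:
    """Sign-random-projection LSH for cosine similarity (illustrative)."""

    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))   # data-independent projections

    def hash(self, X):
        # One binary code (a tuple of 0/1 bits) per row of X.
        bits = (np.atleast_2d(X) @ self.planes.T) > 0
        return [tuple(row.astype(int)) for row in bits]

# Index a small data set by bucket and probe it with a near-duplicate query.
rng = np.random.default_rng(2)
data = rng.normal(size=(1000, 32))
lsh = HyperplaneLSH(dim=32, n_bits=12)

table = {}
for i, code in enumerate(lsh.hash(data)):
    table.setdefault(code, []).append(i)

query = data[0] + 0.01 * rng.normal(size=32)
candidates = table.get(lsh.hash(query)[0], [])
print(0 in candidates, len(candidates))
```

    Items whose codes collide with the query's code form the candidate set, which a full system would then re-rank by exact distance.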

    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
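
    The embedded-description property can be imitated at the scalar level by a successive-approximation quantizer: each additional bit halves the current interval, so decoding any prefix of the bit stream yields a coarser reproduction of the same value. The sketch below is only that toy illustration of embedded decoding, not the fixed- or variable-rate multiresolution vector quantizer design algorithms introduced in the paper.

```python
def encode_embedded(x, lo=-1.0, hi=1.0, n_bits=8):
    """Successive-approximation coder: one refinement bit per stage."""
    bits = []
    for _ in range(n_bits):
        mid = 0.5 * (lo + hi)
        bit = int(x >= mid)
        bits.append(bit)
        lo, hi = (mid, hi) if bit else (lo, mid)
    return bits

def decode_prefix(bits, lo=-1.0, hi=1.0):
    """Decode any prefix of the embedded bit stream; more bits give a finer value."""
    for bit in bits:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bit else (lo, mid)
    return 0.5 * (lo + hi)

code = encode_embedded(0.337)
for r in (2, 4, 8):                  # low, medium and full resolution from one stream
    print(r, decode_prefix(code[:r]))
```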

    Vector Quantization Video Encoder Using Hierarchical Cache Memory Scheme

    A system compresses image blocks via successive hierarchical stages and motion encoders which employ caches updated by stack replacement algorithms. Initially, a background detector compares the present image block with a corresponding previously encoded image block and, if they are similar, terminates the encoding procedure by setting a flag bit. Otherwise, the image block is decomposed into smaller present image subblocks. The smaller present image subblocks are each compared with a corresponding previously encoded image subblock of comparable size within the present image block. When a present image subblock is similar to a corresponding previously encoded image subblock, the procedure is terminated by setting a flag bit. Alternatively, the present image subblock is forwarded to a motion encoder, where it is compared with displaced image subblocks, which are formed by displacing previously encoded image subblocks by motion vectors that are stored in a cache, to derive a first distortion vector. When the first distortion vector is below a first threshold T_M, the procedure is terminated and the present image subblock is encoded by setting a flag bit and a cache index corresponding to the first distortion vector. Alternatively, the present image subblock is passed to a block matching encoder, where it is compared with other previously encoded image subblocks to derive a second distortion vector. When the second distortion vector is below a second threshold T_m, the procedure is terminated by setting a flag bit, by generating the second distortion vector, and by updating the cache. (Georgia Tech Research Corporation)
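
    The encoder described above is a cascade of increasingly expensive tests, any of which can terminate coding of a block early. The sketch below paraphrases that control flow for a single subblock; the block size, threshold names, mean-squared-error distortion, search window, and move-to-front cache update are all placeholder assumptions rather than details taken from the patented design.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def encode_subblock(cur, prev_frame, pos, mv_cache, T_bg, T_M, T_m):
    """Decision cascade for one subblock (illustrative, not the patented encoder)."""
    r, c = pos
    B = cur.shape[0]
    H, W = prev_frame.shape

    # Stage 1: "background" test against the co-located previously encoded block.
    if mse(cur, prev_frame[r:r + B, c:c + B]) < T_bg:
        return ("skip",)

    # Stage 2: try the motion vectors currently stored in the cache.
    for i, (dr, dc) in enumerate(mv_cache):
        rr, cc = r + dr, c + dc
        if 0 <= rr <= H - B and 0 <= cc <= W - B:
            if mse(cur, prev_frame[rr:rr + B, cc:cc + B]) < T_M:
                mv_cache.insert(0, mv_cache.pop(i))    # move-to-front cache update
                return ("cache_hit", i)

    # Stage 3: full block-matching search over a small window.
    best = min(
        (((dr, dc), mse(cur, prev_frame[r + dr:r + dr + B, c + dc:c + dc + B]))
         for dr in range(-4, 5) for dc in range(-4, 5)
         if 0 <= r + dr <= H - B and 0 <= c + dc <= W - B),
        key=lambda t: t[1],
    )
    if best[1] < T_m:
        mv_cache.insert(0, best[0])                    # new vector enters the cache
        if len(mv_cache) > 8:
            mv_cache.pop()                             # evict the oldest entry
        return ("motion", best[0])

    # Stage 4: fall back to coding the subblock itself (e.g. by VQ).
    return ("intra",)

# Example: identical frames, so the background test fires immediately.
rng = np.random.default_rng(3)
frame0 = rng.normal(size=(64, 64))
print(encode_subblock(frame0[8:24, 8:24], frame0, (8, 8), [(0, 1), (1, 0)], 0.01, 0.5, 0.5))
```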

    Separable Karhunen Loeve transforms for the weighted universal transform coding algorithm

    The weighted universal transform code (WUTC) is a two-stage transform code that replaces JPEG's single, non-optimal transform code with a jointly designed collection of transform codes to achieve good performance across a broader class of possible sources. Unfortunately, the performance gains of the WUTC are achieved at the expense of significant increases in computational complexity and code size. We here present a faster, more space-efficient WUTC algorithm. The new algorithm uses separable coding instead of a direct KLT. While separable coding gives performance comparable to that of the WUTC, it uses only 1/8 of the floating-point multiplications and 1/32 of the storage of the direct KLT. Experimental results included in this work compare the new separable WUTC with both the original WUTC and other fast variations of that algorithm.
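
    Separable coding replaces one large transform on vectorized N x N blocks (an N^2 x N^2 matrix) with a small row transform followed by a small column transform (two N x N matrices), which is the source of the savings in multiplications and storage. The sketch below contrasts the two operations on a single 8 x 8 block; the matrices here are arbitrary orthonormal stand-ins, not the jointly designed KLTs of the WUTC.

```python
import numpy as np

N = 8
rng = np.random.default_rng(4)

# Stand-in orthonormal transforms (in the WUTC these would be trained KLTs).
full_T, _ = np.linalg.qr(rng.normal(size=(N * N, N * N)))  # direct transform on vectorized blocks
row_T, _ = np.linalg.qr(rng.normal(size=(N, N)))           # separable: one matrix for the rows...
col_T, _ = np.linalg.qr(rng.normal(size=(N, N)))           # ...and one for the columns

block = rng.normal(size=(N, N))

# Direct transform: N^2 x N^2 matrix on the flattened block (N^4 multiplies, N^4 stored entries).
direct_coeffs = full_T @ block.reshape(-1)

# Separable transform: rows then columns (2*N^3 multiplies, 2*N^2 stored entries).
separable_coeffs = row_T @ block @ col_T.T

print(direct_coeffs.shape, separable_coeffs.shape)          # (64,) and (8, 8)
```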