    Locally adaptive vector quantization: Data compression with feature preservation

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. The algorithm provides high-speed, one-pass compression, adapts fully to any data source, and requires no a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. On various images under irreversible (lossy) coding, LAVQ performs comparably to the Linde-Buzo-Gray algorithm but at much higher speed, giving it potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. As a lossless data compression algorithm, LAVQ performs comparably to Lempel-Ziv-based algorithms while using far less memory during the coding process.
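    As a rough illustration of the one-pass, adaptive behavior described above, the following Python sketch encodes vector blocks against a codebook built on the fly: a block that matches an existing codeword well is sent as an index, while a poor match is sent verbatim and inserted into the codebook. The move-to-front update, eviction rule, threshold, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_vq_encode(blocks, codebook_size=256, dist_thresh=100.0):
    """One-pass adaptive VQ sketch (hypothetical, not the LAVQ spec):
    encode each block as the index of its nearest codeword; on a poor
    match, emit the block verbatim and add it to the codebook, so the
    codebook adapts to the source with no prior training pass."""
    codebook = []   # codewords, most recently used at the end
    output = []     # stream of ('idx', i) or ('raw', block) symbols
    for block in blocks:
        if codebook:
            dists = [float(np.sum((block - c) ** 2)) for c in codebook]
            i = int(np.argmin(dists))
        if codebook and dists[i] <= dist_thresh:
            output.append(('idx', i))
            codebook.append(codebook.pop(i))  # move-to-front style refresh
        else:
            output.append(('raw', block.copy()))
            if len(codebook) >= codebook_size:
                codebook.pop(0)               # evict the stalest codeword
            codebook.append(block.copy())
    return output
```

    Because every codebook update is driven by symbols that appear in the output stream, a decoder can mirror the same updates, so no codebook needs to be transmitted or trained in advance.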

    Study and simulation of low rate video coding schemes

    This semiannual report covers communication and information science, including data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token-ring networks, and joint source/channel coder design.

    An Efficient Coding Method for Teleconferencing Video and Confocal Microscopic Image Sequences

    In this paper we propose a video coding scheme based on three-dimensional vector quantization. The algorithm uses a 3D pyramidal-codebook model with an adaptive codebook for compression. The pyramidal codebook yields high compression when motion is modest, while the adaptive vector quantization algorithm trains the codebook over time for optimal performance. Distinguishing features of the algorithm are its strong performance, owing to its adaptation to the video content, and its high compression, owing to the codebook approach. We also propose an efficient codebook-based post-processing technique that gives the vector quantizer a stronger correlation-preservation property. Exploiting the special pattern this post-processing imposes on the codebook, a window-based fast search (WBFS) algorithm is proposed; WBFS not only accelerates vector quantization but also improves rate-distortion performance. The approach applies both to teleconferencing video and to compressing image sequences from confocal laser scanning microscopy (CLSM). The results show that the proposed method yields higher subjective and objective quality of reconstructed images at better compression ratios, and behaves more acceptably when image-processing filters such as edge detection are applied to the reconstructed images. The experimental results demonstrate that the proposed method outperforms the H.261 teleconferencing standard and the LBG-based vector quantization technique.
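    The window-based fast search idea can be sketched as follows, under the assumption (illustrative, standing in for the paper's post-processing pattern) that codewords are sorted by mean intensity, so the best match for an input vector is likely to lie near the codeword whose mean matches the input's. All names and parameters here are hypothetical.

```python
import numpy as np

def wbfs_encode(vectors, codebook, window=8):
    """Window-based fast search sketch: instead of scanning the whole
    codebook for each input vector, search only a small window of
    candidates around the codeword whose mean matches the input's mean."""
    means = codebook.mean(axis=1)
    order = np.argsort(means)                  # sort codewords by mean
    sorted_cb, sorted_means = codebook[order], means[order]
    indices = []
    for v in vectors:
        center = int(np.searchsorted(sorted_means, v.mean()))
        lo = max(0, center - window)
        hi = min(len(sorted_cb), center + window)
        dists = np.sum((sorted_cb[lo:hi] - v) ** 2, axis=1)
        indices.append(int(order[lo + int(np.argmin(dists))]))
    return indices
```

    Restricting the scan to roughly 2*window candidates makes each lookup cost proportional to the window size rather than the full codebook size, which is where the reported speedup would come from.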

    On Multiple Description Coding of Sources with Memory

    Progressive Lossless Image Compression Using Image Decomposition and Context Quantization

    Lossless image compression has many applications, for example in medical imaging, space photography, and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary and gray-scale images. The scheme first decomposes an image into a set of progressively refined binary sequences and then encodes these sequences with a context-based, adaptive arithmetic coding algorithm. To deal with the context-dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts: fixing the set of input contexts and the number of quantized contexts, the algorithm iteratively finds the context mapping that minimizes the compression rate. Experimental results show that by combining image decomposition with context quantization, our scheme achieves lossless compression competitive with the JBIG algorithm on binary images and the CALIC algorithm on gray-scale images. In contrast to CALIC, our scheme additionally allows progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
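    The Lloyd-like context quantization can be illustrated with a short sketch: a centroid step merges the bit counts within each quantized context, and a reassignment step moves each raw context to the group whose merged distribution gives it the shortest empirical code length. The initialization and smoothing below are assumptions for illustration, not the thesis's exact algorithm.

```python
import numpy as np

def quantize_contexts(counts, m, iters=20):
    """Lloyd-like context quantization sketch: counts[c] = (n0, n1) are
    the 0/1 occurrence counts under raw context c. Raw contexts are
    iteratively mapped to m quantized contexts so as to reduce the total
    empirical code length, mitigating context dilution."""
    counts = np.asarray(counts, dtype=float)
    K = len(counts)
    assign = np.arange(K) % m                  # arbitrary initial mapping
    for _ in range(iters):
        # centroid step: merged bit distribution of each quantized context
        group = np.zeros((m, 2))
        for c in range(K):
            group[assign[c]] += counts[c]
        p1 = (group[:, 1] + 0.5) / (group.sum(axis=1) + 1.0)  # Laplace smoothing
        # reassignment step: each context moves to the group whose model
        # costs it the fewest bits: -(n0*log2(1-p1) + n1*log2(p1))
        cost = -(np.outer(counts[:, 0], np.log2(1 - p1)) +
                 np.outer(counts[:, 1], np.log2(p1)))
        assign = np.argmin(cost, axis=1)
    return assign
```

    Each iteration can only decrease (or hold) the total code length, so the mapping converges to a local optimum, in the same way Lloyd's algorithm does for scalar quantizer design.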