
    Vector Quantization by Packing of Embedded Truncated Lattices

    The purpose of this paper is to introduce a new vector quantizer (VQ) for the compression of digital image sequences. Our approach unifies two efficient coding methods: fast lattice encoding and unbalanced tree-structured codebook design driven by a distortion vs. rate tradeoff. This tree-structured lattice VQ (TSLVQ) is based on the hierarchical packing of embedded truncated lattices, so we investigate the design of the hierarchical set of truncated lattice structures that can be optimally embedded. We present the simple quantization procedure and describe the corresponding tree-structured codebook. Finally, two unbalanced tree-structured codebook design algorithms based on the BFOS distortion vs. rate criterion are used.
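    The abstract is terse about why lattice encoding is fast, so here is a minimal sketch, assuming a truncated cubic lattice refined by powers of two; the paper works with general lattices and a BFOS-pruned tree, neither reproduced here, and `lattice_quantize` and its parameters are illustrative names, not from the paper.

```python
import numpy as np

def lattice_quantize(x, level, side=1.0):
    """Quantize a vector onto the cubic lattice of step side / 2**level,
    truncated to the hypercube [0, side)^n. Rounding replaces a full
    nearest-neighbour codebook search, which is what makes lattice
    encoding fast."""
    step = side / (2 ** level)                 # cells shrink with tree depth
    idx = np.floor(np.asarray(x, float) / step).astype(int)
    idx = np.clip(idx, 0, 2 ** level - 1)      # truncate the lattice
    return idx, (idx + 0.5) * step             # cell index and reproduction

# Successive levels give embedded, progressively finer reproductions.
x = np.array([0.31, 0.74])
for level in (1, 2, 3):
    print(level, *lattice_quantize(x, level))
```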

    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
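    A minimal sketch of the embedded-description behavior, using a multistage (residual) VQ as a stand-in; the paper's actual fixed- and variable-rate design algorithms are not reproduced, and all names below are illustrative.

```python
import numpy as np

def multistage_decode(indices, codebooks, stages=None):
    """Decode a prefix of a multistage (residual) VQ description.

    indices[k] selects a codeword from codebooks[k]; summing the first
    `stages` codewords gives a reproduction whose fidelity grows with
    the length of the decoded prefix -- the embedded-bitstream
    behaviour described above."""
    stages = len(indices) if stages is None else stages
    return sum(codebooks[k][indices[k]] for k in range(stages))

# Two toy stages: coarse codewords plus finer refinements.
cb0 = np.array([[0.0, 0.0], [1.0, 1.0]])
cb1 = np.array([[0.1, -0.1], [-0.1, 0.1]])
desc = [1, 0]                                          # embedded description
print(multistage_decode(desc, [cb0, cb1], stages=1))   # low resolution
print(multistage_decode(desc, [cb0, cb1], stages=2))   # refined
```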

    Combining nonlinear multiresolution system and vector quantization for still image compression

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized in the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
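    A minimal sketch of the median-filter Laplacian pyramid, assuming a 3x3 median and dyadic decimation; the paper's filter sizes, the PCVQ coder, and the quantization-error feedback loop are not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter, zoom

def median_pyramid(img, levels=3, size=3):
    """Build a Laplacian-style pyramid whose lowpass step is a median
    filter, so edges in the coarse images stay sharp. Returns the
    detail images (small, localized at edges) and the coarse base."""
    details, cur = [], np.asarray(img, float)
    for _ in range(levels):
        low = median_filter(cur, size=size)[::2, ::2]    # filter + decimate
        up = zoom(low, 2, order=1)[:cur.shape[0], :cur.shape[1]]
        details.append(cur - up)                         # detail image
        cur = low
    return details, cur

img = np.random.rand(32, 32)
details, base = median_pyramid(img)
print([d.shape for d in details], base.shape)
```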

    Geometry Compression of 3D Static Point Clouds based on TSPLVQ

    In this paper, we address the challenging problem of 3D point cloud compression, required to ensure efficient transmission and storage. We introduce a new hierarchical geometry representation based on adaptive Tree-Structured Point-Lattice Vector Quantization (TSPLVQ). This representation enables hierarchically structured 3D content that improves compression performance for static point clouds. The novelty of the proposed scheme lies in the adaptive selection of the optimal quantization scheme for the geometric information, which better leverages the intrinsic correlations in the point cloud. Based on its adaptive and multiscale structure, two quantization schemes are used to recursively project the 3D point cloud onto a series of embedded truncated cubic lattices. At each step of the process, the optimal quantization scheme is selected according to a rate-distortion cost in order to achieve the best trade-off between coding rate and geometry distortion, so that compression flexibility and performance are greatly improved. Experimental results show the potential of the proposed multiscale method for lossy compression of geometry.
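    A minimal sketch of rate-distortion-driven recursive quantization into embedded truncated cubic lattices, assuming a simple split/stop test with cost J = D + lambda * R; the paper instead selects between two quantization schemes at each step, and all names and the split-cost proxy below are illustrative.

```python
import numpy as np

def rd_subdivide(points, origin, size, lam, max_depth, depth=0):
    """Recursively project points into embedded truncated cubic lattices,
    refining a cell only when the rate-distortion cost favours it. The
    split test (distortion vs. the price of 8 child cells) is a crude
    stand-in for the paper's per-step scheme selection."""
    centre = origin + size / 2.0
    dist = float(np.sum((points - centre) ** 2))   # distortion of stopping here
    if depth == max_depth or dist <= lam * 8:      # J_stop <= J_split proxy
        return [(centre, len(points))]
    cells = []
    for child in range(8):                          # 2 x 2 x 2 refinement
        off = np.array([(child >> k) & 1 for k in range(3)]) * (size / 2.0)
        lo = origin + off
        mask = np.all((points >= lo) & (points < lo + size / 2.0), axis=1)
        if mask.any():
            cells += rd_subdivide(points[mask], lo, size / 2.0,
                                  lam, max_depth, depth + 1)
    return cells

pts = np.random.rand(500, 3)                        # toy static point cloud
print(len(rd_subdivide(pts, np.zeros(3), 1.0, lam=1e-3, max_depth=5)))
```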

    New Directions in Subband Coding

    Two very different subband coders are described. The first is a modified dynamic bit-allocation subband coder (D-SBC) designed for variable-rate coding situations and easily adaptable to noisy channel environments. It can operate at rates as low as 12 kb/s and still give good quality speech. The second coder is a 16-kb/s waveform coder, based on a combination of subband coding and vector quantization (VQ-SBC). The key feature of this coder is its short coding delay, which makes it suitable for real-time communication networks. The speech quality of both coders has been enhanced by adaptive postfiltering. The coders have been implemented on a single AT&T DSP32 signal processor.
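    A minimal sketch of the subband analysis/synthesis step underlying both coders, assuming the 2-tap Haar filter pair and two bands; the D-SBC and VQ-SBC coders use longer filter banks, more bands, and bit allocation or VQ on the subband signals, none of which is reproduced here.

```python
import numpy as np

def subband_split(x):
    """Two-band analysis with the 2-tap Haar filter pair, decimated by 2.
    The low band keeps most of the speech energy and can be given more
    bits; the high band carries the detail."""
    pairs = np.asarray(x, float)[: len(x) // 2 * 2].reshape(-1, 2)
    low = pairs.sum(axis=1) / np.sqrt(2)
    high = (pairs[:, 1] - pairs[:, 0]) / np.sqrt(2)
    return low, high

def subband_merge(low, high):
    """Perfect-reconstruction synthesis for subband_split."""
    even = (low - high) / np.sqrt(2)   # recovers x[0], x[2], ...
    odd = (low + high) / np.sqrt(2)    # recovers x[1], x[3], ...
    return np.column_stack([even, odd]).ravel()

x = np.sin(np.linspace(0.0, 8.0, 16))
low, high = subband_split(x)
print(np.allclose(subband_merge(low, high), x))     # True
```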

    Some new developments in image compression

    This study is divided into two parts. The first part involves an investigation of near-lossless compression of digitized images using the entropy-coded DPCM method with a large number of quantization levels. Through the investigation, a new scheme that combines both lossy and lossless DPCM methods into a common framework is developed. This new scheme uses known results on the design of predictors and quantizers that incorporate properties of human visual perception. In order to enhance the compression performance of the scheme, an adaptively generated source model with multiple contexts is employed for the coding of the quantized prediction errors, rather than a memoryless model as in the conventional DPCM method. Experiments show that the scheme can provide compression ratios in the range of 4 to 11 with a peak SNR of about 50 dB for 8-bit medical images. Also, the use of multiple contexts is found to improve compression performance by about 25% to 35%. The second part of the study is devoted to the problem of lossy image compression using tree-structured vector quantization. As a result of the study, a new design method for codebook generation is developed together with four different implementation algorithms. In the new method, an unbalanced tree-structured vector codebook is designed in a greedy fashion under the constraint of a rate-distortion trade-off, which can then be used to implement a variable-rate compression system. From experiments, it is found that the new method can achieve very good rate-distortion performance while being computationally efficient. Also, due to the tree structure of the codebook, the new method is amenable to progressive transmission applications.
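    A minimal sketch of the DPCM core described in the first part, assuming a first-order predictor and a uniform quantizer; the study's perceptually tuned predictors and quantizers and its multi-context error model are not reproduced, and the names below are illustrative.

```python
import numpy as np

def dpcm_encode(x, step):
    """Predict each sample from the previous reconstruction, uniformly
    quantize the prediction error, and feed the quantized error back
    into the prediction loop so encoder and decoder stay in sync. The
    integer symbols are what an entropy coder would then compress."""
    x = np.asarray(x, float)
    recon, pred, symbols = np.empty_like(x), 0.0, []
    for i, s in enumerate(x):
        e = s - pred                           # prediction error
        q = int(np.round(e / step))            # quantizer index
        symbols.append(q)
        recon[i] = pred + q * step             # decoder-matched reconstruction
        pred = recon[i]                        # first-order prediction
    return symbols, recon

x = np.cumsum(np.random.randn(10))
symbols, recon = dpcm_encode(x, step=0.5)
print(np.max(np.abs(x - recon)) <= 0.25 + 1e-12)    # error bounded by step/2
```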

    Systems aspects of COBE science data compression

    A general approach to the compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software testing. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.