
    Locally adaptive vector quantization: Data compression with feature preservation

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. The algorithm provides high-speed, one-pass compression, adapts fully to any data source, and requires no a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. On various images using irreversible (lossy) coding, LAVQ's performance is comparable to that of the Linde-Buzo-Gray algorithm but at much higher speed, giving the algorithm potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. As a lossless data compression algorithm, LAVQ performs comparably to Lempel-Ziv-based algorithms while using far less memory during the coding process.
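    The one-pass adaptive loop this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the codebook size, learning rate, and nearest-neighbour update rule are all assumptions.

```python
import numpy as np

def lavq_encode(vectors, codebook_size=8, dim=4, lr=0.25, seed=0):
    """One-pass adaptive VQ sketch: quantize each input vector with the
    current codebook, then nudge the winning codeword toward the input
    so the codebook tracks the source with no prior statistics.
    Codebook size, learning rate, and update rule are illustrative."""
    rng = np.random.default_rng(seed)
    codebook = rng.standard_normal((codebook_size, dim))
    indices = []
    for v in vectors:
        dist = ((codebook - v) ** 2).sum(axis=1)  # distortion per codeword
        i = int(np.argmin(dist))
        indices.append(i)
        codebook[i] += lr * (v - codebook[i])     # local adaptation step
    return indices, codebook
```

    Because the decoder sees every transmitted index, it can apply the identical update to its own codebook copy, so no codebook side information needs to be sent; this is one way a single-pass scheme can remain adaptive.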

    Real-time video compression using DVQ and suffix trees

    Video processing is a wide and varied subject area, and video compression is an important but difficult problem within it. Several methods and standards address this problem with varying degrees of success, depending on the performance measures adopted; the present research focuses on the real-time aspect of video processing. In particular, we propose a real-time video compression algorithm based on differential vector quantization (DVQ) and the suffix tree. DVQ is a relatively new approach focused on efficient compression of data. The present work integrates the compression provided by DVQ with the speed achieved by the suffix tree data structure to develop a new real-time video compression scheme. Traditionally, suffix trees are used for string searching; here we exploit the unique structure of the suffix tree to represent image data on a tree as a DVQ dictionary. To support the special characteristics of natural images and video, the traditional suffix tree is extended to handle k errors in the matching. The result is an orders-of-magnitude speedup in the matching process, making it possible to compress video in real time without any special hardware. Experimental results demonstrate the performance of the proposed methodology.
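    The differential-quantization step can be sketched as below, under stated assumptions: a simple row-major block layout, and a linear codebook scan standing in for the paper's k-error suffix-tree dictionary search. All names are illustrative.

```python
import numpy as np

def dvq_encode_frame(frame, prev_recon, codebook):
    """Differential VQ sketch: vector-quantize the frame-to-frame
    difference block by block. The linear codebook scan below stands in
    for the paper's k-error suffix-tree dictionary search; block layout
    and names are illustrative."""
    diff = frame - prev_recon                     # temporal difference
    blocks = diff.reshape(-1, codebook.shape[1])  # flatten into vectors
    idx = np.array([int(np.argmin(((codebook - b) ** 2).sum(axis=1)))
                    for b in blocks])
    recon = prev_recon + codebook[idx].reshape(frame.shape)
    return idx, recon                             # indices + new reference
```

    The reconstructed frame is fed back as the next frame's prediction, so encoder and decoder stay synchronized from the transmitted indices alone.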

    Real-time demonstration hardware for enhanced DPCM video compression algorithm

    The lack of available wideband digital links, as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoders), has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, now recognize the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA similarly recognizes the benefits of, and trend toward, digital video compression techniques for transmission of high-quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM) but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast-quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results.
    Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high-quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct to the home via direct broadcast satellite systems, or cable television distribution to system headends and direct to the home).
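    The DPCM core of such a codec can be sketched in a few lines. The non-uniform level table below is a made-up illustration (fine steps near zero, coarse steps for large errors), not the hardware's actual quantizer, and a previous-sample predictor stands in for the paper's fixed predictor.

```python
import numpy as np

# Illustrative non-uniform reconstruction levels: fine steps near zero,
# coarse steps for large prediction errors (not the paper's actual table).
LEVELS = np.array([-48.0, -16.0, -4.0, 0.0, 4.0, 16.0, 48.0])

def dpcm_codec(row):
    """1-D DPCM sketch: fixed previous-sample predictor plus a
    non-uniform quantizer. The encoder tracks the decoder's
    reconstruction so both stay in lockstep."""
    pred = 128.0                                 # fixed initial prediction
    codes, recon = [], []
    for x in row:
        e = x - pred                             # prediction error
        q = int(np.argmin(np.abs(LEVELS - e)))   # nearest quantizer level
        codes.append(q)
        pred = float(np.clip(pred + LEVELS[q], 0.0, 255.0))
        recon.append(pred)                       # decoder-side value
    return codes, recon
```

    In the full scheme the level indices would then be fed to the multilevel Huffman coder, which is what pushes the average rate toward the quoted 1.8 bits/pixel.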

    Quadtree partitioning scheme of color image based

    Image segmentation is an essential complementary process in digital image processing and computer vision, yet it mostly relies on simple techniques such as fixed partitioning schemes and global thresholding, chosen for their simplicity and popularity despite their inefficiency. This paper introduces a new split-merge segmentation process for a quadtree scheme on colour images, based on exploiting the spatial information embedded within each band and the spectral information between bands. The results show that this technique is efficient in terms of segmentation quality and time, and that it can serve in standard techniques as an alternative to a fixed partitioning scheme.
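    The split phase of a quadtree split-merge scheme can be sketched as follows. A single intensity-variance threshold stands in for the paper's spatial/spectral homogeneity criteria, and the merge phase (joining similar neighbouring leaves) is omitted; thresholds and names are assumptions.

```python
import numpy as np

def quadtree_split(img, thresh=100.0, min_size=2):
    """Split phase of a split-merge sketch: recursively divide a square
    image while a block's intensity variance exceeds `thresh`. A single
    variance test stands in for the paper's spatial/spectral criteria;
    the merge phase (joining similar neighbours) is omitted."""
    leaves = []
    def split(r, c, s):
        block = img[r:r + s, c:c + s]
        if s <= min_size or block.var() <= thresh:
            leaves.append((r, c, s))              # homogeneous leaf
            return
        h = s // 2
        for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
            split(r + dr, c + dc, h)              # recurse into quadrants
    split(0, 0, img.shape[0])
    return leaves
```

    A uniform image stays one leaf, while a block containing an edge keeps splitting until its quadrants are homogeneous, which is exactly the adaptivity a fixed partitioning scheme lacks.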

    Techniques for lossless image compression

    Popular lossless image compression techniques used today belong to the Lempel-Ziv family of encoders. These techniques are generic in nature and do not take full advantage of the two-dimensional correlation of digital image data. They process a one-dimensional stream of data, replacing repetitions with smaller codes. Techniques for Lossless Image Compression introduces a new model for lossless image compression that consists of two stages: transformation and encoding. Transformation takes advantage of the correlative properties of the data, modifying it in order to maximize the use of encoding techniques. Encoding can be described as replacing data symbols that occur frequently or in repeated groups with codes that are represented in a smaller number of bits. Techniques presented in this thesis include descriptions of Lempel-Ziv encoders in use today as well as several new techniques involving the model of transformation and encoding mentioned previously. Example compression ratios achieved by each technique when applied to a sample set of gray-scale cardiac images are provided for comparison.
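    The two-stage transform-then-encode model can be sketched concretely. A modulo-256 previous-pixel difference is one simple decorrelating transform (lossless for 8-bit pixels), and zlib stands in here for the LZ-family encoder stage; the thesis's actual transforms and encoders differ.

```python
import zlib
import numpy as np

def compress_image(img):
    """Two-stage sketch of the transform-then-encode model: a modulo-256
    previous-pixel difference concentrates values near zero, then a
    generic LZ-family encoder (zlib stands in here) removes repetitions."""
    flat = img.astype(np.int16).ravel()
    residual = np.diff(flat, prepend=0) & 0xFF    # lossless for 8-bit pixels
    return zlib.compress(residual.astype(np.uint8).tobytes())

def decompress_image(blob, shape):
    """Invert both stages: decode, then integrate the differences mod 256."""
    residual = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
    return residual.cumsum(dtype=np.uint8).reshape(shape)
```

    On smooth imagery the residuals cluster near zero, so the generic encoder sees long repetitive runs it could not find in the raw two-dimensional data.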

    Vector quantization

    During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state of the art and current research efforts.
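    The standard design tool sketched in such surveys is the generalized Lloyd (LBG) iteration: partition the training set by nearest codeword, then move each codeword to its cell's centroid. A minimal version, with illustrative parameters:

```python
import numpy as np

def design_codebook(train, k=4, iters=10, seed=0):
    """Generalized Lloyd (LBG-style) sketch: alternate nearest-codeword
    partitioning with centroid updates to design a VQ codebook from
    training vectors. Parameters are illustrative defaults."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dist = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
        assign = dist.argmin(axis=1)             # nearest-neighbour partition
        for j in range(k):
            members = train[assign == j]
            if len(members):                     # keep empty cells unchanged
                codebook[j] = members.mean(axis=0)
    return codebook
```

    Each iteration can only lower the average distortion, which is why the procedure converges to a locally optimal codebook.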

    Design of a digital voice data compression technique for orbiter voice channels

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps, with good voice quality, speaker recognition, and robustness in the presence of error bursts as the design considerations. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape; the two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or only minimally degraded at Viterbi decoder bit error rates of 0.001 and 0.01. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.

    High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
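    The residual-quantization structure underlying the method can be sketched as below. This is a distortion-only nearest-neighbour search; the paper's entropy-constrained design additionally weighs a rate (entropy) term in each stage's selection, and the codebooks here are illustrative.

```python
import numpy as np

def rvq_quantize(v, stage_codebooks):
    """Residual VQ sketch: each stage quantizes what the previous stage
    left behind, so several small codebooks approximate one huge one.
    The paper's entropy-constrained design adds a rate (entropy) term
    to this distortion-only nearest-neighbour search."""
    residual = np.asarray(v, dtype=float)
    indices = []
    for cb in stage_codebooks:
        i = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        indices.append(i)
        residual = residual - cb[i]              # pass the leftover onward
    return indices, residual
```

    Stacking stages keeps the per-stage codebook search cheap while the product of stage sizes gives the effective resolution, which is what makes high-order structure affordable.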