
    Interleaved S+P Pyramidal Decomposition with Refined Prediction Model

    Scalability and other functionalities, such as Region of Interest encoding, have become essential properties of an efficient image coding scheme. Within the framework of lossless compression techniques, S+P and CALIC represent the state of the art. The proposed Interleaved S+P algorithm outperforms these methods while providing the desired properties. Based on the LAR (Locally Adaptive Resolution) method, an original pyramidal decomposition combined with a DPCM scheme is elaborated. This solution uses the S-transform in such a manner that a refined prediction context is available at each estimation step. The image coding is done in two main steps: the first supplies a LAR low-resolution image of good visual quality, and the second allows a lossless reconstruction. The method exploits implicit context modelling, an intrinsic property of our content-based, quad-tree-like representation.
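    The S-transform underlying S+P-style pyramids is an invertible integer pair transform, which is what makes a lossless pyramidal decomposition possible. A minimal sketch of that transform (function names are illustrative, not taken from the paper):

    ```python
    def s_transform_pair(a, b):
        # Forward integer S-transform of a pixel pair:
        # a rounded mean (low-pass) and an exact difference (high-pass).
        s = (a + b) // 2   # floor division keeps everything in integers
        d = a - b
        return s, d

    def inverse_s_transform_pair(s, d):
        # Exact inverse: because s = b + floor(d / 2), the original pair
        # is recovered with no rounding loss.
        b = s - (d // 2)
        a = b + d
        return a, b
    ```

    Applying the forward transform to pixel pairs along rows, then to the resulting means along columns, yields one level of a lossless multiresolution pyramid; the low-pass band feeds the next level.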

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies ways to represent image data as compactly as possible while still allowing the image to be reproduced without any loss. One of the most efficient strategies in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is applied simply and reversibly to continuous-tone images to extract differences between neighboring pixels, and its implementation introduces no data expansion. Traditional as well as innovative prediction methods are included to create inputs for the exclusive-or-based decorrelation filter. The results of the filter are then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch dictionary coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can include multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or-based decorrelation filter, combined with the modified Lempel-Ziv-Welch dictionary coder, provides compression comparable to algorithms that represent the current standard in lossless compression.
    The proposed algorithm's compression performance is 23% below that of the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm, 19% below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm, and 7% below the Portable Network Graphics implementation of the Deflate algorithm, but 24% above the Zip implementation of the Deflate algorithm. The algorithm thus combines exclusive-or preprocessing in the modeling phase with modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
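    The key properties claimed for the XOR filter, reversibility and no data expansion, are easy to see in a minimal sketch. This version XORs each pixel with its left neighbor (a simpler predictor than those in the study; the function names are illustrative):

    ```python
    import numpy as np

    def xor_decorrelate(img):
        # XOR each pixel with its left neighbor; the first column is kept as-is.
        # Output stays 8-bit, so there is no data expansion.
        out = img.copy()
        out[:, 1:] = img[:, 1:] ^ img[:, :-1]
        return out

    def xor_recorrelate(filtered):
        # Inverse filter: a cumulative XOR along each row restores the image,
        # since x ^ y ^ y == x for any bit patterns x and y.
        out = filtered.copy()
        for j in range(1, out.shape[1]):
            out[:, j] ^= out[:, j - 1]
        return out
    ```

    On smooth image regions, neighboring pixels share high-order bits, so the XOR output is dominated by small values and short repeated patterns, which is exactly what a dictionary coder exploits.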

    A separate least squares algorithm for efficient arithmetic coding in lossless image compression

    The overall performance of discrete wavelet transforms for lossless image compression may be further improved by properly designing efficient entropy coders. In this paper, a novel technique is proposed for the implementation of context-based adaptive arithmetic entropy coding. It is based on the prediction of the value of the current transform coefficient. The proposed algorithm employs a weighted least squares method, applied separately to the HH, HL, and LH bands of each level of the multiresolution structure, in order to achieve appropriate context selection for arithmetic coding. Experimental results illustrate and evaluate the performance of the proposed technique for lossless image compression.
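    The core step, predicting a coefficient by weighted least squares over causal neighbors, can be sketched as follows. This is a generic WLS fit, not the paper's specific neighbor set or weighting; all names are illustrative:

    ```python
    import numpy as np

    def wls_predict(neighbors, targets, weights, query):
        # Weighted least squares: minimize sum_i w_i * (t_i - n_i . beta)^2.
        # Scaling rows and targets by sqrt(w_i) reduces this to ordinary
        # least squares, solved here with numpy's lstsq.
        w = np.sqrt(weights)
        beta, *_ = np.linalg.lstsq(neighbors * w[:, None], targets * w, rcond=None)
        # Predict the current coefficient from its own neighbor vector.
        return query @ beta
    ```

    One such predictor per subband (HH, HL, LH) and per pyramid level lets each fit adapt to that band's statistics, and the prediction residual or its magnitude then drives the context selection for the arithmetic coder.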

    Improved Context-Based Adaptive Binary Arithmetic Coding over H.264/AVC for Lossless Depth Map Coding

    The depth map, which represents three-dimensional (3D) information, is used to synthesize virtual views in the depth image-based rendering (DIBR) method. Since the quality of synthesized virtual views highly depends on the quality of the depth map, we encode the depth map in lossless coding mode. Context-based adaptive binary arithmetic coding (CABAC), originally designed for lossy texture coding, cannot provide the best coding performance for lossless depth map coding, due to the statistical differences between residual data in lossy and in lossless depth map coding. In this letter, we propose an enhanced CABAC coding mechanism for lossless depth map coding based on the statistics of residual data. Experimental results show that the proposed CABAC method provides approximately 4% bit savings compared to the original CABAC in H.264/AVC.

    Statistical lossless compression of space imagery and general data in a reconfigurable architecture


    Progressive Lossless Image Compression Using Image Decomposition and Context Quantization

    Lossless image compression has many applications, for example in medical imaging, space photography, and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary images and gray-scale images. The scheme first decomposes images into a set of progressively refined binary sequences and then uses a context-based, adaptive arithmetic coding algorithm to encode these sequences. In order to deal with the context dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts. Fixing the set of input contexts and the number of quantized contexts, our context quantization algorithm iteratively finds the optimal context mapping in the sense of minimizing the compression rate. Experimental results show that, by combining image decomposition and context quantization, our scheme achieves lossless compression performance competitive with the JBIG algorithm for binary images and the CALIC algorithm for gray-scale images. In contrast to CALIC, our scheme provides the additional feature of allowing progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
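    The Lloyd-like context quantization idea can be sketched as follows: given per-context symbol counts for a binary source, repeatedly reassign each raw context to whichever quantized context minimizes the total ideal codelength, until no reassignment helps. This is a generic sketch of such an iteration, not the thesis's exact algorithm, and all names are illustrative:

    ```python
    import numpy as np

    def cluster_codelength(counts):
        # Ideal adaptive-arithmetic-coding cost, in bits, of pooling a set of
        # binary contexts: n * H(p), where p is the pooled probability of a one.
        n = counts.sum()
        if n == 0:
            return 0.0
        p = counts[1] / n
        if p in (0.0, 1.0):
            return 0.0
        return -n * (p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def quantize_contexts(context_counts, k, iters=20):
        # context_counts: shape (m, 2) array of [zeros, ones] per raw context.
        # Lloyd-like descent: sweep over raw contexts, moving each to the
        # quantized context that yields the lowest total codelength.
        m = len(context_counts)
        assign = np.arange(m) % k          # arbitrary initial mapping
        for _ in range(iters):
            changed = False
            for c in range(m):
                best, best_cost = assign[c], np.inf
                for q in range(k):
                    trial = assign.copy()
                    trial[c] = q
                    cost = sum(
                        cluster_codelength(context_counts[trial == j].sum(axis=0))
                        for j in range(k))
                    if cost < best_cost - 1e-9:
                        best, best_cost = q, cost
                if best != assign[c]:
                    assign[c] = best
                    changed = True
            if not changed:
                break                      # reached a local minimum
        return assign
    ```

    Each sweep can only decrease the total codelength, so the iteration converges to a local minimum; raw contexts with similar conditional statistics end up sharing one adaptive probability model, which counters context dilution.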