    Adaptive edge-based prediction for lossless image compression

    Many lossless image compression methods have been suggested, with established results that are hard to surpass. However, some aspects can still be considered to improve performance further. This research focuses on the two-phase prediction-encoding method, studying each phase separately and suggesting new techniques. In the prediction module, the proposed Edge-Based Predictor (EBP) and Least-Squares Edge-Based Predictor (LS-EBP) emphasize image edges and make predictions accordingly. EBP is a gradient-based nonlinear adaptive predictor that switches between prediction rules according to a few threshold parameters, determined automatically by a pre-analysis procedure that makes a first pass over the image. LS-EBP uses the same parameters but optimizes the prediction at each edge location assigned by the pre-analysis, thus applying the least-squares approach only at edge points. In the encoding module, a novel method inspired by the Burrows-Wheeler Transform (BWT) is suggested, which performs better than applying the BWT directly to the images. We also present a context-based adaptive error modeling and encoding scheme. Coupled with the above prediction schemes, the result is the best-known compression performance among compression schemes of the same time and space complexity.
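
    A minimal sketch of the kind of gradient-switched prediction described above, assuming a simple three-pixel causal template and an arbitrary illustrative threshold t; the paper's actual rule set and pre-analysis procedure are not reproduced here:

```python
import numpy as np

def edge_based_predict(img, t=8):
    """Toy gradient-switched predictor (illustrative; not the paper's EBP).

    Causal template around the current pixel x:
        c  a
        b  x
    with a = above, b = left, c = upper-left.
    """
    p = img.astype(np.int32)
    pred = p.copy()                      # first row/column: no causal context
    for i in range(1, p.shape[0]):
        for j in range(1, p.shape[1]):
            a, b, c = p[i - 1, j], p[i, j - 1], p[i - 1, j - 1]
            dh = abs(a - c)              # horizontal activity
            dv = abs(b - c)              # vertical activity
            if dv - dh > t:              # horizontal edge: continue the row
                pred[i, j] = b
            elif dh - dv > t:            # vertical edge: continue the column
                pred[i, j] = a
            else:                        # smooth region: planar estimate
                pred[i, j] = a + b - c
    return pred

# The residuals img - edge_based_predict(img) feed the encoding stage.
```

    In the paper, the thresholds come from the pre-analysis pass, and LS-EBP additionally refines the prediction by least squares at the flagged edge points; the fixed t above merely stands in for that machinery.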

    Lossless and Lossy Minimal Redundancy Pyramidal Decomposition for Scalable Image Compression Technique

    We present a new scalable compression technique dealing simultaneously with both lossy and lossless image coding. An original DPCM scheme with refined context is introduced through a pyramidal decomposition adapted to the LAR (Locally Adaptive Resolution) method, which thereby becomes fully progressive. An implicit context modeling of the prediction errors, arising from the low-resolution image representation and its variable block-size structure, is then exploited for lossless compression purposes.
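
    As a rough illustration of the progressive structure (a plain 2x2 mean pyramid stands in here for the LAR variable block-size decomposition, which this sketch does not reproduce):

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Mean pyramid; assumes both dimensions are divisible by 2**(levels-1)."""
    pyr = [img.astype(np.int32)]
    for _ in range(levels - 1):
        cur = pyr[-1]
        blocks = cur.reshape(cur.shape[0] // 2, 2, cur.shape[1] // 2, 2)
        pyr.append(blocks.mean(axis=(1, 3)).astype(np.int32))
    return pyr                        # pyr[0] full resolution, pyr[-1] coarsest

def dpcm_level_errors(pyr):
    """Predict each level from the next coarser one; the residuals are what
    a scalable coder would entropy-code, level by level."""
    streams = [pyr[-1]]               # the coarsest level is sent as-is
    for fine, coarse in zip(pyr[-2::-1], pyr[::-1]):
        up = np.kron(coarse, np.ones((2, 2), dtype=np.int32))  # pixel-replicate
        streams.append(fine - up)     # DPCM residual for this level
    return streams
```

    Decoding mirrors this: each residual plane is added back onto the upsampled previous level. Stopping after any level gives a lossy preview, while consuming every plane reconstructs the image exactly, which is the lossy-to-lossless progression the abstract describes.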

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible while still allowing the image to be reproduced without any loss. One of the most efficient strategies used in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images to extract differences between neighboring pixels, and its implementation does not introduce data expansion. Traditional as well as innovative prediction methods are included for the creation of inputs to the exclusive-or based decorrelation filter.

    The results of the filter are then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase of the algorithm because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can use multiple, dynamically chosen dictionaries instead of a single one.

    Experiments indicate that the exclusive-or based decorrelation filter, combined with the modified Lempel-Ziv-Welch dictionary coder, provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm's compression performance is below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but above the Zip implementation of the Deflate algorithm by 24%. In summary, the algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
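
    A minimal sketch of the reversibility and no-expansion properties, using the left neighbour as the predictor for brevity (the study pairs the filter with richer prediction methods):

```python
import numpy as np

def xor_decorrelate(img):
    """XOR each pixel with its left neighbour; the first column is kept as-is.
    The output stays 8 bits per pixel, so there is no data expansion."""
    out = img.copy()
    out[:, 1:] = img[:, 1:] ^ img[:, :-1]
    return out

def xor_restore(filtered):
    """Invert the filter by re-deriving each predictor from left to right."""
    img = filtered.copy()
    for j in range(1, img.shape[1]):
        img[:, j] = filtered[:, j] ^ img[:, j - 1]
    return img

img = np.random.randint(0, 256, size=(4, 5), dtype=np.uint8)
assert np.array_equal(xor_restore(xor_decorrelate(img)), img)  # fully reversible
```

    The filtered output would then be fed to the modified Lempel-Ziv-Welch coder.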

    K-means based clustering and context quantization

    Improvements to JPEG-LS via diagonal edge-based prediction

    JPEG-LS is the latest pixel-based lossless to near-lossless still image coding standard introduced by the Joint Photographic Experts Group (JPEG). In this standard, simple localized edge detection techniques are used to determine the predictive value of each pixel. These techniques only detect horizontal and vertical edges, and the corresponding predictors have only been optimized for the accurate prediction of pixels in the locality of horizontal and/or vertical edges. As a result, JPEG-LS produces large prediction errors in the locality of diagonal edges. In this paper we propose a low-complexity, low-cost technique that accurately detects diagonal edges and predicts the value of pixels to be encoded based on the gradients available within the standard predictive template of JPEG-LS. We provide experimental results showing that the proposed technique outperforms JPEG-LS in terms of mean squared prediction error by a margin of up to 8.51%.
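
    For reference, the fixed predictor in JPEG-LS is the median edge detector (MED) shown below; the diagonal rule that follows is only an illustrative guess at the kind of extension described, with a hypothetical threshold t (the paper's actual detector is not reproduced here):

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = left, b = above, c = upper-left."""
    if c >= max(a, b):
        return min(a, b)             # edge suspected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c                 # smooth region: planar prediction

def diagonal_predict(a, b, c, d, t=10):
    """Hypothetical diagonal extension (NOT the paper's exact rule);
    d = upper-right, t an illustrative threshold."""
    if abs(a - b) > t and abs(b - d) <= t:
        return c                     # NW-SE edge: continue from upper-left
    if abs(b - d) > t and abs(a - b) <= t:
        return d                     # NE-SW edge: continue from upper-right
    return med_predict(a, b, c)      # otherwise fall back to standard MED
```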

    Context quantization by minimum adaptive code length

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.