
    2-step scalar deadzone quantization for bitplane image coding

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that couples uniform scalar deadzone quantization (USDQ) with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the symbols emitted by the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ into current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
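The USDQ baseline that 2SDQ builds on can be sketched in a few lines (a minimal illustration with hypothetical helper names, not the paper's implementation; the 2SDQ variant would additionally switch between a coarse and a fine step size depending on coefficient density):

```python
import numpy as np

def usdq_index(c, step):
    """Uniform scalar deadzone quantization: the zero bin is twice as
    wide as the other bins (the deadzone), as used in JPEG2000."""
    return np.sign(c) * np.floor(np.abs(c) / step)

def usdq_dequant(q, step, r=0.5):
    """Reconstruction: r in [0, 1) places the point inside the bin
    (r = 0.5 is the midpoint); zero indexes reconstruct to zero."""
    return np.sign(q) * (np.abs(q) + r) * step * (np.abs(q) > 0)

coeffs = np.array([-7.3, -0.4, 0.0, 1.2, 3.9, 12.5])  # toy wavelet coefficients
q = usdq_index(coeffs, step=2.0)
rec = usdq_dequant(q, step=2.0)
```

Note how -0.4 and 1.2 both fall into the (double-width) deadzone and reconstruct to zero, while larger coefficients land in uniform bins of width 2.0.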

    Cell-based 2-step scalar deadzone quantization for high bit-depth hyperspectral image coding

    Remote sensing images often need to be coded and/or transmitted with constrained computational resources. Among other features, such images commonly have high spatial, spectral, and bit-depth resolution, which can make them difficult to handle. This letter introduces an embedded quantization scheme based on two-step scalar deadzone quantization (2SDQ) that enhances the quality of transmitted images when they are coded with a constrained number of bits. The proposed scheme is devised for use in JPEG2000. It is named cell-based 2SDQ because it uses cells, i.e., small sets of wavelet coefficients within the codeblocks defined by JPEG2000. Cells permit a finer discrimination of the coefficients to which the proposed quantizer is applied. Experimental results indicate that the proposed scheme is especially beneficial for high bit-depth hyperspectral images.
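The cell partition itself is easy to picture (a toy sketch; the 4x4 cell size, the Laplacian toy data, and the peak-magnitude selection rule are all assumptions for illustration, not the letter's actual parameters):

```python
import numpy as np

def partition_into_cells(codeblock, cell=4):
    """Split a codeblock into small square cells; per-cell statistics
    can then drive the choice of quantizer within each cell."""
    h, w = codeblock.shape
    return [codeblock[i:i + cell, j:j + cell]
            for i in range(0, h, cell)
            for j in range(0, w, cell)]

rng = np.random.default_rng(0)
cb = rng.laplace(scale=4.0, size=(8, 8))   # toy 8x8 wavelet codeblock
cells = partition_into_cells(cb, cell=4)
# Hypothetical selection rule: flag cells with large peak magnitude,
# where a different (e.g., coarser) first step size might be applied.
coarse = [bool(np.abs(c).max() > 8.0) for c in cells]
```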

    General embedded quantization for wavelet-based lossy image coding

    Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach in wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance, but the quantizer established by this scheme allows little variation. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates, achieving optimal coding performance. Practical GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance of GEQ is evaluated through experimental results obtained within the framework of modern image coding systems.
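A generic multistage (embedded) quantizer of the kind GEQ generalizes can be sketched as follows; the step sizes here are arbitrary assumptions, whereas the paper optimizes the stages for specific decoding rates:

```python
def embedded_stages(c, steps):
    """Multistage quantization sketch for a coefficient magnitude:
    each stage subdivides the interval containing |c| with a finer
    step, so truncating after any stage yields a coarser estimate."""
    lo = 0.0                       # lower bound of current interval
    indexes = []                   # one index emitted per stage
    for s in steps:
        k = int((abs(c) - lo) // s)
        indexes.append(k)
        lo += k * s                # refine the interval for the next stage
    return indexes, lo

# Decoding only the first stages gives |c| in [8, 16); all three
# stages narrow it to [13.5, 14.0).
idx, lo = embedded_stages(13.7, steps=[8.0, 2.0, 0.5])
```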

    A fully embedded two-stage coder for hyperspectral near-lossless compression

    This letter proposes a near-lossless coder for hyperspectral images. The coding technique is fully embedded and minimizes the distortion in the l2 norm initially and in the l∞ norm subsequently. Based on a two-stage near-lossless compression scheme, it includes a lossy and a near-lossless layer. The novelties are the observation of the convergence of the entropy of the residuals in the original domain and in the spectral-spatial transformed domain, and an embedded near-lossless layer. These contributions enable progressive transmission while optimizing both SNR and PAE performance. The embeddedness is accomplished by bitplane coding plus arithmetic coding. Experimental results suggest that the proposed method yields highly competitive coding performance for hyperspectral images, outperforming multi-component JPEG2000 in the l∞ norm, matching it in the l2 norm, and also outperforming M-CALIC in the near-lossless case (for PAE ≥ 5).
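The classic two-stage near-lossless construction (a lossy layer plus a residual quantizer whose bin width guarantees the peak-absolute-error bound) can be sketched as follows; the names and data are hypothetical, and the letter's embedded bitplane/arithmetic second layer is not shown:

```python
import numpy as np

def near_lossless_residual(x, x_lossy, pae):
    """Second stage: quantize the residual with uniform bins of width
    2*PAE + 1, so the final reconstruction error never exceeds PAE."""
    r = x - x_lossy
    step = 2 * pae + 1
    q = np.round(r / step).astype(int)   # residual indexes to entropy-code
    return q, x_lossy + q * step         # near-lossless reconstruction

x = np.array([10, 52, 130, 200])         # toy original samples
x_hat = np.array([12, 50, 128, 205])     # toy output of the lossy layer
q, rec = near_lossless_residual(x, x_hat, pae=2)
# |x - rec| <= 2 for every sample
```

Only the residual of the last sample (error 5 > PAE) needs a nonzero correction index; the others are already within the bound.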

    Consolidating Literature for Images Compression and Its Techniques

    With the proliferation of readily available image content, image compression has become a topic of considerable importance. As demand for digital imaging grows rapidly, storage capacity must also be considered. Image compression refers to reducing the size of an image to minimize storage requirements without harming image quality. An appropriate compression technique is therefore needed to save storage capacity while preserving valuable information. This paper consolidates literature focused on image compression, thresholding algorithms, and quantization algorithms, and then presents related research in these areas.

    Entropy-based evaluation of context models for wavelet-transformed images

    Entropy is a measure of a message's uncertainty. Among other aspects, it serves to determine the minimum coding rate that practical systems may attain. This paper defines an entropy-based measure to evaluate context models employed in wavelet-based image coding. The proposed measure is defined considering the mechanisms utilized by modern coding systems, and it establishes the maximum performance achievable with each context model. This helps to determine the adequateness of a model under different coding conditions and serves to predict with high precision the coding rate achieved by practical systems. Experimental results evaluate four well-known context models using different types of images, coding rates, and transform strategies. They reveal that, under specific coding conditions, some widespread context models may not be as adequate as generally thought. The hints provided by this analysis may help to design simpler and more efficient wavelet-based image codecs.
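The core quantity behind such a measure, the empirical conditional entropy of the emitted symbols given their context, can be sketched as follows (a minimal illustration with toy data; the paper's measure additionally accounts for the mechanisms of modern coding systems):

```python
from collections import Counter
from math import log2

def conditional_entropy(symbols, contexts):
    """Empirical H(symbol | context) in bits/symbol: a lower bound on
    the rate a context-adaptive entropy coder can reach with this
    context model on these data."""
    joint = Counter(zip(contexts, symbols))   # counts of (context, symbol)
    ctx = Counter(contexts)                   # counts of each context
    n = len(symbols)
    return -sum(cnt / n * log2(cnt / ctx[c])
                for (c, s), cnt in joint.items())

# toy binary significance symbols with a 1-bit context
syms = [0, 0, 1, 1, 0, 1, 0, 0]
ctxs = [0, 0, 0, 1, 1, 1, 1, 0]
h = conditional_entropy(syms, ctxs)   # slightly below the unconditional H
```

A better context model yields a lower conditional entropy, i.e., a lower bound on the achievable coding rate; comparing this bound across models is the spirit of the evaluation described above.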