
    Regularity scalable image coding based on wavelet singularity detection

    In this paper, we propose an adaptive algorithm for scalable wavelet image coding based on a general feature of images: their regularity. In pattern recognition and computer vision, the regularity of an image is estimated from oriented wavelet coefficients and quantified by Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has proven to be a better approach than tracing the wavelet transform modulus maxima (WTMM), because the irregular sampling of the WTMM complicates reconstruction. Moreover, examples show that the WTMM representation cannot uniquely characterize a signal, which implies that reconstruction from the WTMM may not be consistently stable; the WTMM approach also requires considerably more computation. We therefore use the WTMS approach to estimate image regularity from the separable wavelet transform coefficients. Since localization is not a concern here, we allow decimation when evaluating the interscale evolution. The estimated regularity is then exploited in our adaptive regularity scalable wavelet image coding algorithm. The algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution scalable and signal-to-noise ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels at higher peak signal-to-noise ratios (PSNRs) and lower bit rates. Compared with other feature-based wavelet scalable coding algorithms, the proposed algorithm performs better in terms of visual perception, computational complexity and coding efficiency.
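
    The interscale decay that the WTMS approach exploits can be sketched in code. The following is a minimal illustration, not the authors' algorithm: it assumes PyWavelets, sums the coefficient moduli of each detail band globally rather than over a directional cone of influence, and adopts the common convention in which the modulus sum of a Lipschitz-alpha singularity grows roughly as 2^(j(alpha + 1/2)) across dyadic scales j.

```python
# A minimal sketch, not the authors' algorithm: estimate a Lipschitz exponent
# from the interscale decay of wavelet modulus sums. The global per-band sum
# is a crude stand-in for summing over a directional cone of influence, and
# the 2^{j(alpha + 1/2)} scaling is an assumed normalization convention.
import numpy as np
import pywt

def lipschitz_from_wtms(signal, wavelet="db2", levels=5):
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    details = coeffs[1:][::-1]               # cD_1 (finest) ... cD_n (coarsest)
    sums = np.array([np.sum(np.abs(d)) for d in details])
    j = np.arange(1, len(sums) + 1)          # dyadic scale index
    slope, _ = np.polyfit(j, np.log2(sums + 1e-12), 1)
    return slope - 0.5                       # alpha under the assumed convention

# A step edge should yield an exponent near 0; smoother signals yield larger values.
x = np.linspace(0.0, 1.0, 1024)
step = (x > 0.5).astype(float)
print(f"estimated alpha for a step edge: {lipschitz_from_wtms(step):.2f}")
```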

    An Improved Image Compression Algorithm Based on Daubechies Wavelets with Arithmetic Coding

    In this paper, we present and investigate image compression techniques that exploit visual redundancy. Effectively defining and exploiting the compression context of natural images is a difficult problem. Inspired by recent advances in image compression, we propose a Daubechies-wavelet scheme with arithmetic coding that targets improved visual quality rather than spatial fidelity. Image compression using Daubechies wavelets with arithmetic coding is a simple and effective technique that produces good compression results. In this technique, we first apply the Daubechies wavelet transform, then a 2D Walsh-wavelet transform on each k×k block (where k = 2^n) of the low-frequency subband. The values of each transformed k×k block are then separated, and arithmetic coding is applied to compress the image. Index Terms: Image Compression, Daubechies Wavelet, Arithmetic Coding
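
    A minimal sketch of the front end of such a pipeline is given below, assuming PyWavelets. The Walsh-wavelet stage and the arithmetic coder itself are omitted; the zeroth-order entropy of the quantized coefficients is reported instead, as a proxy for the rate an arithmetic coder could approach, and the wavelet ('db4') and step size are illustrative choices rather than the paper's parameters.

```python
# Hedged sketch: Daubechies wavelet transform plus uniform quantization.
# The entropy printed at the end approximates the bits/coefficient that an
# ideal arithmetic coder could achieve on the quantized coefficients.
import numpy as np
import pywt

def dwt_quantize(img, wavelet="db4", level=3, step=8.0):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)       # pack subbands into one array
    return np.round(flat / step).astype(np.int32)  # uniform quantizer

def entropy_bits_per_symbol(q):
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()               # zeroth-order entropy

img = np.random.rand(256, 256) * 255             # stand-in for a natural image
q = dwt_quantize(img)
print(f"{entropy_bits_per_symbol(q):.2f} bits/coefficient")
```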

    Quaternionic Wavelets for Image Coding

    The Quaternionic Wavelet Transform (QWT) is a recent improvement on standard wavelets with promising theoretical properties. This transform has proved superior to standard wavelets in texture analysis, so we propose to apply it in a wavelet-based image coding process. The main point is the interpretation and coding of the QWT phase, which is not dealt with in the literature. At equal bitrates, our algorithm achieves better visual quality than standard wavelet-based methods.

    Image Compression using Discrete Cosine Transform & Discrete Wavelet Transform

    Image compression addresses the problem of reducing the amount of data required to represent a digital image. Compression is achieved by removing one or more of three basic data redundancies: (1) coding redundancy, present when less than optimal (i.e., smallest-length) code words are used; (2) interpixel redundancy, which results from correlations between the pixels of an image; and (3) psychovisual redundancy, which is due to data ignored by the human visual system (i.e., visually nonessential information). Huffman codes contain the smallest possible number of code symbols (e.g., bits) per source symbol (e.g., grey-level value) subject to the constraint that the source symbols are coded one at a time. Huffman coding, combined with reduction of image redundancies using the Discrete Cosine Transform (DCT), therefore compresses image data to a very good extent. The DCT is an example of transform coding, and the current JPEG standard uses it as its basis. The DCT relocates the highest energies to the upper-left corner of the image, while lower-energy information is relocated to other areas. The DCT is fast to compute, works best for images with smooth edges such as photos of human subjects, and its coefficients are all real numbers, unlike those of the Fourier transform. The Inverse Discrete Cosine Transform (IDCT) retrieves the image from its transform representation. The Discrete Wavelet Transform (DWT) has gained widespread acceptance in signal processing and image compression. Because of their inherent multi-resolution nature, wavelet coding schemes are especially suitable for applications where scalability and tolerable degradation are important. The JPEG committee has recently released its new image coding standard, JPEG-2000, which is based upon the DWT.
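
    The DCT behavior described above is easy to demonstrate. The sketch below, assuming SciPy, runs a JPEG-like round trip: 8x8 block DCT, uniform quantization (a flat step standing in for JPEG's quantization tables), and IDCT reconstruction.

```python
# JPEG-style round trip on 8x8 blocks: DCT compacts energy into the
# upper-left (low-frequency) corner, quantization discards the rest,
# and the IDCT reconstructs the image from the surviving coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def codec_roundtrip(img, block=8, step=24.0):
    h, w = img.shape                             # assumes h, w divisible by block
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = img[i:i+block, j:j+block].astype(float)
            c = dctn(b, norm="ortho")            # energy compacts to top-left
            cq = np.round(c / step) * step       # quantize / dequantize
            out[i:i+block, j:j+block] = idctn(cq, norm="ortho")
    return out

img = np.random.rand(64, 64) * 255
rec = codec_roundtrip(img)
mse = np.mean((img - rec) ** 2)
print(f"PSNR: {10 * np.log10(255**2 / mse):.1f} dB")
```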

    A rate-constrained adaptive quantization scheme for wavelet pyramid image coding

    It is well known that an orthogonal wavelet transform with filters of nonlinear phase gives poor visual results in low bit rate image coding. The biorthogonal wavelet is a good substitute, which is, however, essentially nonorthogonal. A greedy steepest descent algorithm is proposed to design an adaptive quantization scheme based on the actual statistics of the input image. Since the L2 norm of the quantization error is not preserved through the nonorthogonal transform, a quantization error estimation formula considering the characteristic value of the reconstruction filters is derived and incorporated into the adaptive quantization scheme. Computer simulation results demonstrate significant SNR gains over standard coding techniques, with comparable visual improvements.
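
    The flavor of such a greedy allocation can be sketched as follows. This is an illustration under textbook assumptions, not the paper's scheme: each subband's distortion is weighted by a synthesis-filter energy factor w_i (since the transform is nonorthogonal), distortion follows the classical high-rate model D_i(b) ~ w_i * var_i * 2^(-2b), and each step grants one bit to the subband with the steepest distortion drop.

```python
# Hedged sketch of greedy, rate-constrained bit allocation across subbands.
# Weights w_i stand in for the reconstruction-filter characteristic values;
# all numbers below are illustrative, not from the paper.
import numpy as np

def greedy_allocation(variances, weights, total_bits):
    bits = np.zeros(len(variances), dtype=int)
    dist = weights * variances                 # weighted distortion at 0 bits
    for _ in range(total_bits):
        gain = dist - dist / 4.0               # drop from adding 1 bit (2^-2)
        k = int(np.argmax(gain))               # steepest-descent step
        bits[k] += 1
        dist[k] /= 4.0
    return bits

var = np.array([900.0, 400.0, 120.0, 30.0])    # per-subband variances (assumed)
w = np.array([1.8, 1.3, 1.1, 0.9])             # synthesis energy weights (assumed)
print(greedy_allocation(var, w, total_bits=12))
```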

    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle, that of self-similarity, and the two will be related concretely via an image coding algorithm which combines the two normally disparate strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and another is added to the gamut by this work. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
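
    A minimal sketch of an on-line, self-structuring vector quantizer in this spirit appears below. It is an assumption-laden illustration, not the thesis's design: the tree search is flattened into a linear scan for brevity, each codeword keeps a running centroid, and a cell splits once its accumulated distortion crosses a threshold, so the codebook grows without a distinct training phase.

```python
# Hedged sketch of an on-line, self-structuring VQ. A real tree-structured
# quantizer would search hierarchically; the flat scan here is for brevity.
import numpy as np

class Cell:
    """One codeword: running centroid plus accumulated distortion."""
    def __init__(self, centroid):
        self.c = np.array(centroid, dtype=float)
        self.n = 1
        self.d = 0.0

def grow_codebook(vectors, split_at=50.0, seed=0):
    rng = np.random.default_rng(seed)
    cells = [Cell(vectors[0])]
    for v in vectors:
        k = int(np.argmin([np.sum((cell.c - v) ** 2) for cell in cells]))
        cell = cells[k]
        cell.d += np.sum((cell.c - v) ** 2)
        cell.n += 1
        cell.c += (v - cell.c) / cell.n          # running-mean centroid update
        if cell.d > split_at:                    # self-structuring: split busy cell
            eps = 0.01 * rng.standard_normal(cell.c.shape)
            cells[k] = Cell(cell.c + eps)
            cells.append(Cell(cell.c - eps))
    return cells

stream = np.random.default_rng(1).standard_normal((500, 4))
print(f"{len(grow_codebook(stream))} codewords grown on-line, no training phase")
```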

    A novel entropy-constrained adaptive quantization scheme for wavelet pyramid image coding

    The orthogonal wavelet transform with filters of nonlinear phase gives poor visual results in low bit rate image coding. The biorthogonal wavelet is a good substitute, which is, however, essentially nonorthogonal. A greedy steepest descent algorithm is proposed to design an adaptive quantization scheme based on the actual statistics of the input image. Since the L2 norm of the quantization error is not preserved through the nonorthogonal transform, a quantization error estimation formula considering the characteristic value of the reconstruction filters is derived and incorporated into the adaptive quantization scheme. Computer simulation results demonstrate significant SNR gains over standard coding techniques, with comparable visual improvements.
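
    The point about the unpreserved L2 norm admits a compact statement. The following is a hedged reconstruction rather than the paper's formula: writing the reconstruction error as a combination of synthesis basis vectors g_i weighted by the coefficient quantization errors q_i, and assuming those errors are uncorrelated, the image-domain distortion is approximately a weighted sum of coefficient distortions, with weights w_i = ||g_i||^2 playing the role of the characteristic values of the reconstruction filters:

```latex
\hat{x} - x = \sum_i q_i \, g_i
\quad\Longrightarrow\quad
E\lVert \hat{x} - x \rVert_2^2 \approx \sum_i w_i \, E[q_i^2],
\qquad w_i = \lVert g_i \rVert_2^2 .
```

    For an orthonormal basis every w_i equals 1 and this reduces to Parseval's identity; a biorthogonal wavelet makes the w_i deviate from 1, which is precisely what the adaptive quantizer must account for.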

    Peak Transform for Efficient Image Representation and Coding

    DOI: 10.1109/TIP.2007.896599
    In this work, we introduce a nonlinear geometric transform, called the peak transform (PT), for efficient image representation and coding. The proposed PT converts high-frequency signals into low-frequency ones, making them much easier to compress. Coupled with the wavelet transform and subband decomposition, the PT significantly reduces signal energy in high-frequency subbands and achieves a significant transform coding gain, which has important applications in efficient data representation and compression. To maximize the transform coding gain, we develop a dynamic programming solution for optimum PT design. Based on the PT, we design an image encoder, called the PT encoder, for efficient image compression. Extensive experimental results demonstrate that, in wavelet-based subband decomposition, the signal energy in high-frequency subbands can be reduced by up to 60% when a PT is applied. The PT image encoder outperforms state-of-the-art JPEG2000 and H.264 (INTRA) encoders by up to 2-3 dB in peak signal-to-noise ratio (PSNR), especially for images with a significant amount of high-frequency components. Our experimental results also show that the proposed PT efficiently captures and preserves high-frequency image features (e.g., edges) and yields significantly improved visual quality.
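
    The paper's construction is not reproduced here, but the core idea of converting high-frequency content into low-frequency content can be illustrated on a 1-D signal by folding it at its local extrema and keeping the fold positions as side information. The sketch below is exactly such a peak-folding toy, an assumed reading of the PT rather than its published design; note how a high-frequency triangle wave folds into a monotone ramp while remaining exactly invertible.

```python
# Toy "peak folding": flip the accumulation direction at every local extremum
# so oscillation becomes a slowly varying ramp; fold positions are the side
# information needed for exact inversion. Illustrative, not the paper's PT.
import numpy as np

def peak_fold(x):
    y = np.empty(len(x))
    y[0] = x[0]
    sign, folds = 1.0, []
    for n in range(1, len(x)):
        if n >= 2 and (x[n] - x[n-1]) * (x[n-1] - x[n-2]) < 0:
            sign = -sign                     # local extremum: fold here
            folds.append(n - 1)
        y[n] = y[n-1] + sign * (x[n] - x[n-1])
    return y, folds

def peak_unfold(y, folds):
    x = np.empty(len(y))
    x[0] = y[0]
    sign, fset = 1.0, set(folds)
    for n in range(1, len(y)):
        if (n - 1) in fset:
            sign = -sign                     # undo the fold at the same spot
        x[n] = x[n-1] + sign * (y[n] - y[n-1])
    return x

t = np.arange(256)
x = np.abs(((t / 16.0) % 2) - 1)             # high-frequency triangle wave
y, folds = peak_fold(x)                      # y is a low-frequency ramp
assert np.allclose(peak_unfold(y, folds), x) # exact inversion
print(f"{len(folds)} fold positions stored as side information")
```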