    Visually Lossless Perceptual Image Coding Based on Natural-Scene Masking Models

    Perceptual coding is a subdiscipline of image and video coding that uses models of human visual perception to achieve improved compression efficiency. Nearly all image and video coders include some perceptual coding strategies, most notably visual masking. Modern coders capitalize on basic forms of masking, such as the fact that distortion is harder to see in very dark and very bright regions, in regions with high-frequency content, and in temporal regions with abrupt changes. However, beyond these obvious forms of masking, many other masking phenomena occur (and co-occur) when viewing natural imagery. In this chapter, we present our latest research in perceptual image coding using natural-scene masking models. We specifically discuss: (1) how to predict local distortion visibility using improved natural-scene masking models, and (2) how to apply the models to high efficiency video coding (HEVC). As we will demonstrate, these techniques can offer 10–20% fewer bits than baseline HEVC in the ultra-high-quality regime.
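    As a rough illustration of the luminance-masking idea mentioned above (not the chapter's actual model), the sketch below maps a block's mean luma to a quantization-parameter offset so that very dark and very bright blocks are quantized more coarsely; the base QP, offset range, and mid-gray pivot are arbitrary choices for illustration.

        import numpy as np

        def luminance_qp_offset(block, qp_base=22, max_offset=4):
            """Toy luminance-masking heuristic: distortion is harder to see
            in very dark and very bright blocks, so raise QP (coarser
            quantization) there; mid-gray blocks keep the base QP."""
            mean_luma = block.mean()  # 8-bit luma, 0..255
            masking = abs(mean_luma - 128.0) / 128.0  # distance from mid-gray, 0..1
            return qp_base + int(round(max_offset * masking))

        # A dark block tolerates a larger QP than a mid-gray one.
        dark = np.full((16, 16), 20, dtype=np.uint8)
        mid = np.full((16, 16), 128, dtype=np.uint8)
        print(luminance_qp_offset(dark), luminance_qp_offset(mid))  # -> 25 22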

    Comparison of Interpolation Methods in Bayer CFA Image Compression Based on Structure Separation and APBT-JPEG

    The color filter array (CFA) captures only one-third of the necessary color intensities, and the full color image is generated from the captured data by interpolation. In recent years, algorithms for Bayer-patterned image compression based on “structure separation” have achieved better image quality. Building on previous work, this paper proposes an algorithm based on the all phase biorthogonal transform (APBT) and interpolation. Compared with the conventional DCT-JPEG, APBT-JPEG significantly reduces the number of complex multiplications and simplifies the quantization table. Several interpolation methods for the decompressed image data are also discussed, including nearest neighbor interpolation, bilinear interpolation, cubic convolution interpolation, and a novel interpolation method based on APIDCT. Experimental results show that the proposed algorithm outperforms the one based on “structure separation”, and that the APIDCT interpolation performs close to the conventional interpolation methods while surpassing them at high bit rates.
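    For context, bilinear interpolation over a Bayer mosaic (one of the conventional methods compared in the paper) can be sketched as below; the RGGB layout and the zero-fill-then-convolve formulation are generic assumptions, not the paper's implementation.

        import numpy as np
        from scipy.signal import convolve2d

        def bilinear_demosaic(cfa):
            """Minimal bilinear demosaicing for an RGGB Bayer mosaic (2-D array).
            Each channel is recovered by zero-filling its missing samples and
            convolving with a bilinear interpolation kernel (edges are
            approximate due to zero padding)."""
            h, w = cfa.shape
            r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
            b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
            g_mask = 1 - r_mask - b_mask

            k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
            k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

            r = convolve2d(cfa * r_mask, k_rb, mode='same')
            g = convolve2d(cfa * g_mask, k_g, mode='same')
            b = convolve2d(cfa * b_mask, k_rb, mode='same')
            return np.stack([r, g, b], axis=-1)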

    Image Compression by Wavelet Transform.

    Digital images are widely used in computer applications. Uncompressed digital images require considerable storage capacity and transmission bandwidth, so efficient image compression solutions are becoming more critical with the recent growth of data-intensive, multimedia-based web applications. This thesis studies image compression with wavelet transforms. As necessary background, the basic concepts of graphical image storage and currently used compression algorithms are discussed. The mathematical properties of several types of wavelets, including Haar, Daubechies, and biorthogonal spline wavelets, are covered, and the Embedded Zerotree Wavelet (EZW) coding algorithm is introduced. The last part of the thesis analyzes the compression results to compare the wavelet types.
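    A minimal sketch of one level of the 2-D Haar transform studied in such theses is shown below; the averaging/differencing normalization by 2 is one common convention among several, and an even-sized input is assumed.

        import numpy as np

        def haar2d_level(x):
            """One level of the 2-D Haar wavelet transform on an even-sized
            image: returns the LL (approximation) subband and the LH/HL/HH
            (detail) subbands. Most signal energy concentrates in LL, which
            is what makes wavelet coders such as EZW effective."""
            a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
            d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
            ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
            hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
            lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
            hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
            return ll, (lh, hl, hh)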

    DWT-CompCNN: Deep Image Classification Network for High Throughput JPEG 2000 Compressed Documents

    For any digital application with document images, such as retrieval, classification of document images is an essential stage. Conventionally, the full (uncompressed) versions of the document images make up the input dataset, which is problematic because of the large volume required to store them. It would therefore be valuable if the same classification task could be accomplished directly (with only partial decompression) on the compressed representation of the documents, making the whole process computationally more efficient. In this research work, a novel deep learning model, DWT-CompCNN, is proposed for classification of documents compressed with the High Throughput JPEG 2000 (HTJ2K) algorithm. The proposed DWT-CompCNN comprises five convolutional layers with 16, 32, 64, 128, and 256 filters, respectively, in each successive layer, to improve learning from the wavelet coefficients extracted from the compressed images. Experiments on two benchmark datasets, Tobacco-3482 and RVL-CDIP, demonstrate that the proposed model is time and space efficient and also achieves better classification accuracy in the compressed domain.
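    A plausible PyTorch rendering of the described five-layer architecture follows; only the filter counts (16 through 256) come from the abstract, while the kernel sizes, pooling, the single-channel wavelet-coefficient input, and the classifier head are assumptions (num_classes=10 matches Tobacco-3482).

        import torch
        import torch.nn as nn

        class DWTCompCNN(nn.Module):
            """Sketch of the five-conv-layer classifier described in the
            abstract (16/32/64/128/256 filters). Kernel size, pooling, and
            the head are assumptions, not the published architecture."""
            def __init__(self, num_classes=10, in_channels=1):
                super().__init__()
                layers, prev = [], in_channels
                for filters in (16, 32, 64, 128, 256):
                    layers += [nn.Conv2d(prev, filters, kernel_size=3, padding=1),
                               nn.ReLU(inplace=True),
                               nn.MaxPool2d(2)]
                    prev = filters
                self.features = nn.Sequential(*layers)
                self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                          nn.Flatten(),
                                          nn.Linear(256, num_classes))

            def forward(self, coeffs):  # coeffs: (N, 1, H, W) wavelet coefficients
                return self.head(self.features(coeffs))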

    The Wavelet Transform for Image Processing Applications


    Deep neural networks based error level analysis for lossless image compression based forgery detection.

    The proposed model is implemented with deep learning, based on counterfeit feature extraction and Error Level Analysis (ELA). Error level analysis improves the efficiency of distinguishing copy-move images produced by deepfakes from real ones: it examines an image in depth to identify whether the photograph has undergone alteration. The model trains and tests a convolutional neural network (CNN) on an image dataset to identify forged images; the CNN extracts the counterfeit features and detects whether an image is false. In the proposed approach, after the tests are carried out, the result is displayed as a pie chart showing the percentage with which the image is detected as forged. The method also detects different image compression ratios using the ELA process. The assessment results demonstrate the effectiveness of the proposed method.
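    A minimal sketch of the ELA step (re-save at a known JPEG quality, difference against the original, amplify) is given below; the quality level, amplification factor, and temporary-file handling are illustrative assumptions, and the resulting map would serve as input to a CNN.

        from PIL import Image, ImageChops

        def error_level_analysis(path, quality=90, scale=15):
            """Error level analysis: re-save the image as JPEG at a known
            quality and take the per-pixel difference. Edited regions
            recompress differently from the rest of the image, so they
            stand out in the amplified difference map."""
            original = Image.open(path).convert('RGB')
            original.save('_ela_tmp.jpg', 'JPEG', quality=quality)
            resaved = Image.open('_ela_tmp.jpg')
            diff = ImageChops.difference(original, resaved)
            return diff.point(lambda p: min(255, p * scale))  # amplify for visibility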

    A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors

    Development of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provides timely and meaningful solutions to exponentially growing patient data and the associated analysis requirements. Among 1D medical signals, electroencephalography (EEG) data is of great importance to neurologists for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, hence the need for an efficient compression system. This paper presents a new, efficient, high-performance lossless EEG compression scheme using wavelet transforms and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combined entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error modeling scheme is also investigated to improve compression efficiency. A compression ratio of 2.99 (compression efficiency of 67%) is achieved with the proposed scheme with reduced encoding time, providing diagnostic reliability for lossless transmission as well as recovery of EEG signals in telemedicine applications.
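    For reference, the S transform (an integer-to-integer Haar lifting step commonly used in lossless wavelet coding) can be sketched as below; this is a generic example, not necessarily the paper's transform, and it assumes an even-length input.

        import numpy as np

        def s_transform(x):
            """One level of the integer-to-integer S (Haar lifting) transform,
            suitable for lossless compression: both outputs are integers and
            the original signal is exactly recoverable."""
            x = np.asarray(x, dtype=np.int64)
            s = (x[0::2] + x[1::2]) >> 1   # integer average (approximation)
            d = x[0::2] - x[1::2]          # difference (detail)
            return s, d

        def inverse_s_transform(s, d):
            """Exact inverse: reconstructs the even/odd samples losslessly."""
            even = s + ((d + 1) >> 1)
            odd = even - d
            x = np.empty(2 * len(s), dtype=np.int64)
            x[0::2], x[1::2] = even, odd
            return x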