
    High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance
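
    As an illustration of the two-stage idea above (and not the paper's actual quantizer), the Python sketch below uses a plain scalar quantizer as a stand-in for the entropy-constrained residual vector quantizer: it quantizes an image, forms the residual that would be passed to the first order entropy coder, and estimates that residual's first-order entropy, which plays the role of the rate term in the entropy-constrained design. All names and step sizes are illustrative.

```python
import numpy as np

def first_order_entropy(x):
    """Empirical first-order (memoryless) entropy in bits per symbol."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def quantize_and_residual(image, step=8):
    """Toy scalar stand-in for the residual VQ stage: quantize the image and
    form the residual that would be coded by the first order entropy coder."""
    quantized = (np.round(image / step) * step).astype(np.int32)
    residual = image.astype(np.int32) - quantized
    return quantized, residual

# Hypothetical smooth 8-bit test image (a gradient plus mild noise).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
img = np.clip(yy + xx + rng.integers(-3, 4, size=(64, 64)), 0, 255)

_, res = quantize_and_residual(img, step=8)
print("image entropy   :", round(first_order_entropy(img), 2), "bits/pixel")
print("residual entropy:", round(first_order_entropy(res), 2), "bits/pixel")
```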

    Sample-Parallel Execution of EBCOT in Fast Mode

    JPEG 2000’s most computationally expensive building block is the Embedded Block Coder with Optimized Truncation (EBCOT). This paper evaluates how encoders targeting a parallel architecture such as a GPU can increase their throughput in use cases where very high data rates are used. The compression efficiency in the less significant bit-planes is then often poor, and it is beneficial to enable the Selective Arithmetic Coding Bypass style (fast mode) in order to trade a small loss in compression efficiency for a reduction of the computational complexity. More importantly, this style exposes a more finely grained parallelism that can be exploited to execute the raw coding passes, including bit-stuffing, in a sample-parallel fashion. For a latency- or memory-critical application that encodes one frame at a time, EBCOT’s tier-1 is sped up between 1.1x and 2.4x compared to an optimized GPU-based implementation. When a low GPU occupancy has already been addressed by encoding multiple frames in parallel, the throughput can still be improved by 5% for high-entropy images and 27% for low-entropy images. Best results are obtained when enabling the fast mode after the fourth significant bit-plane. For most of the test images, the compression rate is within 1% of the original.
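
    A rough Python sketch of the bypass idea (not the paper's GPU implementation): below a chosen bit-plane, the coding passes skip the context-adaptive MQ arithmetic coder and emit raw bits, which removes the serial coder state and makes sample-parallel execution possible. The threshold constant and block contents are hypothetical, and bit-stuffing is deliberately omitted.

```python
import numpy as np

def split_bitplanes(coeffs, num_planes=8):
    """Split quantized coefficient magnitudes into bit-planes, most significant first."""
    mags = np.abs(coeffs)
    return [((mags >> p) & 1).astype(np.uint8) for p in range(num_planes - 1, -1, -1)]

def raw_pack(plane):
    """Bypass ("fast mode") coding of one bit-plane: emit the bits as-is.
    Real EBCOT also performs bit-stuffing to avoid 0xFF sequences; omitted here."""
    return np.packbits(plane.ravel())

# Hypothetical 8x8 code-block of quantized wavelet coefficients.
rng = np.random.default_rng(1)
block = rng.integers(-128, 128, size=(8, 8))

BYPASS_AFTER = 4  # hypothetical: enable the fast mode after the 4th significant bit-plane
for index, plane in enumerate(split_bitplanes(block)):
    if index < BYPASS_AFTER:
        pass                       # would go through the context-adaptive MQ arithmetic coder
    else:
        payload = raw_pack(plane)  # no coder state, so samples can be processed in parallel
```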

    Pipelined implementation of JPEG image compression using HDL

    This thesis presents the architecture and design of a JPEG compressor for color images using VHDL. The system consists of major parts: a color space converter, down sampler, 2-D DCT module, quantization, zigzag scanning and entropy coding. The color space conversion transforms the RGB colors to the YCbCr color coding. The down sampling operation reduces the sampling rate of the color information (Cb and Cr). The 2-D DCT transforms the pixel data from the spatial domain to the frequency domain. The quantization operation eliminates the high frequency components and the small amplitude coefficients of the cosine expansion. Finally, the entropy coding uses run-length encoding (RLE), Huffman variable-length coding (VLC) and differential coding to decrease the number of bits used to represent the image. The JPEG compression is a lossy compression, since the downsampling and quantization operations are irreversible, but the losses can be controlled in order to keep the necessary image quality. Architectures for these parts were designed and described in VHDL. The results were observed using the Active-HDL simulator and the code was synthesized using Xilinx ISE for a Virtex-4 FPGA. This pipelined architecture has a minimum latency of 187 clock cycles.
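
    The stages listed above map to a few small numerical steps. The following Python sketch (NumPy/SciPy, not the thesis' VHDL) walks one hypothetical 8x8 tile through color conversion, 4:2:0 downsampling, the 2-D DCT, uniform quantization and zigzag scanning; the flat quantization step of 16 is a placeholder for JPEG's 8x8 quantization tables.

```python
import numpy as np
from scipy.fft import dctn

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr as used by baseline JPEG."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def downsample_420(chroma):
    """4:2:0 chroma subsampling by 2x2 averaging."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def zigzag(block8):
    """Reorder an 8x8 coefficient block into the JPEG zigzag scan order."""
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return np.array([block8[i, j] for i, j in order])

# One hypothetical 8x8 RGB tile through the luma path of the pipeline.
rng = np.random.default_rng(2)
rgb = rng.integers(0, 256, size=(8, 8, 3)).astype(np.float64)

y, cb, cr = rgb_to_ycbcr(rgb)
cb_sub, cr_sub = downsample_420(cb), downsample_420(cr)  # reduced chroma sampling rate
coeffs = dctn(y - 128.0, norm="ortho")                   # level shift + 2-D DCT
quantized = np.round(coeffs / 16).astype(np.int32)       # placeholder for the 8x8 quant table
scan = zigzag(quantized)                                 # then RLE/Huffman entropy coding
```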

    Fractal Image Coding combined with Sub-band Decomposition

    In order to improve the performance of fractal image coding, this paper proposes a new coding scheme combined with sub-band resolution decomposition. In the proposed scheme, an input image is first decomposed into low and high resolution sub-band images. Fractal block coding with adaptive range block size is performed only for the lowest resolution sub-band image. On the other hand, direct quantization and entropy coding are carried out for the other, higher resolution sub-band images. The residual difference between the reconstructed and the original lowest resolution sub-band image is also quantized and entropy coded in order to enhance the coding performance. Computer simulation experiments were carried out employing 4-sub-band and 7-sub-band decompositions. The results show that the proposed coding scheme gives higher SNR (Signal-to-Noise Ratio) values and better reconstructed image quality compared to the conventional fractal block coding scheme.
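
    A toy Python illustration of the split described above, assuming PyWavelets is available: the image is decomposed into sub-bands, a simple fixed-size fractal block coder (least-squares scale and offset per range block) is applied to the low-resolution band only, and the detail bands are quantized directly. The block sizes and step sizes are placeholders, and the paper's adaptive range-block sizes and residual coding of the reconstructed low band are not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fractal_code_block(range_block, domain_blocks):
    """Least-squares fit of one range block by an affine map (scale s, offset o)
    of the best-matching domain block."""
    r = range_block.ravel().astype(np.float64)
    best = None
    for index, d in enumerate(domain_blocks):
        dv = d.ravel().astype(np.float64)
        var = dv.var()
        s = 0.0 if var == 0 else np.cov(dv, r, bias=True)[0, 1] / var
        s = float(np.clip(s, -1.0, 1.0))           # keep the block map contractive
        o = r.mean() - s * dv.mean()
        err = float(((s * dv + o - r) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, index, s, o)
    return best[1:]                                # (domain index, scale, offset)

rng = np.random.default_rng(3)
img = rng.random((64, 64))

# Sub-band decomposition: fractal-code only the low-resolution band,
# quantize the detail bands directly.
ll, (lh, hl, hh) = pywt.dwt2(img, "haar")          # each band is 32x32 here

range_blocks  = [ll[i:i + 4, j:j + 4] for i in range(0, 32, 4) for j in range(0, 32, 4)]
domain_blocks = [ll[i:i + 8:2, j:j + 8:2] for i in range(0, 25, 8) for j in range(0, 25, 8)]
fractal_codes = [fractal_code_block(r, domain_blocks) for r in range_blocks]

detail_codes = [np.round(band / 0.05).astype(np.int16) for band in (lh, hl, hh)]  # then entropy coded
```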

    Full Resolution Image Compression with Recurrent Neural Networks

    This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study "one-shot" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3%-8.8% AUC (area under the rate-distortion curve), depending on the perceptual metric used. As far as we know, this is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.
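
    The difference between "one-shot" and additive reconstruction is easiest to see in code. The sketch below is a deliberately tiny stand-in (random linear maps instead of the paper's convolutional LSTM/GRU encoder and decoder): in the additive scheme, each iteration binarizes a code for the remaining residual and the decoded estimates are summed, so every extra iteration spends more bits and refines the image further.

```python
import numpy as np

def binarize(latent):
    """Sign binarizer: the transmitted code is one bit (+1/-1) per latent element."""
    return np.where(latent >= 0, 1.0, -1.0)

def additive_reconstruction(x, encode, decode, iterations=4):
    """Additive scheme: every iteration codes the residual that is still left and
    the decoded estimates are summed. A "one-shot" decoder would instead predict
    the full image from all bits received so far."""
    recon = np.zeros_like(x)
    bits_sent = 0
    for _ in range(iterations):
        residual = x - recon
        code = binarize(encode(residual))
        recon = recon + decode(code)
        bits_sent += code.size
    return recon, bits_sent

# Hypothetical stand-ins for the paper's convolutional LSTM/GRU encoder and decoder.
rng = np.random.default_rng(4)
W = rng.normal(scale=0.1, size=(16, 64))
encode = lambda r: W @ r.ravel()
decode = lambda c: (0.05 * (W.T @ c)).reshape(8, 8)

x = rng.random((8, 8))
recon, nbits = additive_reconstruction(x, encode, decode)
print("bits sent:", nbits, " reconstruction MSE:", float(((x - recon) ** 2).mean()))
```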

    Information preserved guided scan pixel difference coding for medical images

    This paper analyzes the information content of medical images, with 3-D MRI images as an example, in terms of information entropy. The results of the analysis justify the use of Pixel Difference Coding for preserving all information contained in the original pictures, in other words, lossless coding. The experimental results also indicate that a compression ratio of CR = 2:1 can be achieved under the lossless constraint. A practical implementation of Pixel Difference Coding, which allows interactive retrieval of a local ROI (Region of Interest) while maintaining the near-lower-bound information entropy, is discussed.
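
    Pixel difference coding itself is a one-line transform: replace each pixel by its difference from the previous pixel along the scan, which concentrates the histogram around zero and lowers the first-order entropy. A minimal Python sketch on a synthetic smooth slice (not real MRI data) follows; the bit depth and scan order are illustrative assumptions.

```python
import numpy as np

def first_order_entropy(symbols):
    """Memoryless entropy of the symbol stream in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def pixel_differences(image):
    """Lossless pixel difference coding along the scan direction:
    keep the first pixel of each row, store horizontal differences elsewhere."""
    diff = image.astype(np.int32)
    diff[:, 1:] = diff[:, 1:] - image[:, :-1].astype(np.int32)
    return diff

# Synthetic smooth 12-bit slice standing in for an MRI image.
yy, xx = np.mgrid[0:128, 0:128]
slice_ = ((np.sin(yy / 9.0) + np.cos(xx / 7.0) + 2.0) * 1000.0).astype(np.int32)

raw_bits  = first_order_entropy(slice_)
diff_bits = first_order_entropy(pixel_differences(slice_))
print(f"raw entropy {raw_bits:.2f} b/px, difference entropy {diff_bits:.2f} b/px")
print(f"CR vs. 12-bit storage ~ {12.0 / diff_bits:.2f}:1")
```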

    Quality Assessment of Deep-Learning-Based Image Compression

    Image compression standards rely on predictive coding, transform coding, quantization and entropy coding in order to achieve high compression performance. Very recently, deep generative models have been used to optimize or replace some of these operations, with very promising results. However, so far no systematic and independent study of the coding performance of these algorithms has been carried out. In this paper, for the first time, we conduct a subjective evaluation of two recent deep-learning-based image compression algorithms, comparing them to JPEG 2000 and to the recent BPG image codec based on HEVC Intra. We found that compression approaches based on deep auto-encoders can achieve coding performance higher than JPEG 2000, and sometimes as good as BPG. We also show experimentally that the PSNR metric is to be avoided when evaluating the visual quality of deep-learning-based methods, as their artifacts have different characteristics from those of DCT- or wavelet-based codecs. In particular, images compressed at low bitrates appear more natural than JPEG 2000-coded pictures, according to a no-reference naturalness measure. Our study indicates that deep generative models are likely to bring huge innovation into the video coding arena in the coming years.
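
    For reference, PSNR is nothing more than a log-scaled mean squared error, which is exactly why it is blind to the visual character of the artifacts; a minimal Python definition is sketched below.

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE). Two images with equal distortion energy get
    the same score regardless of how the artifacts actually look."""
    mse = np.mean((np.asarray(reference, dtype=np.float64) -
                   np.asarray(distorted, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)
```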