
    Improving mobile color 2D-barcode JPEG image readability using DCT coefficient distributions

    Two dimensional (2D) barcodes are becoming a pervasive interface for mobile devices, such as camera smartphones. Often, only monochrome 2D-barcodes are used because of their robustness in the uncontrolled operating environment of smartphones. Nonetheless, we are seeing an emerging use of color 2D-barcodes for camera smartphones. Most smartphones capture and store such 2D-barcode images in the baseline JPEG format. As a lossy compression technique, JPEG introduces a fair amount of error into the captured 2D-barcode images. In this paper, we analyzed the Discrete Cosine Transform (DCT) coefficient distributions of generalized 2D-barcodes using colored data cells with 4, 8, and 10 colors. Using these DCT distributions, we improved the JPEG compression of such mobile barcode images. By altering the JPEG compression parameters based on the DCT coefficient distribution of the barcode images, our improved compression scheme produces JPEG images with higher PSNR values than the baseline implementation. We have also applied our improved scheme to a 10-color 2D-barcode system and analyzed its performance against the default and alternative JPEG schemes. We found that our improved scheme provides a marked improvement in the successful decoding of the 10-color 2D-barcode system.
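
    As a rough illustration of the kind of analysis described above, the sketch below collects per-frequency DCT coefficients of a barcode image over 8x8 blocks so that their spread can be inspected frequency by frequency. It is a minimal sketch, not the authors' procedure: the function name, the single-channel input (a color barcode would be analysed per channel), and the use of scipy's DCT are assumptions made only for illustration.

        # Gather per-frequency DCT coefficient samples over 8x8 blocks.
        # Illustrative sketch only; the block size, level shift and orthonormal
        # DCT follow baseline JPEG conventions.
        import numpy as np
        from scipy.fft import dctn

        def dct_coefficient_samples(gray, block=8):
            """Return an (N, 8, 8) array holding the DCT coefficients of N blocks."""
            h, w = gray.shape
            h -= h % block
            w -= w % block
            out = []
            for i in range(0, h, block):
                for j in range(0, w, block):
                    out.append(dctn(gray[i:i + block, j:j + block].astype(float) - 128.0,
                                    norm='ortho'))
            coeffs = np.stack(out)
            # coeffs[:, u, v] holds every sample of frequency (u, v); its spread
            # indicates how coarsely that frequency could be quantized.
            return coeffs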

    Confocal microscopic image sequence compression using vector quantization and 3D pyramids

    The 3D pyramid compressor project at the University of Glasgow has developed a compressor for images obtained from confocal laser scanning microscopy (CLSM) devices. The proposed method, which combines an image pyramid coder with vector quantization techniques, performs well at compressing confocal volume image data. Experiments were conducted on several kinds of CLSM data, comparing the presented compressor with other well-known volume data compressors such as MPEG-1. The results showed that, at the same compression ratio, the 3D pyramid compressor gave higher subjective and objective quality in the reconstructed images and produced more acceptable results when image processing filters were applied to them.
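
    The vector-quantization half of such a scheme can be sketched with k-means as the codebook trainer, as below. This is only an illustration under assumptions: the 2x2x2 cube size, the codebook size, and the use of scipy.cluster.vq stand in for the actual coder, and the 3D pyramid decomposition is omitted entirely.

        # Toy vector quantization of a (Z, Y, X) confocal volume.
        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        def vq_compress_volume(vol, block=2, codebook_size=64):
            """Split the volume into cubes and replace each with a codeword index."""
            z, y, x = (d - d % block for d in vol.shape)
            cubes = (vol[:z, :y, :x]
                     .reshape(z // block, block, y // block, block, x // block, block)
                     .transpose(0, 2, 4, 1, 3, 5)
                     .reshape(-1, block ** 3)
                     .astype(float))
            codebook, _ = kmeans(cubes, codebook_size)   # train the codebook
            indices, _ = vq(cubes, codebook)             # nearest codeword per cube
            return indices, codebook                     # stored instead of raw voxels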

    Image compression with anisotropic diffusion

    Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been explored much in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in a compact way that reflects the B-tree structure of the triangulation. We supplement the coding step with a number of amendments such as error threshold adaptation, diffusion-based point selection, and specific quantisation strategies. Our experiments illustrate the usefulness of each of these modifications. They demonstrate that, for high compression rates, our PDE-based approach not only gives far better results than the widely used JPEG standard but can even come close to the quality of the highly optimised JPEG2000 codec.
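
    The interpolation step can be illustrated with a much simpler stand-in: homogeneous diffusion inpainting from the stored pixels. The sketch below is not the paper's edge-enhancing anisotropic diffusion (which uses a diffusion tensor and preserves edges far better); it only shows how a sparse pixel mask drives the reconstruction, with an assumed step size, iteration count and periodic boundary handling.

        # Reconstruct an image from sparse stored pixels by isotropic diffusion.
        import numpy as np

        def diffusion_interpolate(values, mask, iterations=2000, tau=0.2):
            """values: image with known pixels; mask: True where a pixel was stored."""
            u = np.where(mask, values, values[mask].mean()).astype(float)
            for _ in range(iterations):
                # 5-point Laplacian with periodic boundaries, for brevity.
                lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
                u += tau * lap               # explicit diffusion step (tau < 0.25 for stability)
                u[mask] = values[mask]       # re-impose the stored pixels
            return u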

    Compound document compression with model-based biased reconstruction

    The usefulness of electronic document delivery and archives rests in large part on advances in compression technology. Documents can contain complex layouts with different data types, such as text and images, having different statistical characteristics. To achieve better image quality, it is important to make use of such characteristics in compression. We exploit the transform coefficient distributions for text and images. We show that the scheme in baseline JPEG does not lead to the minimum mean-square error if we have models of these coefficients. Instead, we discuss an algorithm designed for this performance that involves first classifying the blocks, and then estimating the parameters to enable a biased reconstruction of the decompressed values. Simulation results are shown to validate the advantages of this method. © 2004 SPIE and IS&T.
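
    The bias referred to above can be made concrete with a Laplacian coefficient model: within a quantization bin, the minimum mean-square-error reconstruction is the conditional mean, which leans toward zero rather than the bin centre used by a baseline decoder. The sketch below computes that centroid; the rate parameter lam and the example bin are illustrative, and estimating lam per frequency and per block class is a separate step not shown.

        # Centroid (MMSE) reconstruction of a quantized coefficient under a
        # Laplacian model with density proportional to exp(-lam * |x|).
        import numpy as np

        def laplacian_bin_centroid(a, b, lam):
            """Conditional mean of the Laplacian restricted to 0 <= a < x <= b."""
            ea, eb = np.exp(-lam * a), np.exp(-lam * b)
            return ((a + 1.0 / lam) * ea - (b + 1.0 / lam) * eb) / (ea - eb)

        # Example: quantizer step 16, coefficient index 2 -> bin [24, 40), centre 32.
        print(laplacian_bin_centroid(24.0, 40.0, 0.1))   # about 30.0, pulled toward zero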

    JPEG compression of monochrome 2D-barcode images using DCT coefficient distributions

    Two dimensional (2D) barcodes are becoming a pervasive interface for mobile devices, such as camera phones. Often, only monochrome 2D-barcodes are used because of their robustness in the uncontrolled operating environment of camera phones. Most camera phones capture and store such 2D-barcode images in the baseline JPEG format. As a lossy compression technique, JPEG introduces a fair amount of error into the decoding of captured 2D-barcode images. In this paper, we introduce an improved JPEG compression scheme for such barcode images. By altering the JPEG compression parameters based on the DCT coefficient distribution of such barcode images, the improved compression scheme produces JPEG images with higher PSNR values than the baseline implementation. We have also applied our improved scheme to a real 2D-barcode system, the QR Code, and analyzed its performance against the baseline JPEG scheme.
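
    One way to act on a measured DCT coefficient distribution is to derive a custom quantization table and hand it to the encoder, roughly as sketched below. The inverse-spread rule and the parameter names are assumptions standing in for the paper's actual parameter changes; Pillow's qtables save option is used only to show where a custom table would plug in (consult Pillow's documentation for the expected table order).

        # Build a quantization table from per-frequency DCT spread and encode with it.
        import numpy as np
        from PIL import Image

        def quant_table_from_spread(coeff_std, coarsest=64.0):
            """coeff_std: 8x8 per-frequency standard deviations of DCT coefficients."""
            # Quantize active frequencies finely and near-empty ones coarsely.
            q = coarsest / (1.0 + coeff_std / coeff_std.max() * (coarsest - 1.0))
            return np.clip(np.rint(q), 1, 255).astype(int)

        def save_with_custom_table(in_path, out_path, qtable):
            img = Image.open(in_path).convert('L')
            img.save(out_path, 'JPEG', qtables=[qtable.flatten().tolist()])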

    Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression

    Lossless and near-lossless image compression is of paramount importance to professional users in many technical fields, such as medicine, remote sensing, precision engineering and scientific research. Despite rapidly growing research interest in learning-based image compression, however, no published method offers both lossless and near-lossless modes. In this paper, we propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression. In the lossless mode, the DLPR coding system first performs lossy compression and then lossless coding of the residuals. We solve the joint lossy and residual compression problem within a variational autoencoder (VAE) framework, and add autoregressive context modeling of the residuals to enhance lossless compression performance. In the near-lossless mode, we quantize the original residuals to satisfy a given ℓ∞ error bound, and propose a scalable near-lossless compression scheme that works for variable ℓ∞ bounds instead of training multiple networks. To expedite the DLPR coding, we increase the degree of algorithm parallelization with a novel design of the coding context, and accelerate the entropy coding with an adaptive residual interval. Experimental results demonstrate that the DLPR coding system achieves state-of-the-art lossless and near-lossless image compression performance with competitive coding speed. Comment: manuscript accepted by TPAMI; source code: https://github.com/BYchao100/Deep-Lossy-Plus-Residual-Codin
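
    The near-lossless guarantee in schemes like this typically comes from quantizing the residuals with step 2*tau + 1, which caps the per-pixel reconstruction error at tau. The sketch below shows only that residual quantizer under an assumed integer-residual setting; the learned lossy coder, the context model, and the entropy coder are not represented.

        # Near-lossless residual quantizer with a hard l-infinity bound of tau.
        import numpy as np

        def quantize_residual(residual, tau):
            """Map integer residuals to indices; reconstruction error never exceeds tau."""
            step = 2 * tau + 1
            return np.sign(residual) * ((np.abs(residual) + tau) // step)

        def dequantize_residual(indices, tau):
            return indices * (2 * tau + 1)

        # tau = 2: every reconstructed residual is within +/-2 of the original.
        r = np.array([-7, -3, 0, 1, 4, 9])
        print(dequantize_residual(quantize_residual(r, 2), 2))   # -> [-5 -5  0  0  5 10]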

    Decompression of JPEG Document Images: A Survey Paper

    JPEG decompression techniques are very useful for 3G/4G-based markets, handheld devices and infrastructures. Previously proposed decompression methods face many challenging issues, such as very high computational cost and heavy ringing and blocking artifacts that severely degrade image legibility. To improve the visual quality of JPEG document images at low bit rates and at low computational cost, we implement a decompression technique for JPEG document images. We first divide the JPEG document image into smooth and non-smooth blocks with the help of the Discrete Cosine Transform (DCT). The smooth blocks (background, uniform regions) are decoded in the transform domain by minimizing the Total Block Boundary Variation (TBBV). We propose to compute the block variation directly in the DCT domain at the super-pixel level; each super-pixel has size n*n and is assigned an average intensity value. The smooth blocks are then reconstructed using Newton's method, completing the smooth-block decompression. The non-smooth blocks of the document image contain text and graphics/line-drawing objects. A post-processing algorithm is introduced that takes into account the specificities of document content, and the inverse DCT is applied to represent the image in the spatial domain, completing the non-smooth-block decompression. Finally, we report experimental results and analyze them to show that our system outperforms existing methods and improves the quality of the decompressed JPEG document images.
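
    The smooth versus non-smooth split can be sketched directly on dequantized DCT blocks by thresholding their AC energy, as below. The threshold value and the simple energy criterion are assumptions standing in for the paper's classifier; smooth blocks would then go to the TBBV-based decoder and the remaining blocks to the text/graphics post-processing path.

        # Classify 8x8 DCT blocks as smooth (background) or non-smooth (text/graphics).
        import numpy as np

        def classify_blocks(dct_blocks, ac_energy_threshold=100.0):
            """dct_blocks: (N, 8, 8) array of dequantized DCT coefficients."""
            ac = dct_blocks.astype(float)
            ac[:, 0, 0] = 0.0                         # discard the DC term
            energy = np.sum(ac ** 2, axis=(1, 2))     # AC energy per block
            return energy < ac_energy_threshold       # True -> smooth block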