
    A document image model and estimation algorithm for optimized JPEG decompression

    The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm designed to produce substantially higher quality images from the same standard JPEG encodings. The method incorporates a document image model into the decoding process that accounts for the wide variety of content in modern complex color documents. It works by first segmenting the JPEG-encoded document into regions corresponding to background, text, and picture content. The text and background regions are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model that accounts for the spatial characteristics of text and graphics. Our experimental comparisons to baseline JPEG decoding, as well as to three other decoding schemes, demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.
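
    As a rough illustration of the pipeline this abstract describes, the Python sketch below classifies dequantized 8x8 DCT blocks and applies a crude two-level text prior. The function names, thresholds, histogram test, and shrinkage step are illustrative assumptions, not the authors' actual segmentation or MAP model.

    # Hypothetical sketch of the segmentation stage: each 8x8 block of
    # dequantized DCT coefficients is classified as background, text, or
    # picture, and text blocks get a toy MAP-style reconstruction that
    # pulls pixels toward two dominant intensity levels.
    import numpy as np
    from scipy.fft import idctn

    def classify_block(dct_block):
        """Classify one 8x8 block of dequantized DCT coefficients."""
        ac_energy = np.sum(dct_block**2) - dct_block[0, 0]**2
        if ac_energy < 50.0:            # nearly flat -> background (assumed threshold)
            return "background"
        pixels = idctn(dct_block, norm="ortho")
        # Text blocks are roughly bimodal: high contrast, few mid-tones.
        hist, _ = np.histogram(pixels, bins=8)
        if hist[2:6].sum() < 0.2 * pixels.size:
            return "text"
        return "picture"

    def decode_text_block(dct_block, n_iter=5):
        """Toy stand-in for the MAP text reconstruction: shrink each pixel
        toward the nearer of the two extreme levels (a crude two-level
        text prior), sharpening edges blurred by quantization."""
        pixels = idctn(dct_block, norm="ortho")
        lo, hi = pixels.min(), pixels.max()
        for _ in range(n_iter):
            mid = 0.5 * (lo + hi)
            pixels = np.where(pixels < mid,
                              0.7 * pixels + 0.3 * lo,
                              0.7 * pixels + 0.3 * hi)
        return pixels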

    Decompression of JPEG Document Images: A Survey Paper

    JPEG decompression techniques are very useful for 3G/4G-based markets, handheld devices, and related infrastructure. Previously proposed decompression methods face challenging issues, such as very high computational cost and heavy ringing and blocking artifacts that can render an image unreadable. To improve the visual quality of JPEG document images at low bit rate and at low computational cost, we implement a decompression technique for JPEG document images. We first divide the JPEG document image into smooth and non-smooth blocks with the help of the Discrete Cosine Transform (DCT). The smooth blocks (background, uniform regions) are decoded in the transform domain by minimizing the Total Block Boundary Variation (TBBV). We propose to compute the block variation directly in the DCT domain at the super-pixel level: each super-pixel has size n*n and is assigned an average intensity value. The smooth blocks are then reconstructed using Newton's method. The non-smooth blocks of the document image contain text and graphics/line-drawing objects; for these, we introduce a post-processing algorithm that takes into consideration the specificities of document content, and the inverse DCT is applied to return the image to the spatial domain. Finally, we design experiments whose results show that our system outperforms existing approaches and demonstrate the quality improvement of the decompressed JPEG document images.
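
    To make the transform-domain idea concrete, here is a minimal Python sketch that adjusts only the DC coefficient of each smooth block, taking a Newton step on a quadratic boundary-variation term; for a quadratic, one Newton step per block lands exactly on the neighbour average. The function name, the Jacobi-style sweep, and the wrap-around border handling are simplifying assumptions, not the paper's exact TBBV formulation.

    # Illustrative smooth-block pass: minimize sum of squared differences
    # between neighbouring block means, clipped to each coefficient's
    # quantizer cell so the result stays consistent with the bitstream.
    import numpy as np

    def smooth_dc_pass(dc, smooth_mask, q_step, n_iter=10):
        """dc: 2-D array of DC coefficients (one per 8x8 block).
        smooth_mask: boolean array marking smooth blocks.
        q_step: DC quantization step (defines the feasible interval)."""
        lo, hi = dc - q_step / 2.0, dc + q_step / 2.0   # quantizer cells
        x = dc.copy()
        for _ in range(n_iter):
            # For the quadratic term sum_(i,j adjacent) (x_i - x_j)^2 the
            # Newton update of x_i is the mean of its 4 neighbours.
            # np.roll wraps at the image border, a simplification.
            nbr = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                   np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
            x = np.where(smooth_mask, np.clip(nbr, lo, hi), x)
        return x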

    Preserving low-quality video through deep learning

    Lossy video stream compression is performed to reduce bandwidth and storage requirements, and image compression is a need that arises in many circumstances. It is often the case that older archives are stored at low resolution and with a compression rate suited to the technology available at the time the video was created. Unfortunately, lossy compression algorithms cause artifacts. Such artifacts usually damage higher-frequency details and add noise or novel image patterns. There are several issues with this phenomenon: low-quality images are less pleasant to viewers, and object detection algorithms may have their performance reduced. Given a perturbed version of an image, we therefore aim to remove such artifacts and recover the original. To do so, one must reverse the compression process through a complicated non-linear image transformation. We propose a deep neural network able to improve image quality. We show that this model can be optimized either traditionally, by directly optimizing an image similarity loss (SSIM), or using a generative adversarial approach (GAN). Our restored images have more photorealistic details than those of traditional image enhancement networks. Our training procedure, based on sub-patches, is novel, and we propose a novel testing protocol to evaluate restored images quantitatively. Unlike previously proposed approaches, we are able to remove artifacts generated at any quality level by inferring the image quality directly from the data. Human evaluation and quantitative experiments in object detection show that our GAN generates images with finer, consistent details, and that these details make a difference both for machines and for humans.
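
    A minimal PyTorch sketch of the kind of restoration network described here: a small residual CNN that predicts a correction to the compressed input. The architecture, sizes, and training snippet are illustrative assumptions rather than the authors' actual model, and an L1 loss stands in for the SSIM and adversarial losses to keep the example self-contained.

    # Toy artifact-removal network: the body predicts a residual that is
    # added back to the compressed input (common practice in restoration
    # nets, since the output is close to the input).
    import torch
    import torch.nn as nn

    class ArtifactRemover(nn.Module):
        def __init__(self, channels=64, n_blocks=4):
            super().__init__()
            layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(n_blocks):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, 3, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return x + self.body(x)   # residual: predict the artifact correction

    model = ArtifactRemover()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    compressed = torch.rand(8, 3, 64, 64)   # stand-in for compressed sub-patches
    clean = torch.rand(8, 3, 64, 64)        # corresponding uncompressed patches
    loss = nn.functional.l1_loss(model(compressed), clean)
    loss.backward()
    opt.step()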
