A Generic Psychovisual Error Threshold for the Quantization Table Generation on JPEG Image Compression
The quantization process is a central part of image compression, controlling the visual quality and bit rate of the output image. The JPEG quantization tables were obtained from a series of psychovisual experiments that determined a visual threshold. The visual threshold describes the intensity levels of a colour image that the human visual system can perceive. This paper investigates a psychovisual error threshold at each DCT frequency on grayscale images. The DCT coefficients are incremented one by one for each frequency order, and their contribution to the reconstruction error serves as a primitive psychovisual error measure. By setting a threshold on this psychovisual error, a new quantization table can be generated. Experimental results show that the new quantization table derived from the psychovisual error threshold for the DCT basis functions gives better image quality at a lower average Huffman code length than standard JPEG image compression.
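The per-frequency thresholding idea described above can be sketched as follows. The "primitive psychovisual error" is approximated here by the mean absolute pixel error caused by quantizing a single DCT basis coefficient; this proxy, and the threshold value, are assumptions for illustration, not the paper's exact measure:

```python
import numpy as np

def dct2_basis(u, v, n=8):
    """Return the (u, v) 2-D DCT-II basis function as an n x n block."""
    cu = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
    cv = np.sqrt(1.0 / n) if v == 0 else np.sqrt(2.0 / n)
    x = np.arange(n)
    col = np.cos((2 * x + 1) * u * np.pi / (2 * n))
    row = np.cos((2 * x + 1) * v * np.pi / (2 * n))
    return cu * cv * np.outer(col, row)

def quant_table_from_threshold(threshold, n=8):
    """For each frequency (u, v), grow the quantization step until the
    worst-case reconstruction error of that lone basis coefficient
    exceeds `threshold` (a stand-in for the psychovisual error)."""
    table = np.zeros((n, n), dtype=int)
    for u in range(n):
        for v in range(n):
            basis = dct2_basis(u, v, n)
            q = 1
            # Quantizing with step q perturbs the coefficient by at most
            # q/2, giving a mean absolute pixel error of (q/2)*mean|basis|.
            while (q / 2) * np.abs(basis).mean() < threshold:
                q += 1
            table[u, v] = q
    return table
```

Because the high-frequency basis functions have smaller mean amplitude, the sketch reproduces the familiar JPEG pattern of coarser steps at higher frequencies.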
Simulated Annealing for JPEG Quantization
JPEG is one of the most widely used image formats, but in some ways remains
surprisingly unoptimized, perhaps because some natural optimizations would go
outside the standard that defines JPEG. We show how to improve JPEG compression
in a standard-compliant, backward-compatible manner, by finding improved
default quantization tables. We describe a simulated annealing technique that
has allowed us to find several quantization tables that perform better than the
industry standard, in terms of both compressed size and image fidelity.
Specifically, we derive tables that reduce the FSIM error by over 10% while
improving compression by over 20% at quality level 95 in our tests; we also
provide similar results for other quality levels. While we acknowledge our
approach can in some images lead to visible artifacts under large
magnification, we believe use of these quantization tables, or additional
tables that could be found using our methodology, would significantly reduce
JPEG file sizes with improved overall image quality.

Comment: Appendix not included in arXiv version due to size restrictions. For full paper go to: http://www.eecs.harvard.edu/~michaelm/SimAnneal/PAPER/simulated-annealing-jpeg.pd
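The annealing loop over quantization tables can be sketched as below. The cost function here is a toy rate-distortion proxy and the weights and cooling schedule are illustrative assumptions; the paper's real objective scores each candidate table by actually encoding images and measuring compressed size and FSIM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-frequency distortion weights: errors in low frequencies are
# penalized more, so the annealer should keep low-frequency steps fine.
FREQ_WEIGHT = 1.0 / (np.add.outer(np.arange(8), np.arange(8)) + 1.0)

def cost(table):
    """Toy rate-distortion proxy standing in for the paper's objective."""
    rate = np.sum(1.0 / table)                 # finer steps -> more bits
    distortion = np.sum(FREQ_WEIGHT * table)   # coarser steps -> more error
    return rate + distortion

def anneal(steps=5000, t0=1.0, t_end=1e-3):
    """Simulated annealing over 8x8 quantization tables."""
    table = np.full((8, 8), 16.0)
    cur_cost = cost(table)
    best, best_cost = table.copy(), cur_cost
    for i in range(steps):
        t = t0 * (t_end / t0) ** (i / steps)   # geometric cooling
        cand = table.copy()
        u, v = rng.integers(0, 8, size=2)      # perturb one random entry
        cand[u, v] = np.clip(cand[u, v] + rng.choice([-1.0, 1.0]), 1, 255)
        c = cost(cand)
        # Accept improvements always; accept regressions with a
        # temperature-dependent probability (Metropolis criterion).
        if c < cur_cost or rng.random() < np.exp((cur_cost - c) / t):
            table, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
    return best, best_cost
```

The accept-worse-moves step is what lets the search escape local minima; as the temperature decays, the loop degenerates into greedy hill climbing.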
Visibility metrics and their applications in visually lossless image compression
Visibility metrics are image metrics that predict the probability that a human observer can detect differences between a pair of images. These metrics can provide localized information in the form of visibility maps, in which each value represents a probability of detection. An important application of the visibility metric is visually lossless image compression that aims at compressing a given image to the lowest fraction of bit per pixel while keeping the compression artifacts invisible at the same time.
In previous works, most visibility metrics were built on greatly simplified assumptions and mathematical models of the human visual system. This approach generally fits experimental data measured with simple stimuli, such as Gabor patches, but it cannot predict complex non-linear effects, such as contrast masking in natural images, particularly well. To predict the visibility of image differences accurately, we collected the largest visibility dataset under fixed viewing conditions for calibrating existing visibility metrics, and we proposed a deep neural network-based visibility metric. Our experiments demonstrated that the deep neural network-based metric significantly outperformed existing visibility metrics.
However, the deep neural network-based visibility metric cannot predict visibility under varying viewing conditions, such as display brightness and viewing distance, which have a great impact on the visibility of distortions. To extend the metric to varying viewing conditions, we collected the largest visibility dataset under varying display brightness and viewing distances. We proposed incorporating white-box modules, namely luminance masking and viewing distance adaptation, into the black-box deep neural network, and found that this combination generalizes our proposed visibility metric to varying viewing conditions.
To demonstrate the application of our proposed deep neural network-based visibility metric to visually lossless image compression, we collected a visually lossless image compression dataset under fixed viewing conditions and significantly improved the metric's accuracy in predicting the visually lossless compression threshold by pre-training it with a synthetic dataset generated by the state-of-the-art white-box visibility metric, HDR-VDP [Mantiuk et al. 2011]. In a large-scale study of 1000 images, we found that with our improved visibility metric, we can save around 60% to 70% of bits for visually lossless image compression compared to the default visually lossless quality level of 90.
Because predicting image visibility and predicting image quality are closely related research topics, we also proposed a trained perceptually uniform transform for high dynamic range image and video quality assessment, obtained by training a perceptual encoding function on a set of subjective quality assessment datasets. We showed that combining the trained perceptual encoding function with standard dynamic range image quality metrics, such as peak signal-to-noise ratio (PSNR), achieves better performance than the untrained version.
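The idea of prepending a perceptual encoding to PSNR can be sketched as follows. The power-law encoding, its exponent, and the 10000 cd/m^2 peak are fixed stand-ins assumed for illustration; the abstract's point is that this function is learned from subjective data rather than fixed:

```python
import numpy as np

def pu_encode(luminance, gamma=0.45):
    """Stand-in perceptual encoding: a power law mapping absolute
    luminance (cd/m^2) to roughly perceptually uniform code values.
    The work above *trains* this function; here it is a fixed assumption."""
    return 255.0 * (luminance / 10000.0) ** gamma

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def pu_psnr(ref, test):
    """PSNR computed on perceptually encoded HDR luminances, so equal
    absolute luminance errors weigh more in dark regions, where they
    are more visible."""
    return psnr(pu_encode(ref), pu_encode(test))
```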
High-Perceptual Quality JPEG Decoding via Posterior Sampling
JPEG is arguably the most popular image coding format, achieving high
compression ratios via lossy quantization that may create visible
degradation artifacts. Numerous attempts to remove these artifacts have been conceived over
the years, and common to most of these is the use of deterministic
post-processing algorithms that optimize some distortion measure (e.g., PSNR,
SSIM). In this paper we propose a different paradigm for JPEG artifact
correction: Our method is stochastic, and the objective we target is high
perceptual quality -- striving to obtain sharp, detailed and visually pleasing
reconstructed images, while being consistent with the compressed input. These
goals are achieved by training a stochastic conditional generator (conditioned
on the compressed input), accompanied by a theoretically well-founded loss
term, resulting in a sampler from the posterior distribution. Our solution
offers a diverse set of plausible and fast reconstructions for a given input
with perfect consistency. We demonstrate our scheme's unique properties and its
superiority to a variety of alternative methods on the FFHQ and ImageNet
datasets.
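The "perfect consistency" constraint mentioned above has a compact form: the JPEG file pins each DCT coefficient to a quantization bin, and any decoder output whose coefficients stay inside those bins re-encodes to exactly the same data. A minimal sketch, with uniform sampling standing in for the learned posterior the paper trains:

```python
import numpy as np

def sample_consistent(qcoef, table, rng):
    """Draw DCT coefficients uniformly inside each quantization bin.
    Any such sample is 'perfectly consistent': re-quantizing it
    reproduces the stored JPEG coefficients exactly. (The paper samples
    from a learned posterior over this set, not the uniform used here.)"""
    low = (qcoef - 0.5) * table    # lower edge of each bin
    high = (qcoef + 0.5) * table   # upper edge of each bin
    return rng.uniform(low, high)
```

Repeated calls yield the "diverse set of plausible reconstructions" the abstract refers to, since every draw decodes to a different image that nevertheless matches the compressed input.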
A robust image watermarking technique based on quantization noise visibility thresholds
A tremendous amount of digital multimedia data is broadcast daily over the internet. Since digital data can be duplicated quickly and easily, intellectual property protection techniques have become important; they first appeared about fifty years ago (see [I.J. Cox, M.L. Miller, The First 50 Years of Electronic Watermarking, EURASIP J. Appl. Signal Process. 2 (2002) 126-132] for an extended review). Digital watermarking was born. Since its inception, many watermarking techniques have appeared, in all possible transform domains. However, an important gap in the watermarking literature concerns human visual system models. Several human visual system (HVS) model based watermarking techniques were designed in the late 1990s. Due to weak robustness results, especially against geometrical distortions, interest in such studies declined. In this paper, we take advantage of recent advances in HVS models and watermarking techniques to revisit this issue. We demonstrate that HVS-based watermarking algorithms can resist many attacks, including geometrical distortions. The perceptual model used here takes into account advanced features of the HVS identified in psychophysics experiments conducted in our laboratory. This model has been successfully applied in quality assessment and image coding schemes [M. Carnec, P. Le Callet, D. Barba, An image quality assessment method based on perception of structural information, IEEE Internat. Conf. Image Process. 3 (2003) 185-188; N. Bekkat, A. Saadane, D. Barba, Masking effects in the quality assessment of coded images, in: SPIE Human Vision and Electronic Imaging V, 3959 (2000) 211-219]. In this paper the human visual system model is used to create a perceptual mask that optimizes the watermark strength. The resulting watermark satisfies both invisibility and robustness requirements.
Unlike most watermarking schemes using advanced perceptual masks, and in order to best counter the de-synchronization problem induced by geometrical distortions, we propose a Fourier-domain embedding and detection technique that optimizes the amplitude of the watermark. Finally, the robustness of the resulting scheme is assessed against all attacks provided by the Stirmark benchmark. This work proposes a new digital rights management technique, built on an advanced human visual system model, that is able to resist various kinds of attacks, including many geometrical distortions.
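A stripped-down version of Fourier-magnitude embedding can be sketched as follows. The constant `strength` stands in for the paper's HVS-derived perceptual mask (which would vary it per frequency), and detection here is non-blind (it uses the original image); both are assumptions for illustration:

```python
import numpy as np

def ring_mask(shape, radius, atol=1.0):
    """Boolean mask selecting a mid-frequency ring in the shifted spectrum."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    return np.isclose(np.hypot(yy - h // 2, xx - w // 2), radius, atol=atol)

def embed(image, strength=2.0, radius=16, seed=7):
    """Additively embed a pseudo-random bipolar mark into mid-frequency
    Fourier magnitudes; phases are left untouched."""
    f = np.fft.fftshift(np.fft.fft2(image))
    ring = ring_mask(image.shape, radius)
    mark = np.random.default_rng(seed).choice([-1.0, 1.0], ring.sum()) * strength
    mag, phase = np.abs(f), np.angle(f)
    mag[ring] += mark
    marked = np.real(np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase))))
    return marked, mark, ring

def detect(marked, original, mark, ring):
    """Non-blind detection: correlate the magnitude difference on the
    embedding ring with the known mark."""
    fm = np.abs(np.fft.fftshift(np.fft.fft2(marked)))
    fo = np.abs(np.fft.fftshift(np.fft.fft2(original)))
    return float(np.corrcoef(fm[ring] - fo[ring], mark)[0, 1])
```

Embedding in Fourier magnitudes is what buys robustness to translation (magnitudes are shift-invariant), which is one reason the paper works in this domain to fight de-synchronization.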
Scalable image quality assessment with 2D mel-cepstrum and machine learning approach
Measurement of image quality is of fundamental importance to numerous image and video processing applications. Objective image quality assessment (IQA) is a two-stage process comprising (a) extraction of important information while discarding the redundant, and (b) pooling the detected features using appropriate weights. Both stages are difficult to tackle due to the complex nature of the human visual system (HVS). In this paper, we first investigate image features based on the two-dimensional (2D) mel-cepstrum for the purpose of IQA. These features are shown to be effective because they represent structural information, which is crucial for IQA. Moreover, they are also suited to a reduced-reference scenario, where only partial reference image information is used for quality assessment. We address the second stage by exploiting machine learning. In our view, the well-established methodology of machine learning/pattern recognition has not been adequately applied to IQA so far; we believe it is an effective tool for feature pooling, since the required weights/parameters can be determined in a more convincing way by training on ground truth obtained from subjective scores. This helps overcome the limitations of existing pooling methods, which tend to be overly simplistic and lack theoretical justification. We therefore propose a new metric by formulating IQA as a pattern recognition problem. Extensive experiments on six publicly available image databases (3211 images in total, with diverse distortions) and one video database (78 video sequences) demonstrate the effectiveness and efficiency of the proposed metric in comparison with seven relevant existing metrics. (C) 2011 Elsevier Ltd. All rights reserved.
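The reduced-reference use of cepstral features can be sketched as below. This sketch uses a plain (not mel-warped) 2D cepstrum and a Euclidean distance in place of the paper's trained machine-learning pooling, both simplifying assumptions:

```python
import numpy as np

def cepstral_features(img, k=16, eps=1e-8):
    """Low-order 2-D cepstral coefficients: the inverse transform of the
    log magnitude spectrum, keeping only the k x k lowest quefrencies
    (a plain, not mel-warped, simplification of 2-D mel-cepstrum)."""
    spec = np.abs(np.fft.fft2(img)) + eps      # eps avoids log(0)
    ceps = np.real(np.fft.ifft2(np.log(spec)))
    return ceps[:k, :k].ravel()

def rr_quality_score(ref, test, k=16):
    """Reduced-reference distortion score: only the compact feature
    vector of the reference is needed, not the full reference image.
    Trained feature pooling is replaced by a plain Euclidean distance."""
    fr, ft = cepstral_features(ref, k), cepstral_features(test, k)
    return float(np.linalg.norm(fr - ft))
```

The reduced-reference property comes from the small feature vector: the sender transmits only `k*k` numbers alongside the image, which suffices to score the received version.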