218 research outputs found
A Reduced Reference Image Quality Measure Using Bessel K Forms Model for Tetrolet Coefficients
In this paper, we introduce a Reduced Reference Image Quality Assessment
(RRIQA) measure based on the natural image statistics approach. A new
adaptive transform called the "Tetrolet" transform is applied to both the
reference and distorted images. The Bessel K Forms (BKF) density is proposed
to model the marginal distribution of the tetrolet coefficients. Estimating
the parameters of this distribution allows the reference image to be
summarized with a small amount of side information. Five distortion measures
based on the BKF parameters of the original and processed images are used to
predict quality scores. A comparison between these measures is presented,
showing good consistency with human judgment.
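The moment-based parameter estimation the abstract relies on can be sketched as follows, using the BKF identities Var = p·c and kurtosis = 3 + 3/p. The function names and the single parameter-deviation measure are illustrative; the paper uses five distortion measures, none of which is specified here.

```python
import numpy as np

def bkf_moment_fit(coeffs):
    """Estimate BKF shape p and scale c by moment matching.

    For BKF(p, c): Var = p * c and kurtosis = 3 + 3/p, so both
    parameters follow directly from the sample moments.
    """
    x = np.asarray(coeffs, dtype=float)
    var = x.var()
    kurt = np.mean(x ** 4) / var ** 2      # sample kurtosis
    p = 3.0 / max(kurt - 3.0, 1e-6)        # heavier tails -> smaller p
    c = var / p
    return p, c

def bkf_param_distance(ref_coeffs, dist_coeffs):
    """Toy distortion measure: relative deviation of the BKF
    parameters between reference and distorted subband coefficients."""
    p_r, c_r = bkf_moment_fit(ref_coeffs)
    p_d, c_d = bkf_moment_fit(dist_coeffs)
    return abs(p_r - p_d) / p_r + abs(c_r - c_d) / c_r
```

Since only (p, c) per subband must be transmitted, the side information stays small, which is the point of the reduced-reference setting.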
Image quality assessment based on harmonics gain/loss information
We present an objective reduced-reference image quality assessment method based on harmonic gain/loss information, obtained through a discriminative analysis of the local harmonic strength (LHS). The LHS is computed from the gradient of the image, and its value reflects the degree of blockiness when it corresponds to an energy gain within the image. Furthermore, comparing the LHS values of an original, distortion-free image with those of a degraded, processed, or compressed version shows that the LHS can also indicate other types of degradation, such as blurriness, which corresponds to an energy loss. Our simulations show that a single metric based on this gain/loss information can rate the quality of images encoded by various encoders, such as DCT-based JPEG and wavelet-based JPEG 2000, as well as images subjected to other processing. We show that our method can overcome some limitations of the traditional PSNR.
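A minimal sketch of the gradient-spectrum idea behind the LHS, assuming an 8-pixel block grid as in JPEG; this illustrates the principle (blockiness concentrates gradient energy at the harmonics of the block period) rather than the authors' exact formulation:

```python
import numpy as np

def local_harmonic_strength(img, block=8):
    """Toy LHS: fraction of the gradient spectrum's energy lying at
    the harmonics of an assumed `block`-pixel grid.

    Blocking artifacts create gradient spikes every `block` pixels,
    which show up as peaks at multiples of n/block in the DFT.
    """
    img = np.asarray(img, dtype=float)
    gx = np.diff(img, axis=1)              # horizontal gradient
    spec = np.abs(np.fft.fft(gx, axis=1)).mean(axis=0)
    n = spec.size
    harmonics = [int(round(k * n / block)) for k in range(1, block // 2 + 1)]
    return spec[harmonics].sum() / spec.sum()
```

A higher value on the distorted image than on the reference signals an energy gain (blockiness); a lower value signals an energy loss (blurring).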
Sparse representation based stereoscopic image quality assessment accounting for perceptual cognitive process
In this paper, we propose a sparse representation based Reduced-Reference Image Quality Assessment (RR-IQA) index for stereoscopic images, built on two observations: 1) the human visual system (HVS) tries to infer meaningful information and reduce uncertainty from visual stimuli, and the entropy of primitives (EoP) describes this visual cognitive process well when perceiving natural images; 2) ocular dominance (also known as binocularity), which represents the interaction between the two eyes, can be quantified by sparse representation coefficients. Inspired by previous research, the perception and understanding of an image is considered an active inference process governed by the level of "surprise", which can be described by the EoP. Therefore, primitives learnt from natural images can be used to evaluate visual information by computing entropy. Meanwhile, to account for binocularity in stereoscopic image quality assessment, we propose a feasible way to characterize this binocular process from the sparse representation coefficients of each view. Experimental results on the LIVE 3D image databases and the MCL database demonstrate that the proposed algorithm achieves high consistency with subjective evaluation.
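The entropy-of-primitives idea can be sketched as follows. The paper learns primitives by sparse coding; to keep the sketch self-contained, a fixed orthonormal DCT basis stands in for the learned dictionary, and the entropy is taken over a histogram of patch coefficients (both substitutions are assumptions of this illustration):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, used as a stand-in dictionary."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def entropy_of_primitives(img, patch=8, bins=64):
    """Toy EoP: decompose non-overlapping patches over a dictionary,
    histogram the coefficients, and return the Shannon entropy.

    Higher entropy = more 'surprise' in the decomposition, i.e. more
    visual information to infer from the stimulus.
    """
    img = np.asarray(img, dtype=float)
    D = dct_matrix(patch)
    h, w = img.shape
    coeffs = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            coeffs.append((D @ block @ D.T).ravel())
    c = np.concatenate(coeffs)
    hist, _ = np.histogram(c, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

In the paper's stereoscopic setting, this quantity would be computed per view and combined with a binocularity term derived from the two views' sparse coefficients.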
A statistical reduced-reference method for color image quality assessment
Although color is a fundamental feature of human visual perception, it has
been largely unexplored in reduced-reference (RR) image quality assessment
(IQA) schemes. In this paper, we propose a natural scene statistics (NSS)
method that efficiently uses this information. It is based on the
statistical deviation between the steerable pyramid coefficients of the
reference color image and those of the degraded one. We propose and analyze
the multivariate generalized Gaussian distribution (MGGD) to model the
underlying statistics. To quantify the degradation, we develop and evaluate
two measures, based respectively on the geodesic distance between two MGGDs
and on the closed form of the Kullback-Leibler divergence (KLD). We
performed an extensive evaluation of both metrics in various color spaces
(RGB, HSV, CIELAB and YCbCr) using the TID 2008 benchmark and the FRTV
Phase I validation process. Experimental results demonstrate the
effectiveness of the proposed framework in achieving good consistency with
human visual perception. Furthermore, the best configuration is obtained
with the CIELAB color space combined with the KLD measure.
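The closed-form KLD term is easiest to see in the zero-mean Gaussian special case of the MGGD family (shape parameter β = 1), for which KL(N(0, Σ1) || N(0, Σ2)) = ½(tr(Σ2⁻¹Σ1) − k + ln det Σ2 − ln det Σ1). The general MGGD expression adds shape-dependent terms not shown in this sketch:

```python
import numpy as np

def kl_zero_mean_gaussian(S1, S2):
    """Closed-form KL divergence between two zero-mean multivariate
    Gaussians N(0, S1) and N(0, S2), the beta = 1 member of the MGGD
    family:

        KL = 0.5 * ( tr(S2^-1 S1) - k + ln det S2 - ln det S1 )
    """
    S1 = np.asarray(S1, dtype=float)
    S2 = np.asarray(S2, dtype=float)
    k = S1.shape[0]
    S2_inv = np.linalg.inv(S2)
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet2 = np.linalg.slogdet(S2)
    return 0.5 * (np.trace(S2_inv @ S1) - k + logdet2 - logdet1)
```

Here S1 and S2 would be the scatter matrices fitted to the reference and degraded subband coefficient vectors (e.g. the three color channels of one pyramid subband, so k = 3).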
Disentangling Image Distortions in Deep Feature Space
Previous literature suggests that perceptual similarity is an emergent
property shared across deep visual representations. Experiments conducted on
a dataset of human-judged image distortions have shown that deep features
outperform classic perceptual metrics. In this work we take a further step
toward a broader understanding of this property by analyzing the capability
of deep visual representations to intrinsically characterize different types
of image distortion. To this end, we first generate a number of
synthetically distorted images and then analyze the features extracted by
different layers of different Deep Neural Networks. We observe that a
dimension-reduced representation of the features extracted from a given
layer can efficiently separate distortion types in the feature space.
Moreover, each network layer exhibits a different ability to separate
different types of distortion, and this ability varies with the network
architecture. Finally, we evaluate the use of features taken from the layer
that best separates image distortions for: i) reduced-reference image
quality assessment, and ii) characterization of distortion types and
severity levels on both single- and multiple-distortion databases. The
results achieved on both tasks suggest that deep visual representations can
be employed, without supervision, to efficiently characterize various image
distortions.
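The dimension-reduction-plus-separability analysis can be sketched as follows. PCA stands in for whichever reduction the authors used, and a between/within scatter ratio stands in for their layer-selection criterion; the synthetic "features" in the usage below merely emulate two distortion clusters, since no network is run here:

```python
import numpy as np

def pca(X, n_components=2):
    """Project the rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def separability(Z, labels):
    """Ratio of between-class to within-class scatter in the reduced
    space; a higher value means distortion types separate better,
    which is how one could rank candidate network layers."""
    labels = np.asarray(labels)
    mu = Z.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]
        mc = Zc.mean(axis=0)
        between += len(Zc) * np.sum((mc - mu) ** 2)
        within += np.sum((Zc - mc) ** 2)
    return between / within
```

Running this per layer and keeping the layer with the highest score mirrors the paper's "layer that best separates distortions" step.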
On color image quality assessment using natural image statistics
Color distortion can significantly damage perceived visual quality; however,
most existing reduced-reference quality measures are designed for grayscale
images. In this paper, we consider a basic extension of well-known
image-statistics-based quality assessment measures to color images. To
evaluate the impact of color information on the measures' efficiency, two
color spaces are investigated: RGB and CIELAB. Results of an extensive
evaluation using the TID 2013 benchmark demonstrate that a significant
improvement can be achieved for a great number of distortion types when the
CIELAB color representation is used.
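The "basic extension" pattern (apply a grayscale NSS measure channel-wise, then pool) can be sketched as follows. Variance and kurtosis of mean-removed pixels stand in for the subband NSS features, and the pooling is a plain average; both choices are assumptions of this illustration, as is the absence of the RGB-to-CIELAB conversion step:

```python
import numpy as np

def channel_stats(img):
    """Per-channel (variance, kurtosis) of mean-removed pixel values,
    a crude stand-in for subband NSS features."""
    feats = []
    for ch in range(img.shape[2]):
        x = img[..., ch].astype(float).ravel()
        x = x - x.mean()
        var = x.var()
        feats.append((var, np.mean(x ** 4) / var ** 2))
    return np.asarray(feats)

def color_nss_distance(ref, dist):
    """Channel-wise NSS deviation: the grayscale measure applied to
    each color channel, then averaged over channels and features."""
    f_ref, f_dist = channel_stats(ref), channel_stats(dist)
    return float(np.mean(np.abs(f_ref - f_dist) / (np.abs(f_ref) + 1e-12)))
```

Swapping the input channels from RGB to a perceptually more uniform space such as CIELAB is exactly the comparison the abstract reports on.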
- …