    No-Reference Image Quality Assessment in the Spatial Domain

    Contrast-distorted image quality assessment based on curvelet domain features

    Contrast is one of the most common forms of distortion. Most existing image quality assessment (IQA) algorithms focus on images distorted by compression, noise and blurring. The reduced-reference image quality metric for contrast-changed images (RIQMC) and the no-reference IQA metric for contrast-distorted images (NR-IQA-CDI) were created for such images. NR-IQA-CDI showed poor performance on two out of three image databases, with Pearson linear correlation coefficients (PLCC) of only 0.5739 and 0.7623 on the TID2013 and CSIQ databases, respectively. Spatial domain features are the basis of the NR-IQA-CDI architecture. In this paper, the spatial domain features are therefore complemented with curvelet domain features, in order to exploit the curvelet's potent multiscale and multidirectional properties for extracting information from images. Experiments based on K-fold cross-validation (K ranging from 2 to 10) and statistical tests show that NR-IQA-CDI based on curvelet domain features (NR-IQA-CDI-CvT) significantly surpasses the version based on the five spatial domain features.
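
    The evaluation protocol described above can be sketched in a few lines. The following is only a minimal illustration, not the authors' implementation: it assumes precomputed feature vectors (curvelet and/or spatial domain) and MOS labels, trains a generic SVR regressor, and reports the mean PLCC over K folds; all data and model choices here are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
from sklearn.svm import SVR

def kfold_plcc(features, mos, k=5, seed=0):
    """Evaluate an NR-IQA feature set with K-fold cross-validation and
    report the mean Pearson linear correlation coefficient (PLCC)."""
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    plccs = []
    for train_idx, test_idx in kf.split(features):
        model = SVR(kernel="rbf", C=10.0)      # generic feature-to-quality regressor
        model.fit(features[train_idx], mos[train_idx])
        pred = model.predict(features[test_idx])
        plccs.append(pearsonr(pred, mos[test_idx])[0])
    return float(np.mean(plccs))

# Placeholder data: in the paper each row would hold curvelet-domain
# (and/or spatial-domain) features of one contrast-distorted image.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))                 # hypothetical feature vectors
y = 3.0 * X[:, 0] + rng.normal(size=120)       # hypothetical MOS values
print(kfold_plcc(X, y, k=5))
```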

    No-Reference Light Field Image Quality Assessment Based on Micro-Lens Image

    Light field image quality assessment (LF-IQA) plays a significant role because it guides Light Field (LF) content acquisition, processing and application. The LF can be represented as a 4-D signal, and its quality depends on both angular consistency and spatial quality. However, few existing LF-IQA methods concentrate on the effects caused by angular inconsistency. In particular, no-reference methods lack effective utilization of 2-D angular information. In this paper, we focus on measuring 2-D angular consistency for LF-IQA. The Micro-Lens Image (MLI) refers to the angular domain of the LF image and simultaneously records the angular information in both horizontal and vertical directions. Since the MLI contains 2-D angular information, we propose a No-Reference Light Field image Quality assessment model based on the MLI (LF-QMLI). Specifically, we first utilize the Global Entropy Distribution (GED) and the Uniform Local Binary Pattern descriptor (ULBP) to extract features from the MLI, and then pool them together to measure angular consistency. In addition, the information entropy of the Sub-Aperture Image (SAI) is adopted to measure spatial quality. Extensive experimental results show that LF-QMLI achieves state-of-the-art performance.
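
    As a rough illustration of two ingredients named in the abstract, the sketch below computes a uniform LBP histogram of a (micro-lens) image and the information entropy of a (sub-aperture) image. It is not the authors' LF-QMLI pipeline: the GED feature, the pooling step, and the extraction of the MLI and SAI from the 4-D light field are omitted, and the inputs are placeholder arrays with values in [0, 1].

```python
import numpy as np
from skimage.feature import local_binary_pattern

def ulbp_histogram(gray, points=8, radius=1):
    """Uniform LBP descriptor: a normalized histogram of the 'uniform'
    LBP codes (which take the values 0 .. points + 1)."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of the grayscale histogram,
    assuming pixel values in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical inputs: 'mli' would be a micro-lens image and 'sai'
# a sub-aperture image extracted from the light field.
mli = np.random.rand(256, 256)
sai = np.random.rand(256, 256)
angular_features = ulbp_histogram(mli)   # stands in for the ULBP part of LF-QMLI
spatial_feature = image_entropy(sai)     # entropy of the SAI, per the abstract
```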

    Using the Natural Scenes’ Edges for Assessing Image Quality Blindly and Efficiently

    Two real blind/no-reference (NR) image quality assessment (IQA) algorithms in the spatial domain are developed. To measure image quality, the introduced approach gathers a set of novel features based on the edges of natural scenes, motivated by the enhanced sensitivity of the human eye to the information carried by the edges and contours of an image. The effectiveness of the proposed technique in quantifying image quality has been studied. The gathered features are formed using both Weibull distribution statistics and two sharpness functions to devise two separate NR IQA algorithms. The presented algorithms need neither training on databases of human judgments nor prior knowledge about the expected distortions, so they are real NR IQA algorithms. In contrast to most no-reference IQA models, the model used in this study is generic and is not tailored to any particular distortion type. When the proposed algorithms are tested on the LIVE database, experiments show that they correlate well with subjective opinion scores. They also show that the introduced methods significantly outperform the popular full-reference peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) methods, and that they outperform the recently developed NR natural image quality evaluator (NIQE) model.
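
    A hedged sketch of the kind of edge statistics referred to above: fit a Weibull distribution to the gradient-magnitude map of an image and combine its shape and scale parameters with a simple sharpness measure. The exact feature set and the two sharpness functions of the paper are not reproduced here; this only illustrates the general idea.

```python
import numpy as np
from scipy.stats import weibull_min

def edge_weibull_features(gray):
    """Fit a Weibull distribution to the gradient-magnitude map of an
    image and return its (shape, scale) parameters; natural-scene edge
    statistics of this kind underlie the features in the abstract."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy).ravel()
    mag = mag[mag > 1e-6]                     # drop flat regions before fitting
    shape, _, scale = weibull_min.fit(mag, floc=0)
    return shape, scale

def sharpness(gray):
    """A simple sharpness proxy: variance of second differences
    (one of many possible sharpness functions)."""
    lap = np.diff(gray, 2, axis=0)[:, :-2] + np.diff(gray, 2, axis=1)[:-2, :]
    return float(lap.var())

img = np.random.rand(128, 128)                # placeholder for a test image
print(edge_weibull_features(img), sharpness(img))
```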

    Performance Evaluation of Natural Scenes Features to create Opinion Unaware-Distortion Unaware IQA Metric

    There are many challenges facing the image quality assessment (IQA) task. The greatest one, addressed by this research, is the difficulty of quantifying and evaluating the quality of distorted images blindly, when the original (reference) image is unavailable either wholly or in part. Choosing appropriate features plays a significant role in measuring image quality. This study evaluates the efficiency of a set of features in quantifying image quality. The features have been gathered in the spatial domain using techniques based on both the rich edges and the sharper regions of pristine natural images. The performance of these features is examined by comparing them with features gathered from both reference and distorted images. These techniques are employed to build two IQA metrics. Results clearly show that the proposed pristine natural features compete with reference features in assessing distorted image quality. This demonstrates the validity of these features for creating robust metrics for evaluating distorted images. When the proposed metrics are tested on the LIVE database, experimental results show that extracting features by means of rich edges is better than extracting them using sharper regions, both for prediction monotonicity and for prediction accuracy. They also show that the average outcome of the two techniques not only competes with the popular full-reference peak signal-to-noise ratio (PSNR), the structural similarity (SSIM), and the recently developed NR natural image quality evaluator (NIQE) model, but also outperforms them.
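
    Prediction monotonicity and prediction accuracy are conventionally reported as the Spearman (SROCC) and Pearson (PLCC) correlations between metric scores and subjective MOS. A minimal sketch of that evaluation, with hypothetical scores and omitting the nonlinear regression that often precedes PLCC computation on LIVE:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_metric(predicted, mos):
    """Standard IQA evaluation: Spearman correlation (prediction
    monotonicity) and Pearson correlation (prediction accuracy)
    between metric scores and subjective MOS values."""
    srocc = spearmanr(predicted, mos)[0]
    plcc = pearsonr(predicted, mos)[0]
    return {"SROCC": float(srocc), "PLCC": float(plcc)}

# Hypothetical scores: 'predicted' would come from one of the proposed
# metrics on LIVE images, 'mos' from the subjective opinion scores.
predicted = np.array([72.1, 55.3, 80.4, 40.2, 66.0])
mos = np.array([70.0, 58.0, 83.0, 35.0, 64.5])
print(evaluate_metric(predicted, mos))
```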

    Nonparametric Quality Assessment of Natural Images

    In this article, the authors explore an alternative way to perform no-reference image quality assessment (NR-IQA). Following a feature extraction stage in which spatial domain statistics are utilized as features, a two-stage nonparametric NR-IQA framework is proposed. This approach requires no training phase, and it enables prediction of the image distortion type as well as of local regions' quality, which is not available in most current algorithms. Experimental results on IQA databases show that the proposed framework achieves high correlation with human perception of image quality and delivers competitive performance relative to state-of-the-art NR-IQA algorithms.
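
    The abstract does not spell out the nonparametric predictor, so the following is only a generic sketch of a training-free approach: quality is predicted as a distance-weighted average of the MOS values of the nearest feature vectors in an annotated corpus. The corpus, features, and weighting scheme here are hypothetical and are not the authors' algorithm.

```python
import numpy as np

def knn_quality(query_feat, corpus_feats, corpus_mos, k=5):
    """Nonparametric quality prediction: no trained model, just a
    distance-weighted average of the MOS values of the k nearest
    feature vectors in an annotated corpus."""
    d = np.linalg.norm(corpus_feats - query_feat, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)
    return float(np.sum(w * corpus_mos[idx]) / np.sum(w))

# Hypothetical corpus: rows are spatial-domain feature vectors of
# images with known MOS; the query is a new distorted image's features.
rng = np.random.default_rng(1)
corpus_feats = rng.normal(size=(200, 10))
corpus_mos = rng.uniform(0, 100, size=200)
query = rng.normal(size=10)
print(knn_quality(query, corpus_feats, corpus_mos))
```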

    FOVQA: Blind Foveated Video Quality Assessment

    Previous blind or No Reference (NR) video quality assessment (VQA) models largely rely on features drawn from natural scene statistics (NSS), but under the assumption that the image statistics are stationary in the spatial domain. Several of these models are quite successful on standard pictures. However, in Virtual Reality (VR) applications, foveated video compression is regaining attention, and the concept of space-variant quality assessment is of interest, given the availability of increasingly high spatial and temporal resolution contents and practical ways of measuring gaze direction. Distortions from foveated video compression increase with eccentricity, implying that the natural scene statistics are space-variant. Towards advancing the development of foveated compression / streaming algorithms, we have devised a no-reference (NR) foveated video quality assessment model, called FOVQA, which is based on new models of space-variant natural scene statistics (NSS) and natural video statistics (NVS). Specifically, we deploy a space-variant generalized Gaussian distribution (SV-GGD) model and a space-variant asynchronous generalized Gaussian distribution (SV-AGGD) model of mean subtracted contrast normalized (MSCN) coefficients and products of neighboring MSCN coefficients, respectively. We devise a foveated video quality predictor that extracts radial basis features, and other features that capture perceptually annoying rapid quality fall-offs. We find that FOVQA achieves state-of-the-art (SOTA) performance on the new 2D LIVE-FBT-FCVR database, as compared with other leading FIQA / VQA models. We have made our implementation of FOVQA available at: http://live.ece.utexas.edu/research/Quality/FOVQA.zip
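
    The space-variant NSS models build on MSCN coefficients and generalized Gaussian fits. The sketch below shows only the standard, space-invariant versions of those two steps (BRISQUE/NIQE-style): FOVQA additionally makes these statistics a function of eccentricity and adds the AGGD, radial-basis, and temporal features, none of which are reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(gray, sigma=7 / 6):
    """Mean Subtracted Contrast Normalized (MSCN) coefficients:
    (I - mu) / (sigma_local + 1), with Gaussian-weighted local stats."""
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    sd = np.sqrt(np.abs(var))
    return (gray - mu) / (sd + 1.0)

def fit_ggd(x):
    """Moment-matching estimate of the generalized Gaussian shape and
    scale for zero-mean data (the GGD model of MSCN coefficients)."""
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    rho = np.mean(x * x) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alpha = alphas[np.argmin(np.abs(r - rho))]
    scale = np.sqrt(np.mean(x * x) * gamma(1 / alpha) / gamma(3 / alpha))
    return float(alpha), float(scale)

# Placeholder frame; the space-variant SV-GGD model would make the
# fitted parameters depend on distance from the gaze point.
frame = np.random.rand(240, 320)
coeffs = mscn(frame)
print(fit_ggd(coeffs.ravel()))
```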

    No-reference quality assessment of H.264/AVC encoded video

    This paper proposes a no-reference quality assessment metric for digital video subject to H.264/Advanced Video Coding (AVC) encoding. The proposed metric comprises two main steps: coding error estimation and perceptual weighting of this error. Error estimates are computed in the transform domain, assuming that the discrete cosine transform (DCT) coefficients are corrupted by quantization noise. The DCT coefficient distributions are modeled using Cauchy or Laplace probability density functions, whose parameterization is performed using the quantized coefficient data and the quantization steps. Parameter estimation is based on a maximum-likelihood estimation method combined with linear prediction. The linear prediction scheme takes advantage of the correlation between parameter values at neighboring DCT spatial frequencies. As for the perceptual weighting module, it is based on a spatiotemporal contrast sensitivity function applied to the DCT domain that compensates for image-plane movement by considering the movements of the human eye, namely smooth pursuit, natural drift, and saccadic movements. The video-related inputs for the perceptual model are the motion vectors and the frame rate, which are also extracted from the encoded video. Subjective video quality assessment tests have been carried out in order to validate the results of the metric. A set of 11 video sequences, spanning a wide range of content, has been encoded at different bitrates, and the outcome was subject to quality evaluation. Results show that the quality scores computed by the proposed algorithm are well correlated with the mean opinion scores from the subjective assessment.
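
    A simplified sketch of the transform-domain error-estimation idea: estimate the Laplacian scale of the DCT coefficients at one spatial frequency by maximum likelihood, then integrate the quantization error over the quantizer bins. The paper's estimation from quantized data with linear prediction across frequencies, the Cauchy alternative, and the perceptual weighting are not reproduced here; the data and quantization step below are placeholders.

```python
import numpy as np

def laplace_ml_scale(coeffs):
    """Closed-form ML estimate of the Laplacian scale b for the model
    f(x) = exp(-|x| / b) / (2 b), given zero-mean DCT coefficients."""
    return float(np.mean(np.abs(coeffs)))

def expected_quantization_mse(b, q, n_bins=200, samples=64):
    """Expected squared error of a uniform mid-tread quantizer with
    step q applied to a Laplacian source with scale b, via numerical
    integration of the error over each quantization bin."""
    mse = 0.0
    for k in range(-n_bins, n_bins + 1):
        lo, hi = (k - 0.5) * q, (k + 0.5) * q
        dx = (hi - lo) / samples
        x = lo + dx * (np.arange(samples) + 0.5)   # midpoints within the bin
        pdf = np.exp(-np.abs(x) / b) / (2.0 * b)
        mse += np.sum((x - k * q) ** 2 * pdf) * dx
    return float(mse)

# Hypothetical data: DCT coefficients at one spatial frequency and the
# corresponding quantization step read from the H.264/AVC bitstream.
rng = np.random.default_rng(2)
coeffs = rng.laplace(0.0, 4.0, size=5000)
q_step = 2.5
b_hat = laplace_ml_scale(coeffs)
print(b_hat, expected_quantization_mse(b_hat, q_step))
```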