9 research outputs found

    AN EFFICIENT NO-REFERENCE METRIC FOR PERCEIVED BLUR

    This paper presents an efficient no-reference metric that quantifies the perceived image quality degradation induced by blur. Instead of explicitly simulating the human visual perception of blur, it calculates local edge blur in a cost-effective way and applies an adaptive neural network to empirically learn the highly nonlinear relationship between the local values and the overall image quality. Evaluation of the proposed metric on the LIVE blur database shows high prediction accuracy at a greatly reduced computational cost. To further validate the robustness of the blur metric against different image content, two additional quality perception experiments were conducted: one with highly textured natural images and one with images having an intentionally blurred background. Experimental results demonstrate that the proposed blur metric is promising for real-world applications in terms of both computational efficiency and practical reliability.
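
    The abstract above describes a two-stage pipeline: measure local edge blur cheaply, then map pooled blur statistics to a quality score with a trained neural network. The sketch below illustrates that general idea only; it is not the authors' implementation, and the gradient-based width proxy, the feature set, and the untrained toy network are all assumptions.

```python
# A sketch of the two-stage idea in the abstract, NOT the authors' method:
# (1) measure local edge blur from gradients, (2) map pooled blur statistics
# to a quality score with a small neural network. All names, the width proxy,
# and the untrained toy network weights are illustrative assumptions.
import numpy as np

def local_edge_widths(img, grad_frac=0.1):
    """Crude blur feature: inverse gradient magnitude at strong edges."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = mag > grad_frac * mag.max()        # keep strong edges only
    # Sharper edges have larger gradients, hence a smaller "width" proxy.
    return 1.0 / (mag[edges] + 1e-8)

def blur_features(img):
    w = local_edge_widths(img)
    return np.array([w.mean(), w.std(), np.percentile(w, 90)])

# The paper trains an adaptive neural network on subjective scores (LIVE);
# here a single-hidden-layer net with random, untrained weights stands in.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
w2 = rng.normal(size=8)

def predicted_quality(img):
    h = np.tanh(W1 @ blur_features(img) + b1)  # hidden-layer activations
    return float(w2 @ h)                       # scalar quality score

# Toy usage on a synthetic horizontal ramp image (values are placeholders).
img = np.outer(np.linspace(0, 255, 64), np.ones(64))
print(predicted_quality(img))
```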

    No-Reference JPEG image quality assessment based on Visual sensitivity

    NO-REFERENCE IMAGE QUALITY ASSESSMENT USING NEURAL NETWORKS

    No-reference image and video quality assessment: a classification and review of recent approaches

    No-reference methods based on spatio-temporal features for objective quality assessment of digital video

    The development of no-reference video quality assessment methods is an emerging topic in the literature, and a challenging one in the sense that the results of a proposed method should correlate as closely as possible with the judgments of the Human Visual System. This thesis presents three proposals for objective no-reference video quality assessment based on spatio-temporal features. The first approach uses a sigmoidal analytical model with a least-squares solution computed by the Levenberg-Marquardt method. The second and third approaches use a Single-Hidden-Layer Feedforward Neural Network trained with the Extreme Learning Machine algorithm. Furthermore, an extended version of the Extreme Learning Machine algorithm was developed that searches iteratively for the best parameters of the artificial neural network, according to a simple termination criterion, with the goal of increasing the correlation between objective and subjective scores. Experimental results using cross-validation techniques indicate that the proposed methods correlate well with Human Visual System scores. They are therefore suitable for monitoring video quality in broadcasting systems and over IP networks, and can be implemented in devices such as set-top boxes, ultrabooks, tablets, smartphones, and Wireless Display (WiDi) devices.
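
    For readers unfamiliar with the Extreme Learning Machine mentioned above, its defining training step is simple: the hidden-layer weights are drawn at random and kept fixed, and only the output weights are solved in closed form by least squares. The sketch below shows that step under assumed feature dimensions and toy data; the spatio-temporal feature extraction and the iterative extension described in the thesis are omitted.

```python
# A minimal sketch of the Extreme Learning Machine training step used by the
# second and third approaches: hidden weights are random and fixed, and only
# the output weights are solved in closed form. Shapes, data, and function
# names are assumptions for illustration.
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Fit a single-hidden-layer feedforward net the ELM way."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input->hidden weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: map 10-dimensional spatio-temporal features to quality scores.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10)                      # stand-in subjective scores
W, b, beta = elm_fit(X, y)
err = elm_predict(X, W, b, beta) - y
print("train RMSE:", np.sqrt((err ** 2).mean()))
```

    Because only the output weights are learned, training reduces to a single pseudo-inverse, which is what makes this family of methods attractive for lightweight quality monitoring on devices such as set-top boxes.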

    INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

    Measurement of visual quality is crucial for various image and video processing applications, and is widely applied in image acquisition, media transmission, video compression, image/video restoration, and related areas. The goal of image quality assessment (QA) is to develop a computable quality metric that properly evaluates image quality. The primary criterion is consistency with human judgment; computational complexity and resource limitations are also concerns in a successful QA design. Many methods have been proposed to date. Initially, quality measurements were taken directly from simple distance measures reflecting mathematical signal fidelity, such as mean squared error or the Minkowski distance. Later, QA was extended to color spaces and the Fourier domain, in which images are better represented. Some existing methods also consider the adaptive ability of human vision. Unfortunately, the Video Quality Experts Group found that none of the more sophisticated metrics showed any great advantage over other existing metrics. This thesis proposes a general approach to the QA problem by evaluating image information entropy. An information theoretic model of the human visual system is proposed, and an information theoretic solution is presented to derive the proper settings. The quality metric is validated on five subjective databases from different research labs, and the key requirements for a successful quality metric are investigated. In testing, the metric exhibits excellent consistency with human judgments and compatibility across the databases. In addition to the full-reference quality metric, blind (no-reference) quality assessment metrics are also proposed. To predict quality without a reference image, two concepts are introduced that quantitatively describe inter-scale dependency in a multi-resolution framework. Building on the success of the full-reference metric, several blind quality metrics are proposed for the five distortion types in the subjective databases. These blind metrics outperform all existing blind metrics and can also handle some distortions that had not previously been investigated.
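
    As a small illustration of the core quantity behind the approach above, the sketch below computes the Shannon entropy of a grayscale intensity histogram; the thesis's human visual system model and inter-scale dependency measures are not shown, and all names here are ours.

```python
# An illustration of the building block behind the entropy-based approach:
# Shannon entropy of the intensity histogram. The thesis's HVS model and
# inter-scale dependency measures are not reproduced here.
import numpy as np

def image_entropy(img, n_bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale intensity distribution."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                               # skip empty bins (0 log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# A flat image carries no information; uniform noise carries close to 8 bits.
flat = np.full((64, 64), 128)
noise = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(image_entropy(flat), image_entropy(noise))
```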

    Quality Assessment and Variance Reduction in Monte Carlo Rendering Algorithms

    Over the past few decades, much work has focused on physically based rendering, which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models by structuring them as complex high-dimensional integrals [Kaj86] that have no closed-form solution. Stochastic processes such as Monte Carlo methods can be structured to approximate the expectation of these integrals, producing algorithms that converge to the true rendering solution as the amount of computation increases.

    When a finite amount of computation is used to approximate the rendering solution, images will contain undesirable distortions in the form of noise from under-sampling in regions with complex light interactions. An important aspect of developing algorithms in this domain is having a means of accurately comparing the relative performance of different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data into a single scalar value that can serve as a representative measure of image quality and fidelity. These measures are largely developed on datasets containing natural images (photographs) coupled with synthetically distorted versions, and quality scores given by human observers under controlled viewing conditions. Inference using these measures therefore relies on whether the synthetic distortions used to develop them are representative of the natural distortions that arise in the domain being assessed.

    When we consider images generated through stochastic rendering processes, the structure of the visible distortions present in un-converged images is highly complex and varies spatially with lighting and scene composition. In this domain, the simple synthetic distortions commonly used to train and evaluate IQA measures are not representative of the complex natural distortions introduced by the rendering process. This raises the question of how robust IQA measures are when applied to physically based rendered images.

    In this thesis we summarize classical and recent work in physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large-scale distributed computing environments. With this framework we use high-performance computing to generate a dataset of Monte Carlo rendered images. From this we provide a study of the effectiveness of modern and classical IQA measures and their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No-Reference IQA problem, where we wish to assess the quality of a rendered image without knowing its true value.
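
    The thesis's starting point is that Monte Carlo estimators converge to the true rendering integral as the sample count grows, with under-sampling noise at finite budgets. The toy sketch below illustrates that principle on a one-dimensional integral; the integrand is an arbitrary stand-in, not the rendering equation from [Kaj86].

```python
# A toy illustration of the Monte Carlo principle the abstract relies on:
# the sample mean converges to the true integral, and the residual error is
# the "noise" seen in under-sampled renders. The 1-D integrand is an
# arbitrary stand-in for the rendering equation's integrand.
import numpy as np

def mc_estimate(f, n_samples, rng):
    """Estimate the integral of f over [0, 1] by uniform sampling."""
    x = rng.uniform(0.0, 1.0, n_samples)
    return f(x).mean()                         # E[f(U)] equals the integral

f = lambda x: np.sin(np.pi * x)                # true integral = 2 / pi
rng = np.random.default_rng(0)
for n in (16, 256, 4096):
    est = mc_estimate(f, n, rng)
    print(n, est, abs(est - 2.0 / np.pi))      # error shrinks ~ 1 / sqrt(n)
```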