50 research outputs found

    Predicted and perceived quality of bit-reduced gray-scale still images


    The Impact of Spatial Masking in Image Quality Meters

    Compression of digital images and video leads to block-based visible distortions such as blockiness. The PSNR metric does not correlate well with subjective quality because it does not take the human visual system into account. In this work, we study how the human visual system masks coding distortions and how this masking affects the accuracy of a quality meter. We focus on blockiness, the most common coding distortion in DCT-based JPEG images and intra-coded video, and study the role of spatial masking by applying different masking techniques to full-reference, reduced-reference and no-reference meters. Because the visibility of distortion is content dependent, the distortion needs to be masked according to the spatial activity of the image. The results show that the complexity of spatial masking may be reduced by using the reference information efficiently: for full-reference and reduced-reference meters, spatial masking matters little provided the blockiness detection is accurate, while for the no-reference meter spatial masking is required to compensate for the absence of reference information.
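    The abstract does not give the meters in code, but the two quantities it contrasts are easy to illustrate. Below is a minimal Python sketch, for illustration only and not the paper's meter, of PSNR and a crude no-reference blockiness indicator based on luminance jumps across block boundaries; the function names and the fixed block size of 8 are assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two gray-scale images (full reference)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def blockiness(img: np.ndarray, block: int = 8) -> float:
    """Mean absolute luminance jump across block boundaries: a crude
    no-reference indicator of DCT blocking artifacts (no spatial masking)."""
    img = img.astype(np.float64)
    cols = np.arange(block, img.shape[1], block)          # first column of each block
    rows = np.arange(block, img.shape[0], block)          # first row of each block
    v = np.abs(img[:, cols] - img[:, cols - 1]).mean()    # vertical block boundaries
    h = np.abs(img[rows, :] - img[rows - 1, :]).mean()    # horizontal block boundaries
    return (v + h) / 2.0
```

    A content-dependent meter of the kind the abstract describes would additionally weight these boundary differences by local spatial activity, so that jumps inside textured regions count for less than jumps in flat regions.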

    Visual experience of 3D TV


    An investigation into the requirements for an efficient image transmission system over an ATM network

    This thesis looks into the problems that arise in an image transmission system when transmitting over an ATM network. Two main areas were investigated: (i) an alternative coding technique to reduce the required bit rate; and (ii) concealment of errors due to cell loss, with emphasis on processing in the transform domain of DCT-based images. [Continues.]
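    The abstract does not detail the concealment scheme, but one classic transform-domain idea in this area is estimating a lost block's DCT coefficients from its received neighbours. The Python sketch below is a hedged illustration of that generic idea under an assumed data layout (a dictionary of 8x8 coefficient blocks indexed by block position); it is not the thesis's method.

```python
import numpy as np

def conceal_lost_block(dct_blocks: dict, row: int, col: int) -> np.ndarray:
    """Estimate the DCT coefficients of a lost 8x8 block by averaging the
    coefficient blocks of its available four-connected neighbours.
    `dct_blocks` maps (row, col) block indices to 8x8 coefficient arrays;
    blocks lost to ATM cell loss are simply absent from the dictionary."""
    neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    available = [dct_blocks[p] for p in neighbours if p in dct_blocks]
    if not available:                  # no usable neighbours: fall back to a flat block
        return np.zeros((8, 8))
    return np.mean(available, axis=0)  # coefficient-wise average
```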

    Video Quality Assessment


    No-reference image quality assessment using neural networks


    No-reference video quality assessment model based on artifact metrics for digital transmission applications

    Doctoral thesis (tese de doutorado)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2017. The main cause of reduced visual quality in digital imaging systems is the presence of unwanted degradations introduced during the processing and transmission steps. Measuring the quality of a video, however, implies a direct or indirect comparison between a test video and its reference. In most applications, psychophysical experiments with human subjects are the most reliable means of determining the quality of a video. Although more reliable, these methods are time consuming and difficult to incorporate into an automated quality-control service. As an alternative, objective metrics, i.e. algorithms, are generally used to estimate video quality automatically. To develop an objective metric, it is important to understand how the perceptual characteristics of a set of artifacts are related to their physical strengths and to the perceived annoyance. We therefore studied the characteristics of different types of artifacts commonly found in compressed videos (i.e. blockiness, blurriness, and packet loss) through six psychophysical experiments that independently measured the strength and overall annoyance of these artifact signals when presented alone or in combination. We analyzed the data from these experiments and proposed several models of overall annoyance based on combinations of the perceptual strengths of the individual artifact signals and their interactions.
    Inspired by the experimental results, we propose a no-reference video quality metric based on several features extracted from the videos (e.g. DCT information, cross-correlation of sub-sampled images, average absolute differences between block image pixels, intensity variation between neighbouring pixels, and visual attention). A non-linear support vector regression (SVR) model is used to combine all features into an overall quality estimate. Our metric performed considerably better than the tested artifact metrics and better than some full-reference metrics.
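    As a rough sketch of how such a feature-based no-reference metric is typically assembled (the feature matrix, hyper-parameters, and placeholder data below are assumptions for illustration, not the thesis's trained model), the extracted artifact and content features can be pooled with a support vector regressor:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: one row per video; columns hold features such as those the abstract lists
# (DCT statistics, blockiness, blurriness, packet-loss measures, attention weights).
# y: subjective quality scores (e.g. MOS). Both are placeholders here.
X = np.random.rand(120, 6)
y = np.random.rand(120) * 5.0

# Standardize the features, then fit a non-linear (RBF-kernel) support vector regression.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
predicted_quality = model.predict(X[:5])   # estimated quality for five videos
```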