5 research outputs found

    Spatiotemporal Video Quality Assessment Method via Multiple Feature Mappings

    Get PDF
    Advanced video quality assessment (VQA) methods aim to evaluate the perceptual quality of videos in many applications, but often at the cost of increased computational complexity. The difficulty stems from the diversity of distorted videos, which are of significant concern in the communication industry, and from the two-fold (spatial and temporal) nature of video distortion. The findings of this study indicate that the information in spatiotemporal slice (STS) images is useful for measuring video distortion. This paper focuses on developing a full-reference video quality assessment algorithm that integrates several features of spatiotemporal slices (STSs) of frames to form a high-performance video quality estimator. This work evaluates video quality on several VQA databases through the following steps: (1) we first arrange the reference and test video sequences into a spatiotemporal slice representation, compute a collection of spatiotemporal feature maps on each reference-test pair, and process these feature responses with the structural similarity (SSIM) index to form a local frame quality measure; (2) to further enhance the quality assessment, we combine the spatial feature maps with the spatiotemporal feature maps and propose the VQA model named multiple map similarity feature deviation (MMSFD-STS); (3) we apply a sequential pooling strategy to assemble the per-frame quality indices into a video quality score; (4) extensive evaluations on video quality databases show that the proposed VQA algorithm achieves better or competitive performance compared with other state-of-the-art methods.
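
    As an illustration of the slice representation described above, the sketch below extracts horizontal and vertical spatiotemporal slice (STS) images from a grayscale video volume and scores each reference/test slice pair with SSIM. It is a minimal example, not the authors' implementation: the slice positions, the use of scikit-image's structural_similarity, and the plain averaging of slice scores are all illustrative assumptions.

        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        def sts_quality(ref, dist, n_slices=8):
            """Score a distorted video against its reference using SSIM on
            spatiotemporal slice (STS) images.
            ref/dist: (T, H, W) float arrays with values in [0, 1]."""
            T, H, W = ref.shape
            scores = []
            # Horizontal STS: fix a row and track it over time -> a (T, W) image.
            for y in np.linspace(0, H - 1, n_slices, dtype=int):
                scores.append(ssim(ref[:, y, :], dist[:, y, :], data_range=1.0))
            # Vertical STS: fix a column over time -> a (T, H) image.
            for x in np.linspace(0, W - 1, n_slices, dtype=int):
                scores.append(ssim(ref[:, :, x], dist[:, :, x], data_range=1.0))
            # Plain mean pooling here; the paper's multi-map scheme is richer.
            return float(np.mean(scores))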

    Quality Assessment of In-the-Wild Videos

    Full text link
    Quality assessment of in-the-wild videos is a challenging problem because of the absence of reference videos and the presence of shooting distortions. Knowledge of the human visual system can help establish methods for objective quality assessment of in-the-wild videos. In this work, we show that two prominent effects of the human visual system, namely content dependency and temporal-memory effects, can be used for this purpose. We propose an objective no-reference video quality assessment method that integrates both effects into a deep neural network. For content dependency, we extract features from a pre-trained image classification neural network for its inherent content-aware property. For temporal-memory effects, long-term dependencies, especially temporal hysteresis, are integrated into the network with a gated recurrent unit and a subjectively-inspired temporal pooling layer. To validate the performance of our method, experiments are conducted on three publicly available in-the-wild video quality assessment databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that our proposed method outperforms five state-of-the-art methods by a large margin: specifically, 12.39%, 15.71%, 15.45%, and 18.09% overall performance improvements over the second-best method VBLIINDS in terms of SROCC, KROCC, PLCC, and RMSE, respectively. Moreover, an ablation study verifies the crucial role of both the content-aware features and the modeling of temporal-memory effects. The PyTorch implementation of our method is released at https://github.com/lidq92/VSFA.
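
    The temporal-memory idea above can be sketched with a simple hysteresis-style pooling of per-frame quality scores: viewers remember recent quality drops and react sluggishly to improvements. The sketch below is a simplified, hedged rendering of that behavior in plain NumPy; the window size, blend weight, and soft-min weighting are illustrative choices, not the paper's tuned layer.

        import numpy as np

        def hysteresis_pool(q, tau=12, gamma=0.5):
            """Hysteresis-style temporal pooling of per-frame quality scores q.
            Memory term: worst quality in a recent window (bad frames linger).
            Current term: soft-min weighted near-future scores (improvements
            register slowly). tau and gamma are illustrative values."""
            q = np.asarray(q, dtype=float)
            T = len(q)
            pooled = np.empty(T)
            for t in range(T):
                memory = q[max(0, t - tau):t + 1].min()      # worst recent quality
                future = q[t:min(T, t + tau)]
                w = np.exp(-future)                          # soft-min weights:
                current = (w * future).sum() / w.sum()       # low scores dominate
                pooled[t] = gamma * memory + (1 - gamma) * current
            return float(pooled.mean())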

    Video quality assessment using a saliency-based visual attention model

    Get PDF
    Video quality assessment plays a key role in video processing and communication applications. An ideal video quality metric should ensure high correlation between the predicted video distortion and the quality perceived by the Human Visual System. This work proposes the use of bottom-up, saliency-based visual attention models for video quality assessment. Three objective metrics are proposed. The first is a full-reference metric based on structural similarity. The second is a no-reference metric based on a sigmoidal model with a least-squares solution using the Levenberg-Marquardt algorithm and the extraction of spatial and temporal features. The third is analogous to the second but uses a blockiness feature to detect blocking distortions in the video. The bottom-up approach is used to obtain the saliency maps, which are extracted with a multiscale background model based on motion detection. The experimental results show an increase in quality-prediction efficiency for the proposed metrics when the saliency model is used, compared with the same metrics without it; notably, the proposed no-reference metrics outperformed full-reference metrics for some categories of videos.
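
    The core idea of weighting a quality map by visual saliency can be sketched in a few lines. The example below pools a per-pixel SSIM map using a saliency map as the weight; in the work described above the saliency map would come from the multiscale motion-based background model, whereas here it is simply an input, and the weighted-average pooling is an illustrative assumption rather than the authors' exact formulation.

        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        def saliency_weighted_ssim(ref_frame, dist_frame, saliency):
            """Pool a per-pixel SSIM map using a saliency map as weights.
            ref_frame/dist_frame/saliency: (H, W) float arrays in [0, 1]."""
            # full=True returns (mean score, per-pixel SSIM map).
            _, ssim_map = ssim(ref_frame, dist_frame, data_range=1.0, full=True)
            w = saliency + 1e-8        # avoid division by zero on empty maps
            return float((w * ssim_map).sum() / w.sum())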

    Perceptual Video Quality Assessment and Enhancement

    Get PDF
    With the rapid development of network visual communication technologies, digital video has become ubiquitous and indispensable in our everyday lives. Video acquisition, communication, and processing systems introduce various types of distortions, which may have a major impact on the video quality perceived by human observers. Effective and efficient objective video quality assessment (VQA) methods that can predict perceptual video quality are highly desirable in modern visual communication systems for performance evaluation, quality control, and resource allocation purposes. Moreover, perceptual VQA measures may also be employed to optimize a wide variety of video processing algorithms and systems for the best perceptual quality. This thesis explores several novel ideas in the areas of video quality assessment and enhancement. Firstly, by considering a video signal as a 3D volume image, we propose a 3D structural similarity (SSIM) based full-reference (FR) VQA approach, which also incorporates local information content and local distortion-based pooling methods. Secondly, a reduced-reference (RR) VQA scheme is developed by tracing the evolution of local phase structures over time in the complex wavelet domain. Furthermore, we propose a quality-aware video system that combines spatial and temporal quality measures with a robust video watermarking technique, such that RR-VQA can be performed without transmitting RR features via an ancillary lossless channel. Finally, a novel strategy for enhancing video denoising algorithms, namely poly-view fusion, is developed by examining a video sequence as a 3D volume image from multiple (front, side, top) views. This leads to significant and consistent gains in terms of both peak signal-to-noise ratio (PSNR) and SSIM performance, especially at high noise levels.
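
    The poly-view fusion strategy mentioned at the end of this abstract lends itself to a compact sketch: run any 2D denoiser over the slices of the video volume along each of the three axes (front, side, and top views) and fuse the three denoised volumes. The averaging fusion and the choice of scikit-image's wavelet denoiser below are illustrative assumptions; the thesis may weight or combine the views differently.

        import numpy as np
        from skimage.restoration import denoise_wavelet

        def poly_view_denoise(video):
            """Poly-view fusion: denoise a (T, H, W) volume in [0, 1]
            slice-by-slice from the front (t fixed), side and top views,
            then average the three denoised volumes."""
            views = []
            for axis in range(3):
                vol = np.moveaxis(video, axis, 0)                  # slice along this axis
                den = np.stack([denoise_wavelet(s) for s in vol])  # 2D denoiser per slice
                views.append(np.moveaxis(den, 0, axis))            # restore orientation
            return np.mean(views, axis=0)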