
    VIQID: a no-reference bit stream-based visual quality impairment detector

    In order to ensure adequate quality for end users at all times, video service providers are increasingly interested in monitoring their video streams. Objective video quality metrics provide a means of measuring (audio)visual quality in an automated manner. Unfortunately, most existing metrics cannot be used for real-time monitoring because they depend on the original video sequence. In this paper we present a new objective video quality metric which classifies packet loss as visible or invisible based on information extracted solely from the captured encoded H.264/AVC video bit stream. Our results show that the visibility of packet loss can be predicted with high accuracy, without the need for deep packet inspection. This enables service providers to monitor quality in real time.
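The abstract above describes classifying a packet-loss event as visible or invisible using only bitstream-level information. A minimal sketch of that idea follows; the feature names, weights, and threshold are illustrative assumptions, not the paper's actual model:

```python
# Sketch: classify a packet-loss event as visible or invisible using only
# features parsed from the encoded H.264/AVC bitstream (no reference video).
# Features and thresholds here are hypothetical, for illustration only.

def loss_visibility(features: dict) -> str:
    """Classify one packet-loss event from bitstream-level features."""
    score = 0.0
    # Losses in intra-coded (I) frames tend to propagate until the next IDR.
    if features["frame_type"] == "I":
        score += 2.0
    # Strong motion makes error concealment less effective.
    score += 1.5 * features["mean_motion_vector_magnitude"]
    # More lost macroblocks means a larger corrupted region.
    score += 0.1 * features["num_lost_macroblocks"]
    return "visible" if score > 2.0 else "invisible"

print(loss_visibility({"frame_type": "B",
                       "mean_motion_vector_magnitude": 0.2,
                       "num_lost_macroblocks": 4}))   # low-impact loss
```

In practice such a score would be produced by a trained classifier over many bitstream features rather than hand-picked weights, but the decision structure is the same.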

    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience for end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be achieved to obtain satisfactory quality. Video service providers should therefore continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By only incorporating information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.

    No reference quality assessment for MPEG video delivery over IP


    Hybrid video quality prediction: reviewing video quality measurement for widening application scope

    A tremendous number of objective video quality measurement algorithms have been developed during the last two decades. Most of them either measure a very limited aspect of the perceived video quality or measure broad ranges of quality with limited prediction accuracy. This paper lists several perceptual artifacts that may be computationally measured in an isolated algorithm, and some of the modeling approaches that have been proposed to predict the resulting quality from those algorithms. These algorithms usually have a very limited application scope but have been verified carefully. The paper continues with a review of some standardized and well-known video quality measurement algorithms that are meant for a wide range of applications and thus have a larger scope. Their individual artifact prediction accuracy is usually lower, but some of them were validated to perform sufficiently well for standardization. Several difficulties and shortcomings in developing a general-purpose model with high prediction performance are identified, such as the lack of a common objective quality scale and the behavior of individual indicators when confronted with stimuli that are outside their prediction scope. The paper concludes with a systematic framework approach to tackle the development of a hybrid video quality measurement in a joint research collaboration. Funding: Polish National Centre for Research and Development (NCRD) SP/I/1/77065/10; Swedish Governmental Agency for Innovation Systems (Vinnova).
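The hybrid approach reviewed above combines several narrow, well-verified artifact indicators into one score on a common quality scale. A minimal sketch of that combination step, with indicator names, weights, and the 1–5 scale assumed purely for illustration:

```python
# Sketch of a hybrid quality model: narrow artifact indicators (each in
# [0, 1], higher = stronger artifact) are combined with weights and mapped
# onto a common 1-5 quality scale. All names and numbers are assumptions.

def hybrid_quality(indicators: dict, weights: dict) -> float:
    """Map a weighted sum of artifact strengths to a 1-5 quality score."""
    impairment = sum(weights[k] * indicators[k] for k in weights)
    # Clamp total impairment to [0, 1], then invert onto the quality scale.
    return max(1.0, 5.0 - 4.0 * min(impairment, 1.0))

indicators = {"blockiness": 0.3, "blurriness": 0.1, "jerkiness": 0.2}
weights    = {"blockiness": 0.5, "blurriness": 0.3, "jerkiness": 0.2}
print(hybrid_quality(indicators, weights))
```

Real hybrid models replace the linear combination with trained regressors, but the mapping-to-a-common-scale problem the paper identifies is visible even in this toy form.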

    No-reference video quality assessment model based on artifact metrics for digital transmission applications

    Doctoral thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2017. The main causes of reduced visual quality in digital imaging systems are the unwanted degradations introduced during the processing and transmission steps. However, measuring the quality of a video implies a direct or indirect comparison between a test video and a reference video. In most applications, psychophysical experiments with human subjects are the most reliable means of determining the quality of a video. Although more reliable, these methods are time-consuming and difficult to incorporate into an automated quality control service. As an alternative, objective metrics, i.e. algorithms, are generally used to estimate video quality automatically. To develop an objective metric, it is important to understand how the perceptual characteristics of a set of artifacts are related to their physical strengths and to the perceived annoyance. To study the characteristics of different types of artifacts commonly found in compressed videos (i.e. blockiness, blurriness, and packet loss), we performed six psychophysical experiments to independently measure the strength and overall annoyance of these artifact signals when presented alone or in combination. We analyzed the data from these experiments and proposed several models for the overall annoyance based on combinations of the perceptual strengths of the individual artifact signals and their interactions. Inspired by the experimental results, we proposed a no-reference video quality metric based on several features extracted from the videos (e.g. DCT information, cross-correlation of sub-sampled images, average absolute differences between block image pixels, intensity variation between neighbouring pixels, and visual attention). A non-linear Support Vector Regression (SVR) model is used to combine all features into an overall quality estimate. Our metric performed better than the tested artifact metrics and some full-reference metrics.
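The final step described above, combining extracted features with Support Vector Regression into a quality estimate, can be sketched as follows; the feature vectors and training targets are synthetic stand-ins, not the thesis's data:

```python
# Sketch: combine per-video artifact features into one quality score with
# Support Vector Regression. Feature values and MOS-like targets below are
# synthetic examples for illustration only.
import numpy as np
from sklearn.svm import SVR

# Each row: [blockiness, blurriness, packet_loss_strength] (illustrative).
X_train = np.array([[0.1, 0.2, 0.0],
                    [0.8, 0.1, 0.3],
                    [0.4, 0.7, 0.5],
                    [0.0, 0.0, 0.9]])
y_train = np.array([4.5, 2.8, 2.1, 1.5])   # subjective MOS-like targets

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X_train, y_train)

quality = model.predict([[0.2, 0.3, 0.1]])[0]
print(f"predicted quality: {quality:.2f}")
```

The RBF kernel lets the regressor capture the non-linear interactions between artifact strengths that the psychophysical experiments revealed; a real metric would be trained on hundreds of subjectively rated videos.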

    Study of saliency in objective video quality assessment

    Reliably predicting video quality as perceived by humans remains challenging and is of high practical relevance. A significant research trend is to investigate visual saliency and its implications for video quality assessment. Fundamental problems regarding how to acquire reliable eye-tracking data for the purpose of video quality research and how saliency should be incorporated in objective video quality metrics (VQMs) are largely unsolved. In this paper, we propose a refined methodology for reliably collecting eye-tracking data, which essentially eliminates the bias induced by each subject having to view multiple variations of the same scene in a conventional experiment. We performed a large-scale eye-tracking experiment that involved 160 human observers and 160 video stimuli distorted with different distortion types at various degradation levels. The measured saliency was integrated into several of the best-known VQMs in the literature. With the assurance of the reliability of the saliency data, we thoroughly assessed the capability of saliency to improve the performance of VQMs, and devised a novel approach for the optimal use of saliency in VQMs. We also evaluated to what extent state-of-the-art computational saliency models can improve VQMs in comparison to the improvement achieved by using "ground truth" eye-tracking data. The eye-tracking database is made publicly available to the research community.
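A common way to integrate saliency into a VQM, as studied above, is to weight the per-pixel distortion map by a normalised saliency map before spatial pooling. A toy sketch with stand-in arrays:

```python
# Sketch of saliency-weighted quality pooling: per-pixel squared error is
# weighted by a normalised saliency map before spatial pooling. The arrays
# below are toy stand-ins for real frames and eye-tracking saliency maps.
import numpy as np

def saliency_weighted_mse(reference, distorted, saliency):
    """Pool squared error with saliency weights (weights sum to 1)."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    w = saliency / saliency.sum()
    return float((w * err).sum())

ref = np.zeros((4, 4))
dst = ref.copy()
dst[0, 0] = 10.0                 # a single error in a salient region
sal = np.ones((4, 4))
sal[0, 0] = 16.0                 # eye-tracking says viewers look here

plain_mse = ((ref - dst) ** 2).mean()
weighted = saliency_weighted_mse(ref, dst, sal)
print(plain_mse, weighted)       # the weighted error is larger
```

Because the error falls where viewers look, the saliency-weighted score penalises it more than uniform pooling does, which is exactly the effect the paper measures across full VQMs.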

    Error concealment-aware encoding for robust video transmission

    In this paper an error concealment-aware encoding scheme is proposed to improve the quality of decoded video in broadcast environments prone to transmission errors and data loss. The proposed scheme is based on a scalable coding approach in which the best error concealment (EC) methods to be used at the decoder are optimally determined at the encoder and signalled to the decoder through SEI messages. These optimal EC modes are found by simulating transmission losses, followed by a Lagrangian optimisation of the combined signalling-rate and EC-distortion cost. A generalised saliency-weighted distortion is used, and the residue between coded frames and their EC substitutes is encoded using a rate-controlled enhancement layer. In case of data loss, the decoder uses the signalling information to improve the reconstruction quality. The simulation results show that the proposed method achieves consistent quality gains in comparison with other reference methods and previous works. Using only the EC mode signalling, i.e., without any residue transmitted in the enhancement layer, an average PSNR gain of up to 2.95 dB is achieved, while the full EC-aware scheme, i.e., including residue encoded in the enhancement layer, outperforms other comparable methods with PSNR gains of up to 3.79 dB.
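The encoder-side decision described above is a standard Lagrangian trade-off: pick the EC mode minimising J = D + λR, where D is the simulated concealment distortion and R the signalling rate. A minimal sketch, with mode names and numbers chosen purely for illustration:

```python
# Sketch: encoder-side error-concealment (EC) mode selection by Lagrangian
# cost J = D + lambda * R, where D is the simulated concealment distortion
# and R the signalling bits. Modes and values below are illustrative.

def select_ec_mode(candidates, lam):
    """candidates: list of (mode_name, distortion, signalling_bits)."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

modes = [("frame_copy",     100.0, 1),   # cheap to signal, high distortion
         ("motion_copy",     60.0, 4),
         ("spatial_interp",  55.0, 8)]   # best quality, most bits

print(select_ec_mode(modes, lam=0.5))    # low lambda favours low distortion
print(select_ec_mode(modes, lam=20.0))   # high lambda favours cheap signalling
```

Sweeping λ traces the rate-distortion curve of the signalling overhead, which is how the encoder balances EC quality against the bits spent in SEI messages.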

    Optimized Visual Internet of Things in Video Processing for Video Streaming

    The global expansion of the Visual Internet of Things (VIoT) has enabled various new applications over the last decade through the interconnection of a wide range of devices and sensors. Frame freezing and buffering are the major artefacts in the broad area of multimedia networking applications, occurring due to significant packet loss and network congestion. Numerous studies have been carried out to understand the impact of packet loss on QoE for a wide range of applications. This paper improves video streaming quality by using the proposed Lossy Video Transmission (LVT) framework to simulate the effect of network congestion on the performance of encrypted static images sent over wireless sensor networks. The simulations are intended for analysing video quality and determining packet-drop resilience during video conversations. Emerging trends in quality measurement, including picture preference, visual attention, and audiovisual quality, are also assessed. To appropriately quantify the video quality loss caused by the encoding system, various encoders compress video sequences at various data rates. Simulation results for different QoE metrics on user-generated videos are demonstrated to outperform the existing metrics.

    Video Packet Priority Assignment Based On Spatio-Temporal Perceptual Importance

    A novel perceptually motivated two-stage algorithm for assigning priority to video packet data to be transmitted over the Internet is proposed. Priority assignment is based on temporal and spatial features that are derived from low-level vision concepts. The motivation for a two-stage design is the ability to handle different application settings. The first stage of the algorithm is computationally very efficient and can be directly used in low-delay applications with limited computational resources. The two-stage method performs exceedingly well across a variety of content and can be used in less restrictive operating settings. The efficacy of the proposed algorithm (both stages) is demonstrated using an intelligent packet-drop application, where it is compared with cumulative mean squared error (cMSE) based priority assignment and random packet dropping. The proposed prioritization algorithm allows for packet drops that result in significantly lower perceptual annoyance at the receiver relative to the other methods considered.
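The intelligent packet-drop application used for evaluation above can be sketched simply: when the channel budget is exceeded, the packets with the lowest perceptual priority are dropped first. Packet IDs and priority scores here are illustrative stand-ins for what the two-stage algorithm would assign:

```python
# Sketch: priority-driven packet dropping. Keep the `budget` most important
# packets (by perceptual priority score) and drop the rest, preserving
# stream order. Priorities below are hypothetical example scores.

def drop_packets(packets, budget):
    """packets: list of (packet_id, priority).
    Returns (kept_ids, dropped_ids) in original stream order."""
    keep = sorted(packets, key=lambda p: p[1], reverse=True)[:budget]
    keep_ids = {pid for pid, _ in keep}
    kept = [pid for pid, _ in packets if pid in keep_ids]
    dropped = [pid for pid, _ in packets if pid not in keep_ids]
    return kept, dropped

stream = [(1, 0.9), (2, 0.2), (3, 0.7), (4, 0.1)]
kept, dropped = drop_packets(stream, budget=2)
print(kept, dropped)    # the two least important packets are dropped
```

The paper's contribution is in computing the priority scores perceptually; the drop policy itself, as here, is a straightforward selection of the most important packets under a budget.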