    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience towards end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be achieved for obtaining satisfactory quality. Therefore, video service providers should continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By only incorporating information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
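    A minimal sketch of the idea (not the detector proposed in the article): classify a packet-loss impairment as visible or invisible from a few bitstream-level features. The feature names, the toy training data and the choice of a decision-tree classifier are illustrative assumptions.

```python
# Minimal sketch, not the article's model: classify packet-loss impairments
# as visible or invisible to the end-user using only bitstream-level features.
# Feature names, toy data and the decision-tree choice are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-impairment features extracted from the encoded bitstream:
# [lost_macroblocks, in_I_frame, motion_activity, spatial_extent]
X_train = np.array([
    [120, 1, 0.8, 0.30],
    [  5, 0, 0.1, 0.01],
    [ 60, 0, 0.6, 0.15],
    [  2, 0, 0.0, 0.00],
])
y_train = np.array([1, 0, 1, 0])   # 1 = visible to the end-user, 0 = invisible

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# A new impairment observed on the monitored stream:
print(clf.predict([[90, 0, 0.7, 0.20]]))   # -> [1], i.e. likely visible
```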

    Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment becomes an important field of interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it calculates reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
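    As a toy illustration of how symbolic regression can yield a white-box model, the sketch below searches over small, human-readable formulas that map assumed bitstream parameters (QP, bitrate, slice loss) to a predicted MOS and keeps the best one. It is a heavily simplified stand-in for the genetic-programming setup of the paper; the parameter names and data are illustrative only.

```python
# Simplified stand-in for GP-based symbolic regression (not the paper's setup):
# randomly generate small readable formulas over bitstream parameters and keep
# the one that best fits the subjective scores.  All names and values are toy data.
import random
import numpy as np

# Hypothetical bitstream parameters per sequence: [qp, bitrate_mbps, lost_slices_pct]
X = np.array([[26, 8.0, 0.0], [32, 4.0, 0.5], [38, 2.0, 1.5], [30, 6.0, 0.2]])
mos = np.array([4.5, 3.6, 2.1, 4.0])            # subjective scores to fit

FEATURES = ["qp", "bitrate", "loss"]

def random_formula():
    """Build a tiny random expression as (description, callable)."""
    a, b = random.uniform(-1, 1), random.uniform(0, 5)
    i, j = random.randrange(3), random.randrange(3)
    desc = f"{b:.2f} + {a:.2f} * {FEATURES[i]} * {FEATURES[j]}"
    return desc, lambda x: b + a * x[:, i] * x[:, j]

best_desc, best_err = None, float("inf")
for _ in range(5000):                            # crude stand-in for GP evolution
    desc, f = random_formula()
    err = np.mean((f(X) - mos) ** 2)
    if err < best_err:
        best_desc, best_err = desc, err

# The result is a white-box model: a readable formula whose terms reveal which
# parameters matter, rather than a black-box predictor.
print(f"best formula: MOS ~ {best_desc}  (MSE = {best_err:.3f})")
```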

    Subjective quality measurements in video sequences

    The main goal of this project is to develop a perceptual quality metric for video sequences. Using the designed algorithm, an application will be implemented that automatically estimates the subjective quality of a video sequence from its corresponding reference. To this end, the visual attributes that are most relevant for determining video quality will be investigated by analyzing the different components of the Human Visual System. In addition, the performance of three widely used objective video quality metrics will be studied: the NTIA Video Quality Metric ("general" model), a modified version of Watson's Digital Video Quality (DVQ) metric, and the VSSIM metric. The experimental test carried out as a fundamental part of the project is also described. The purpose of the test, conducted according to the corresponding recommendations, is to obtain subjective ratings from a group of human observers. This information will be used not only to design the application but also for the subsequent evaluation and comparison of the different algorithms.
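    The three metrics studied are full-reference: they compare the processed sequence against its reference. A minimal sketch of that principle, using plain per-frame PSNR rather than the NTIA VQM, Watson's DVQ or VSSIM, is shown below; the toy frames are illustrative.

```python
# Minimal full-reference sketch (far simpler than NTIA VQM, Watson's DVQ or
# VSSIM): score a processed sequence by comparing each frame against its
# reference frame with plain PSNR, then average over the sequence.
import numpy as np

def psnr(ref: np.ndarray, proc: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of equal shape."""
    mse = np.mean((ref.astype(np.float64) - proc.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def sequence_psnr(ref_frames, proc_frames) -> float:
    """Average PSNR across a sequence; the reference sequence is required,
    which is what makes full-reference metrics hard to use for live monitoring."""
    return float(np.mean([psnr(r, p) for r, p in zip(ref_frames, proc_frames)]))

# Toy example: two random 8-bit frames and slightly noisy copies of them.
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(2)]
proc = [np.clip(f + rng.normal(0, 3, f.shape), 0, 255).astype(np.uint8) for f in ref]
print(f"mean PSNR: {sequence_psnr(ref, proc):.1f} dB")
```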

    Burst-by-Burst Adaptive Decision Feedback Equalised TCM, TTCM and BICM for H.263-Assisted Wireless Video Telephony

    Decision Feedback Equaliser (DFE) aided wideband Burst-by-Burst (BbB) Adaptive Trellis Coded Modulation (TCM), Turbo Trellis Coded Modulation (TTCM) and Bit-Interleaved Coded Modulation (BICM) assisted H.263-based video transceivers are proposed and characterised in performance terms when communicating over the COST 207 Typical Urban wideband fading channel. Specifically, four different modulation modes, namely 4QAM, 8PSK, 16QAM and 64QAM, are invoked and protected by the above-mentioned coded modulation schemes. The TTCM-assisted scheme was found to provide the best video performance, although at the cost of the highest complexity. A range of lower-complexity arrangements is also characterised. Finally, in order to confirm these findings in an important practical environment, we also investigated the adaptive TTCM scheme in the CDMA-based Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA) scenario; the good performance of the adaptive TTCM scheme recorded when communicating over the COST 207 channels was retained in the UTRA environment.
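    A minimal sketch of the burst-by-burst adaptive mode selection described above: choose the highest-throughput modulation mode whose estimated channel SNR clears a switching threshold. The threshold values are illustrative assumptions, not the ones used in this work.

```python
# Minimal sketch of burst-by-burst adaptive modulation mode selection.
# The switching thresholds below are illustrative assumptions only.
MODES = [            # (name, bits per symbol, activation SNR in dB)
    ("4QAM",  2,  8.0),
    ("8PSK",  3, 12.0),
    ("16QAM", 4, 16.0),
    ("64QAM", 6, 22.0),
]

def select_mode(estimated_snr_db: float) -> str:
    """Return the modulation mode to use for the next transmission burst."""
    chosen = MODES[0][0]                 # fall back to the most robust mode
    for name, _bits, threshold in MODES:
        if estimated_snr_db >= threshold:
            chosen = name
    return chosen

# Example: the mode adapts burst by burst as the wideband channel fades.
for snr in (6.5, 13.2, 18.7, 25.0):
    print(f"SNR {snr:4.1f} dB -> {select_mode(snr)}")
```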