17,420 research outputs found

    Video streaming


    Video Tester -- A multiple-metric framework for video quality assessment over IP networks

    This paper presents an extensible and reusable framework which addresses the problem of video quality assessment over IP networks. The proposed tool (referred to as Video-Tester) supports raw uncompressed video encoding and decoding. It also includes different video-over-IP transmission methods (i.e., RTP over UDP unicast and multicast, as well as RTP over TCP). In addition, it is furnished with a rich set of offline analysis capabilities. Video-Tester analysis includes QoS and bitstream parameter estimation (i.e., bandwidth, packet inter-arrival time, jitter and loss rate, as well as GOP size and I-frame loss rate). Our design facilitates the integration of virtually any existing video quality metric thanks to the adopted Python-based modular approach. Video-Tester currently provides PSNR, SSIM, the ITU-T G.1070 video quality metric, DIV and PSNR-based MOS estimations. In order to promote its use and extension, Video-Tester is open and publicly available. Comment: 5 pages, 5 figures. For the Google Code project, see http://video-tester.googlecode.com
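    The PSNR and PSNR-based MOS estimations mentioned in the abstract can be sketched as follows. This is a minimal illustration, not Video-Tester's actual code; the piecewise PSNR-to-MOS mapping is a commonly used assumption and the tool's exact mapping may differ.

    ```python
    import numpy as np

    def psnr(reference, received, max_value=255.0):
        """Peak signal-to-noise ratio between two frames of equal shape."""
        mse = np.mean((reference.astype(np.float64) - received.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(max_value ** 2 / mse)

    def psnr_to_mos(p):
        """Piecewise PSNR-to-MOS mapping (an illustrative assumption)."""
        if p > 37: return 5
        if p > 31: return 4
        if p > 25: return 3
        if p > 20: return 2
        return 1
    ```

    A full-reference run would apply this frame by frame over the decoded sent and received sequences and average the result.
    
    
    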

    A Matlab-Based Tool for Video Quality Evaluation without Reference

    This paper deals with the design of a Matlab-based tool for measuring video quality without the use of a reference sequence. The main goals are described and the tool and its features are shown. The paper begins with a description of the existing pixel-based no-reference quality metrics. Then, a novel algorithm for simple PSNR estimation of H.264/AVC coded videos is presented as an alternative. The algorithm was designed and tested using a publicly available database of H.264/AVC coded videos. Cross-validation was used to confirm the consistency of the results.

    No-reference bitstream-based impairment detection for high efficiency video coding

    Video distribution over error-prone Internet Protocol (IP) networks results in visual impairments in the received video streams. Objective impairment detection algorithms are crucial for maintaining the high Quality of Experience (QoE) expected of IPTV distribution. Considerable research has been invested in impairment detection models for H.264/AVC, and the question arises whether these become obsolete with the transition to the successor of H.264/AVC, called High Efficiency Video Coding (HEVC). In this paper, we first show that impairments in HEVC-compressed sequences are more visible than in H.264/AVC-encoded sequences. We also show that an impairment detection model designed for H.264/AVC can be reused for HEVC, but that caution is advised: a more accurate model taking content classification into account needed slight modification to remain applicable to HEVC-compressed video content.

    Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    In order to ensure optimal quality of experience for end users during video streaming, automatic video quality assessment becomes an important field of interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it produces interpretable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
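    The core idea of symbolic regression, fitting an expression tree over bitstream features to subjective scores, can be sketched in a toy form. This sketch uses plain random search over small expression trees rather than the paper's genetic programming (no crossover or mutation), and the feature names are illustrative assumptions.

    ```python
    import random
    import operator

    OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

    def random_expr(features, depth=2, rng=random):
        """Build a random expression tree: leaves are feature names or
        constants, internal nodes are (op, left, right) tuples."""
        if depth == 0 or rng.random() < 0.3:
            if rng.random() < 0.7:
                return rng.choice(features)      # a bitstream feature, e.g. "qp"
            return round(rng.uniform(-2, 2), 2)  # a numeric constant
        op = rng.choice(list(OPS))
        return (op, random_expr(features, depth - 1, rng), random_expr(features, depth - 1, rng))

    def evaluate(expr, sample):
        """Evaluate an expression tree on one feature dict."""
        if isinstance(expr, tuple):
            op, left, right = expr
            return OPS[op](evaluate(left, sample), evaluate(right, sample))
        return sample[expr] if isinstance(expr, str) else expr

    def mse(expr, data):
        """Mean squared error of the expression against (features, score) pairs."""
        return sum((evaluate(expr, x) - y) ** 2 for x, y in data) / len(data)

    def search(data, features, iters=2000, seed=0):
        """Keep the lowest-error expression found by random sampling."""
        rng = random.Random(seed)
        best = random_expr(features, rng=rng)
        best_err = mse(best, data)
        for _ in range(iters):
            cand = random_expr(features, rng=rng)
            err = mse(cand, data)
            if err < best_err:
                best, best_err = cand, err
        return best, best_err
    ```

    Because the result is an explicit expression tree, one can read off which features it uses, which is the "white-box" property the abstract highlights.
    
    
    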

    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience towards end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be achieved for obtaining satisfactory quality. Therefore, video service providers should continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By only incorporating information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
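    A visible/invisible decision from bitstream-only information can be sketched as a simple rule-based classifier. The feature names and thresholds below are illustrative assumptions, not the article's trained model, which the text does not specify.

    ```python
    def classify_impairment(features, thresholds=None):
        """Classify a packet-loss impairment as 'visible' or 'invisible'
        from bitstream-level features (hypothetical feature set)."""
        t = thresholds or {"lost_slices": 2, "affected_frames": 4, "motion": 0.5}
        score = 0
        score += features.get("lost_slices", 0) >= t["lost_slices"]       # spatial extent of the loss
        score += features.get("affected_frames", 0) >= t["affected_frames"]  # temporal error propagation
        score += features.get("motion", 0.0) >= t["motion"]               # high motion makes losses more visible
        return "visible" if score >= 2 else "invisible"
    ```

    A real detector would learn such thresholds from subjective visibility annotations instead of fixing them by hand, but the structure (bitstream features in, binary visibility out) matches the abstract's description.
    
    
    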

    Comparing objective visual quality impairment detection in 2D and 3D video sequences

    The skill level of the teleoperator plays a key role in telerobotic operation. However, many experiments are required to evaluate skill level in a conventional assessment. In this paper, a novel brain-based method of skill assessment is introduced, and the relationship between the teleoperator's brain states and skill level is investigated for the first time based on a kernel canonical correlation analysis (KCCA) method. The skill of the teleoperator (SoT) is defined by a statistical method using the cumulative distribution function (CDF). Five indicators are extracted from the teleoperator's electroencephalograph (EEG) to represent the brain states during telerobotic operation. By using the KCCA algorithm to model the relationship between the SoT and the brain states, the correlation has been demonstrated. During telerobotic operation, the teleoperator's skill level can be well predicted from the brain states. © 2013 IEEE.