Quality delivery of mobile video: In-depth understanding of user requirements
The proliferation of powerful mobile devices has accelerated the demand for mobile video. Previous studies of mobile video have focused on understanding mobile video usage, improving video quality, and designing user interfaces for video browsing. However, research that provides a deep understanding of users' needs for pleasing quality delivery of mobile video is lacking. In particular, which quality-delivery modes users prefer and what quality-related information they need require attention. This paper presents a qualitative interview study with 38 participants to gain insight into three aspects: the factors influencing user-desired video quality, user-preferred quality-delivery modes, and the interaction information users require for mobile video. The results show that user requirements for video quality are related to personal preference, technology background and video viewing experience, and that the preferred quality-delivery and interaction modes are diverse. These complex user requirements call for flexible and personalised quality delivery and interaction for mobile video.
Q-AIMD: A Congestion Aware Video Quality Control Mechanism
With the constant increase in multimedia traffic, it has become necessary for transport protocols to be aware of the video quality of the transmitted flows rather than only their throughput. This paper proposes a novel transport mechanism adapted to video flows. Our proposal, called Q-AIMD for video quality AIMD (Additive Increase Multiplicative Decrease), enables fairness in video quality while transmitting multiple video flows. Targeting video-quality fairness improves the overall video quality across all transmitted flows, especially when the transmitted videos contain different types of content at different spatial resolutions. In addition, Q-AIMD mitigates the occurrence of network congestion events and dissolves congestion whenever it occurs by decreasing the video quality and hence the bitrate. Q-AIMD is evaluated with several video quality metrics, video contents and spatial resolutions. Simulation results show that, compared with throughput-based congestion control, Q-AIMD improves the overall video quality across the multiple transmitted video flows by significantly decreasing the quality discrepancy between them.
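A minimal sketch of the core idea, assuming a sender keeps a quality set-point and adjusts it with AIMD semantics; the class, constants and the quality-to-bitrate mapping are illustrative placeholders, not the Q-AIMD mechanism described above:

# Illustrative sketch (not the paper's implementation): AIMD applied to a
# video quality set-point instead of a raw send rate. Names and constants
# are assumptions for illustration only.
class QualityAIMD:
    def __init__(self, q_min=0.0, q_max=100.0, alpha=1.0, beta=0.5):
        self.q = q_min          # current quality set-point (a MOS/VQM-like score)
        self.q_min, self.q_max = q_min, q_max
        self.alpha = alpha      # additive increase step, in quality units
        self.beta = beta        # multiplicative decrease factor on congestion

    def on_ack(self):
        # No congestion observed: raise the target quality additively.
        self.q = min(self.q_max, self.q + self.alpha)
        return self.q

    def on_congestion(self):
        # Congestion detected (e.g. packet loss): back off multiplicatively,
        # which lowers the encoded bitrate through the quality/rate mapping.
        self.q = max(self.q_min, self.q * self.beta)
        return self.q

# Usage: the sender maps the quality set-point to an encoding bitrate via a
# per-content rate-quality model (assumed to be available here).
controller = QualityAIMD()
for event in ["ack", "ack", "ack", "loss", "ack"]:
    q = controller.on_congestion() if event == "loss" else controller.on_ack()
    print(f"{event}: target quality = {q:.1f}")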
Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression
In order to ensure optimal quality of experience for end users during video streaming, automatic video quality assessment has become an important field of interest for video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. Traditional approaches model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it yields reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
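To illustrate what a white-box model of this kind can look like, the following sketch evaluates a hypothetical closed-form expression over bitstream-level features; the features, coefficients and functional form are assumptions made for illustration, not the model constructed in the paper:

import math

# Hypothetical white-box model of the kind symbolic regression can produce:
# a closed-form expression over bitstream-level features.
def predicted_mos(avg_qp, lost_slice_ratio, motion_activity):
    # Higher QP and more lost slices lower the score; the logarithm reflects
    # the typical saturating effect of impairments on perceived quality.
    score = (4.8
             - 0.06 * avg_qp
             - 1.5 * math.log1p(lost_slice_ratio)
             - 0.3 * motion_activity * lost_slice_ratio)
    return max(1.0, min(5.0, score))   # clamp to the 1..5 MOS scale

print(predicted_mos(avg_qp=30, lost_slice_ratio=0.5, motion_activity=0.7))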
Dynamic optimization of the quality of experience during mobile video watching
Mobile video consumption through streaming is becoming increasingly popular. The video parameters for optimal quality are often determined automatically from device and network conditions. Current mobile video services typically decide on these parameters before starting the video stream and stick to them during playback. However, in a mobile environment, conditions may change significantly while the video is playing. Therefore, this paper proposes a dynamic optimization of the quality that takes into account real-time data on the network, the device and user movement during playback. The optimization method can change the video quality level during playback when changing conditions require it. Through a user test, the dynamic optimization is compared with a traditional, static quality optimization method. The results show that our optimization can improve the perceived playback and video quality, especially under varying network conditions.
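A rough sketch of such a dynamic decision step, assuming a simple representation ladder and heuristic use of throughput, battery and movement measurements; all names, thresholds and margins are hypothetical and do not reproduce the paper's optimization method:

# (height, kbps) pairs of an assumed representation ladder.
QUALITY_LADDER = [(240, 400), (360, 800), (480, 1500), (720, 3000)]

def pick_level(measured_kbps, battery_pct, on_the_move):
    # Keep a safety margin below the measured throughput; be more conservative
    # while the user is moving or the battery is low (assumed heuristics).
    margin = 0.6 if on_the_move else 0.8
    budget = measured_kbps * margin
    if battery_pct < 15:
        budget = min(budget, 800)
    candidates = [lvl for lvl in QUALITY_LADDER if lvl[1] <= budget]
    return candidates[-1] if candidates else QUALITY_LADDER[0]

# During playback, the player would call pick_level() on every measurement
# update and switch representations only when the chosen level changes.
print(pick_level(measured_kbps=2200, battery_pct=60, on_the_move=True))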
High definition H.264/AVC subjective video database for evaluating the influence of slice losses on quality perception
Prior to the construction or validation of objective video quality metrics, ground-truth data must be collected by means of a subjective video database. Such a database consists of (impaired) video sequences and corresponding subjective quality ratings. However, creating a subjective database is a time-consuming and expensive task. There is an ongoing effort to publish such subjective video databases in the public domain, which facilitates the development of new objective quality metrics. In this paper, we present a new subjective video database consisting of impaired High Definition H.264/AVC encoded video sequences and associated quality ratings gathered from a subjective experiment. This database can be used freely to determine impairment visibility or to estimate the overall quality of a video when slices are lost due to network impairments.
Xstream-x264: Real-time H.264 streaming with cross-layer integration
We present Xstream-x264: a real-time cross-layer video streaming technique implemented within the well-known open-source H.264 video encoder x264. Xstream-x264 uses the indication of the available data rate provided by the transport protocol to make corresponding adjustments in the video encoder. We discuss the design, the implementation and the quality evaluation methodology used with our tool. We demonstrate through experimental results that the streaming video quality improves greatly with the presented cross-layer approach, both in terms of lost frame count and the objective video quality metric Peak Signal-to-Noise Ratio (PSNR).
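The cross-layer feedback loop can be sketched as follows, with a placeholder encoder object standing in for a real x264 binding; this is an illustration of the idea, not the Xstream-x264 code:

# Illustrative cross-layer loop: the transport layer reports its current
# available rate, and the encoder's target bitrate is updated before the
# next group of frames is encoded. Encoder is a stand-in, not an x264 API.
class Encoder:
    def __init__(self, bitrate_kbps):
        self.bitrate_kbps = bitrate_kbps

    def reconfigure(self, bitrate_kbps):
        # A real implementation would update the rate-control target in place,
        # without forcing a stream restart.
        self.bitrate_kbps = bitrate_kbps

def on_transport_rate_update(encoder, available_kbps, headroom=0.9):
    # Leave headroom below the reported rate to absorb short-term variation.
    encoder.reconfigure(int(available_kbps * headroom))

enc = Encoder(bitrate_kbps=2000)
on_transport_rate_update(enc, available_kbps=1200)
print(enc.bitrate_kbps)  # 1080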
A reduced-reference perceptual image and video quality metric based on edge preservation
In image and video compression and transmission, it is important to rely on an objective image/video quality metric that accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback to the system controller. The original image/video sequence, prior to compression and transmission, is usually not available at the receiver side, so the receiver must rely on an objective video quality metric that needs no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information underpins our proposed reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric. © 2012 Martini et al.
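The reduced-reference principle can be sketched as follows, assuming only a compact block-wise edge descriptor of the original frame is available at the receiver; the descriptor layout and scoring function are illustrative choices, not the exact metric proposed in the paper:

import numpy as np
from scipy import ndimage

def edge_magnitude(frame):
    # Sobel gradient magnitude of a single luma frame (2D array).
    gx = ndimage.sobel(frame.astype(float), axis=1)
    gy = ndimage.sobel(frame.astype(float), axis=0)
    return np.hypot(gx, gy)

def edge_descriptor(frame, blocks=8):
    # Reduced-reference side information: mean edge strength per block,
    # small enough to be sent alongside the compressed video.
    mag = edge_magnitude(frame)
    h, w = mag.shape
    bh, bw = h // blocks, w // blocks
    return np.array([[mag[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                      for j in range(blocks)] for i in range(blocks)])

def rr_edge_score(ref_descriptor, received_frame):
    # Similarity between the transmitted descriptor and the received frame's
    # descriptor; higher values indicate better edge preservation.
    d = edge_descriptor(received_frame)
    return 1.0 / (1.0 + np.abs(ref_descriptor - d).mean())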
Video Tester -- A multiple-metric framework for video quality assessment over IP networks
This paper presents an extensible and reusable framework that addresses the problem of video quality assessment over IP networks. The proposed tool (referred to as Video-Tester) supports raw uncompressed video encoding and decoding. It also includes different video-over-IP transmission methods (RTP over UDP unicast and multicast, as well as RTP over TCP). In addition, it is furnished with a rich set of offline analysis capabilities. Video-Tester analysis includes estimation of QoS and bitstream parameters (bandwidth, packet inter-arrival time, jitter and loss rate, as well as GOP size and I-frame loss rate). Our design facilitates the integration of virtually any existing video quality metric thanks to the adopted Python-based modular approach. Video-Tester currently provides PSNR, SSIM, the ITU-T G.1070 video quality metric, DIV and PSNR-based MOS estimations. In order to promote its use and extension, Video-Tester is open and publicly available.
Comment: 5 pages, 5 figures. For the Google Code project, see http://video-tester.googlecode.com
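As an illustration of how a Python-based modular design can make quality metrics pluggable, the following sketch registers a per-frame metric through a decorator; it is a generic pattern, not Video-Tester's actual API:

import numpy as np

# Registry of per-frame metrics; new metrics plug in via the decorator.
METRICS = {}

def register_metric(name):
    def wrap(fn):
        METRICS[name] = fn
        return fn
    return wrap

@register_metric("psnr")
def psnr(ref, deg):
    # PSNR between two 8-bit frames given as 2D arrays.
    mse = np.mean((ref.astype(float) - deg.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def evaluate(ref_frames, deg_frames, names):
    # Run each requested metric over aligned reference/degraded frame pairs.
    return {n: [METRICS[n](r, d) for r, d in zip(ref_frames, deg_frames)]
            for n in names}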
