53 research outputs found
Open collaboration on hybrid video quality models - VQEG joint effort group hybrid
Several factors limit progress in automating video quality measurement. Modelling the human visual system requires multi- and interdisciplinary effort. A joint effort may bridge the large gap between the knowledge required to conduct a psychophysical experiment on isolated visual stimuli and that required to engineer a universal model for video quality estimation under real-time constraints. Verification and validation require input ranging from professional content production to innovative machine learning algorithms. Our paper highlights the complex interactions, the multitude of open questions, and the industrial requirements that led to the creation of the Joint Effort Group in the Video Quality Experts Group. The paper then zooms in on the group's first activity: the creation of a hybrid video quality model.
Freely Available Large-scale Video Quality Assessment Database in Full-HD Resolution with H.264 Coding
Video databases often focus on a particular use case with a limited set of sequences. In this paper, a different type of database creation is proposed: an exhaustive number of test conditions is continuously created and made freely available for objective and subjective evaluation. At the moment, the database comprises more than ten thousand JM/x264-encoded video sequences. An extensive study of the possible encoding parameter space led to a first subset selection of 1296 configurations. So far, only ten source sequences have been used, but an extension to more than one hundred sequences is planned. Several Full-Reference (FR) and No-Reference (NR) metrics were selected and calculated. The resulting data will be freely available to the research community, and possible areas of exploitation are suggested.
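The abstract mentions an extensive sweep of the encoding parameter space that yielded a 1296-configuration subset. As a minimal sketch of how such a grid can be enumerated, the parameter axes and values below are invented for illustration; the paper's actual configuration set is not specified here:

```python
# Hypothetical sketch: enumerating an encoder-configuration grid.
# Axes and values are illustrative, not the paper's actual 1296 configurations.
from itertools import product

grid = {
    "qp":         [22, 27, 32, 37],  # quantisation parameters
    "gop_len":    [8, 16, 32],       # keyframe interval
    "b_frames":   [0, 2, 4],         # consecutive B-frames
    "ref_frames": [1, 2, 4],         # number of reference frames
}

# Cartesian product of all axes, one dict per encoder configuration.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 4 * 3 * 3 * 3 = 108 configurations
```

Each resulting dict can then be translated into encoder command-line options; subset selection (as in the paper) would prune this full product down to the perceptually interesting configurations.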
Systematic Analysis of Experiment Precision Measures and Methods for Experiments Comparison
The notion of experiment precision quantifies the variance of user ratings in
a subjective experiment. Although there exist measures that assess subjective
experiment precision, there are no systematic analyses of these measures
available in the literature. To the best of our knowledge, there is also no
systematic framework in the Multimedia Quality Assessment field for comparing
subjective experiments in terms of their precision. Therefore, the main idea of
this paper is to propose a framework for comparing subjective experiments in
the field of MQA based on appropriate experiment precision measures. We present
three experiment precision measures and three related experiment precision
comparison methods. We systematically analyse the performance of the measures
and methods proposed. We do so both through a simulation study (varying user
rating variance and bias) and by using data from four real-world Quality of
Experience (QoE) subjective experiments. In the simulation study we focus on
crowdsourcing QoE experiments, since they are known to generate ratings with
higher variance and bias, when compared to traditional subjective experiment
methodologies. We conclude that our proposed measures and related comparison
methods properly capture experiment precision (both when tested on simulated
and real-world data). One of the measures also proves capable of dealing with
even significantly biased responses. We believe our experiment precision
assessment framework will help compare different subjective experiment
methodologies. For example, it may help decide which methodology results in
more precise user ratings. This may potentially inform future standardisation
activities.
Comment: 18 pages, 9 figures. Under review in IEEE Transactions on Multimedia. More results and references added. Improved style. Discussion section and appendices extended.
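The abstract defines experiment precision as the variance of user ratings in a subjective experiment. The paper's three actual measures are not given here, but a minimal sketch of one simple, hypothetical precision measure (the average per-stimulus standard deviation of opinion scores) could look like:

```python
# Hypothetical sketch of an experiment precision measure (not one of the
# paper's three measures): average per-stimulus sample standard deviation.
from statistics import stdev

def mean_stimulus_sd(ratings):
    """Average sample standard deviation of user ratings per stimulus.

    `ratings` is a list of per-stimulus rating lists (e.g. 1-5 ACR scores,
    one entry per user).  Lower values indicate a more precise experiment.
    """
    return sum(stdev(r) for r in ratings) / len(ratings)

# Two hypothetical experiments rating the same three stimuli:
lab = [[4, 4, 5], [2, 2, 3], [3, 3, 3]]    # low rating spread
crowd = [[5, 3, 1], [1, 4, 2], [2, 5, 3]]  # high rating spread
assert mean_stimulus_sd(lab) < mean_stimulus_sd(crowd)
```

This toy measure captures the crowdsourcing-vs-lab contrast the abstract describes (higher rating variance in crowdsourced experiments), but unlike the paper's bias-robust measure it makes no attempt to separate rater bias from rater inconsistency.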
Standardized toolchain and model development for video quality assessment: the mission of the joint effort group in VQEG
Since 1997, the Video Quality Experts Group (VQEG) has been active in the field of subjective and objective video quality assessment. The group has validated competing quality metrics across several projects. Each of these projects requires mandatory actions such as creating a test plan and obtaining databases of degraded video sequences with corresponding subjective quality ratings. Recently, VQEG started a new open initiative, the Joint Effort Group (JEG), to encourage joint collaboration on all mandatory actions needed to validate video quality metrics. Within the JEG, effort is made to advance both subjective and objective video quality measurement by providing proper software tools and subjective databases to the community. One of the JEG's subprojects is the joint development of a hybrid H.264/AVC objective quality metric. In this paper, we introduce the JEG and provide an overview of the ongoing activities within this newly started group.
No-reference bitstream-based impairment detection for high efficiency video coding
Video distribution over error-prone Internet Protocol (IP) networks results in visual impairments in the received video streams. Objective impairment detection algorithms are crucial for maintaining the high Quality of Experience (QoE) expected of IPTV distribution. Considerable research has been invested in H.264/AVC impairment detection models, and the question arises whether these become obsolete with the transition to the successor of H.264/AVC, High Efficiency Video Coding (HEVC). In this paper, we first show that impairments in HEVC-compressed sequences are more visible than in H.264/AVC-encoded sequences. We also show that an impairment detection model designed for H.264/AVC can be reused for HEVC, but that caution is advised. A more accurate model that takes content classification into account needed only slight modification to remain applicable to HEVC-compressed video content.
Modeling of Quality of Experience in No-Reference Model, Journal of Telecommunications and Information Technology, 2017, nr 2
The key objective of no-reference (NR) visual metrics is to predict the end-user experience of remotely delivered video content. The rapidly increasing demand for easily accessible, high-quality video material makes it crucial for service providers to test the user experience without needing a reference for comparison. Nevertheless, the QoE measurement alone is not enough; information about the source of an error is just as important. Therefore, the described system calculates numerous different NR indicators, which are combined to provide an overall quality score. In this paper, more quality indicators are described than are used in the QoE calculation, since some of them detect specific errors. Such specific errors are difficult to include in a global QoE model but are important from an operational point of view.
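The described system combines many NR indicators into one overall quality score. A minimal sketch of such a fusion, using invented indicator names and a simple weighted sum (the paper's actual indicators and fusion method are not specified here), could look like:

```python
# Hypothetical sketch: fusing NR indicators into a MOS-like score.
# Indicator names and weights are invented for illustration; each
# indicator is assumed normalised to [0, 1], where 0 = "no degradation".
WEIGHTS = {"blockiness": 0.4, "blur": 0.35, "temporal_flicker": 0.25}

def overall_quality(indicators, weights=WEIGHTS, scale=(1.0, 5.0)):
    """Map weighted degradation evidence onto a 1-5 MOS-like scale."""
    lo, hi = scale
    degradation = sum(weights[k] * indicators[k] for k in weights)
    return hi - (hi - lo) * degradation  # 5 = pristine, 1 = worst

clean = {"blockiness": 0.0, "blur": 0.05, "temporal_flicker": 0.0}
impaired = {"blockiness": 0.8, "blur": 0.6, "temporal_flicker": 0.5}
assert overall_quality(clean) > overall_quality(impaired)
```

The per-indicator values also serve the diagnostic purpose the abstract emphasises: besides the fused score, each indicator points at a specific error source, which matters from an operational point of view even when it carries no weight in the global QoE model.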