9,229 research outputs found
Quantifying subjective quality evaluations for mobile video watching in a semi-living lab context
This paper discusses results from an exploratory study in which Quality of Experience aspects related to mobile video watching were investigated in a semi-living lab setting. More specifically, we zoom in on usage patterns in a natural research context and on the subjective evaluation of high- and low-resolution movie trailers that are transferred to a mobile device using two transmission protocols for video (i.e., real-time transport protocol and progressive download over HTTP). User feedback was collected by means of short questionnaires on the mobile device, combined with traditional pen-and-paper diaries. The subjective evaluations regarding the general technical quality, perceived distortion, fluency of the video, and loading speed are studied, and the influence of the transmission protocol and video resolution on these evaluations is analyzed. Multinomial logistic regression yields a model for estimating the subjective evaluations of perceived distortion and loading speed from objectively measured parameters of the video session.
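The modelling step described above can be sketched as follows: a multinomial logistic regression that maps objectively measured session parameters to a discrete subjective rating. The feature names and the synthetic data below are illustrative stand-ins, not the study's actual variables or measurements.

```python
# Sketch: multinomial logistic regression from objective video-session
# parameters to a 3-level subjective rating. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
# Hypothetical objective parameters per session:
# mean bitrate (kbps), packet-loss rate, initial loading delay (s).
X = np.column_stack([
    rng.uniform(200, 1500, n),   # bitrate
    rng.uniform(0.0, 0.05, n),   # packet loss
    rng.uniform(0.1, 5.0, n),    # loading delay
])
# Synthetic 3-level rating (0=bad, 1=ok, 2=good), loosely driven by bitrate.
y = np.digitize(X[:, 0] + rng.normal(0, 150, n), [600, 1100])

model = LogisticRegression(max_iter=1000)  # multinomial over the 3 classes
model.fit(X, y)
probs = model.predict_proba(X[:5])  # per-class probabilities per session
print(probs.shape)
```

A fitted model of this shape can then be inverted in the usual way: given measured session parameters, the predicted class probabilities serve as the estimated distribution over subjective ratings.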
Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images
Computer graphics images (CGIs) are artificially generated by computer programs and are widely viewed in various scenarios, such as games, streaming media, etc. In practice, the quality of CGIs consistently suffers from poor rendering during production and from inevitable compression artifacts during transmission in multimedia applications. However, few works have been dedicated to the challenge of computer graphics image quality assessment (CGIQA). Most image quality assessment (IQA) metrics are developed for natural scene images (NSIs) and validated on databases consisting of NSIs with synthetic distortions, which makes them unsuitable for in-the-wild CGIs. To bridge the gap between evaluating the quality of NSIs and CGIs, we construct a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k) and carry out a subjective experiment in a well-controlled laboratory environment to obtain accurate perceptual ratings of the CGIs. Then, we propose an effective deep learning-based no-reference (NR) IQA model that utilizes a multi-stage feature fusion strategy and a multi-stage channel attention mechanism. The main motivation of the proposed model is to make full use of inter-channel information from low-level to high-level stages, since CGIs exhibit apparent patterns as well as rich interactive semantic content. Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database and on other CGIQA-related databases. The database and the code will be released to facilitate further research.
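The model described above can be sketched in PyTorch as follows: features from several backbone stages each pass through channel attention, are pooled, concatenated, and regressed to a quality score. The layer sizes and the squeeze-and-excitation form of the attention are illustrative assumptions; the paper's exact architecture may differ.

```python
# Sketch of an NR-IQA head with multi-stage feature fusion and per-stage
# channel attention. Backbone and sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze -> (B, C)
        return x * w[:, :, None, None]          # reweight channels

class MultiStageNRIQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Three toy "stages" standing in for a real backbone's stages.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.att = nn.ModuleList(ChannelAttention(c) for c in (16, 32, 64))
        self.head = nn.Linear(16 + 32 + 64, 1)  # fused features -> score

    def forward(self, x):
        feats = []
        for stage, att in zip((self.stage1, self.stage2, self.stage3), self.att):
            x = stage(x)
            feats.append(att(x).mean(dim=(2, 3)))   # global average pool
        return self.head(torch.cat(feats, dim=1))   # predicted quality

score = MultiStageNRIQA()(torch.randn(2, 3, 64, 64))
print(tuple(score.shape))
```

Fusing low- and high-level stages is what lets a single head see both the rendering patterns (low-level) and the semantic content (high-level) that the abstract identifies as characteristic of CGIs.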
Understanding user experience of mobile video: Framework, measurement, and optimization
Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user’s interaction with the product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Due to the significance of UX in the success of mobile video (Jordan, 2002), many researchers have centered on this area, examining users’ expectations, motivations, requirements, and usage context. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for the specific mobile video service is lacking for structuring such a great number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user’s needs and desires when using the service, emphasizing the user’s overall acceptance of the service. Many QoE metrics are able to estimate the user-perceived quality or acceptability of mobile video, but they may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed broader aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX remains: adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) while meeting complex user requirements (e.g., usage purposes and personal preferences).
In this chapter, we investigate the existing important UX frameworks, compare their similarities, and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored with comprehensive literature reviews. The proposed framework may benefit user-centred design of mobile video by taking full account of UX influences, and may improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on the various aspects of UX of mobile video. In the conclusion, we suggest some open issues for future study.
Understanding the Perceived Quality of Video Predictions
The study of video prediction models is believed to be a fundamental approach
to representation learning for videos. While a plethora of generative models
for predicting the future frame pixel values given the past few frames exist,
the quantitative evaluation of the predicted frames has been found to be
extremely challenging. In this context, we study the problem of quality
assessment of predicted videos. We create the Indian Institute of Science
Predicted Videos Quality Assessment (IISc PVQA) Database consisting of 300
videos, obtained by applying different prediction models on different datasets,
and accompanying human opinion scores. We collected subjective ratings of
quality from 50 human participants for these videos. Our subjective study
reveals that human observers were highly consistent in their judgments of
quality of predicted videos. We benchmark several popularly used measures for
evaluating video prediction and show that they do not adequately correlate with
these subjective scores. We introduce two new features to effectively capture
the quality of predicted videos, motion-compensated cosine similarities of deep
features of predicted frames with past frames, and deep features extracted from
rescaled frame differences. We show that our feature design leads to
state-of-the-art quality prediction in accordance with human judgments on our IISc PVQA
Database. The database and code are publicly available on our project website:
https://nagabhushansn95.github.io/publications/2020/pvqa.htm
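The first of the two proposed features is a cosine similarity between deep features of a predicted frame and motion-compensated past frames. The sketch below illustrates only the similarity computation: the "deep features" are stand-in random vectors and the motion compensation step is omitted, so shapes and names are illustrative, not the paper's pipeline.

```python
# Sketch: cosine similarity between deep features of a predicted frame
# and features of a (motion-compensated) past frame. Features are fake.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two flattened feature tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
feat_pred = rng.normal(size=(256,))  # stand-in features, predicted frame
# A "well-predicted" frame: past-frame features are a slightly perturbed copy.
feat_past = feat_pred + rng.normal(scale=0.1, size=(256,))

sim = cosine_similarity(feat_pred, feat_past)
print(sim)  # close to 1.0 when prediction matches the compensated past frame
```

The intuition this captures is that a good prediction's features should align with the features of past frames after motion is accounted for, so low similarity signals a perceptually poor prediction.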