Testing QoE in Different 3D HDTV Technologies
Three-dimensional (3D) display technology has started flooding the consumer television market. There are a number of different systems available, each with its own marketing strategy and advertised advantages. The main goal of the experiment described in this paper is to compare these systems in terms of achievable Quality of Experience (QoE) in different situations. The display systems considered are a liquid crystal display using polarized light and passive lightweight glasses to separate the left- and right-eye images, a plasma display with time-multiplexed images and active shutter glasses, and a projection system with time-multiplexed images and active shutter glasses. As no standardized test methodology has been defined for testing stereoscopic systems, we develop our own approach to testing different aspects of QoE on different systems without reference, using semantic differential scales. We present an analysis of the scores with respect to the different phenomena under study and identify which of the tested aspects can actually reveal a difference in the performance of the considered display technologies.
No-reference bitstream-based impairment detection for high efficiency video coding
Video distribution over error-prone Internet Protocol (IP) networks results in visual impairments in the received video streams. Objective impairment detection algorithms are crucial for maintaining the high Quality of Experience (QoE) expected of IPTV distribution. Considerable research has been invested in H.264/AVC impairment detection models, and the question arises whether these become obsolete with the transition to the successor of H.264/AVC, High Efficiency Video Coding (HEVC). In this paper, we first show that impairments in HEVC-compressed sequences are more visible than in H.264/AVC-encoded sequences. We also show that an impairment detection model designed for H.264/AVC can be reused for HEVC, but that caution is advised. A more accurate model that takes content classification into account needed only slight modification to remain applicable to HEVC-compressed video content.
Understanding Cognition Across Modalities for the Assessment of Digital Resources
Drawing from theories of the cognitive process, this paper explores the transmission, retention, and transformation of information across oral, written, and digital modes of communication, and how these concepts can be used to examine the assessment of digital resource tools. The exploration of interactions across modes of communication is used to gain an understanding of the interaction between student, digital resource, and teacher. Cognitive theory is considered as a basis for the assessment of digital resource tools. Lastly, principles for the assessment of digital resource tools are presented, along with how assessment can be incorporated into educational practice to enhance learning in higher education.
Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image
Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available and image metrics cannot be applied. We devise a neural network architecture and training procedure that allow predicting the MSE, SSIM, or VGG16 image difference from the distorted image alone, while the reference is not observed. This is enabled by two insights. The first is to inject sufficiently many undistorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference to themselves; this avoids false positives. The second is to balance the learning, carefully making sure that all image errors are equally likely, avoiding false negatives. Surprisingly, we observe that the resulting no-reference metric, subjectively, can even perform better than the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications which reduce light field capture time and provide guidance for interactive depth adjustment.
Comment: 13 pages, 11 figures
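The two training insights in the abstract above can be illustrated with a toy sketch. All names and the data layout here are illustrative assumptions, not from the paper; patches are flattened lists of floats, and the "network" is left out entirely:

```python
import random

def make_training_set(distorted, references, n_clean, clean_pool, n_bins=4):
    """Toy sketch of the abstract's two ideas: (1) inject pristine patches
    with target error 0, (2) rebalance so every error magnitude is equally
    represented. Function and argument names are hypothetical."""
    # Target for each distorted patch: per-pixel MSE against its reference.
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    samples = [(d, mse(d, r)) for d, r in zip(distorted, references)]

    # Insight 1: undistorted patches have no difference to themselves,
    # giving abundant zero-error examples (avoids false positives).
    samples += [(p, 0.0) for p in random.sample(clean_pool, n_clean)]

    # Insight 2: bucket by error magnitude and resample so all error
    # levels are equally likely (avoids false negatives).
    hi = max(e for _, e in samples) or 1.0
    bins = [[] for _ in range(n_bins)]
    for s in samples:
        idx = min(int(s[1] / hi * n_bins), n_bins - 1)
        bins[idx].append(s)
    per_bin = max(len(b) for b in bins if b)
    balanced = []
    for b in bins:
        if b:
            balanced += [random.choice(b) for _ in range(per_bin)]
    return balanced
```

The returned (patch, target-error) pairs would then train a regressor that sees only the distorted patch, never the reference.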
Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression
In order to ensure an optimal quality of experience for end users during video streaming, automatic video quality assessment has become an important field of interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. Traditional approaches model the complex properties of the human visual system; more recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it produces reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
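The appeal of symbolic regression as described above is that the learned model is a readable formula over bitstream features. A heavily simplified sketch, assuming hypothetical feature names (`qp`, `bitrate`, `motion`) and replacing a full genetic-programming population with a bare (1+1) random-search loop, might look like:

```python
import random

# Hypothetical bitstream features; none of these names come from the paper.
FEATURES = ["qp", "bitrate", "motion"]

def random_expr(depth=2):
    """Grow a random expression tree over features and constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES + [round(random.uniform(0, 5), 2)])
    op = random.choice(["+", "-", "*"])
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, sample):
    """Evaluate an expression tree on one feature dict (white-box model)."""
    if isinstance(expr, tuple):
        op, a, b = expr
        x, y = evaluate(a, sample), evaluate(b, sample)
        return {"+": x + y, "-": x - y, "*": x * y}[op]
    return sample[expr] if isinstance(expr, str) else expr

def fitness(expr, data):
    """Mean squared error between model output and subjective scores."""
    return sum((evaluate(expr, s) - mos) ** 2 for s, mos in data) / len(data)

def symbolic_regression(data, generations=200):
    """Toy (1+1) search: keep the better of incumbent and challenger.
    Real GP evolves a population with crossover and mutation; this only
    sketches why the result stays human-readable."""
    best = random_expr()
    for _ in range(generations):
        cand = random_expr(depth=3)
        if fitness(cand, data) < fitness(best, data):
            best = cand
    return best
```

Because the model is an explicit tree, inspecting which features appear in it gives the kind of parameter-importance insight the abstract mentions.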
Quality criteria benchmark for hyperspectral imagery
Hyperspectral data have been of growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression, specifically, will have a role to play, provided its impact on remote sensing applications remains insignificant. To assess data quality, suitable distortion measures relevant to end-user applications are required. Quality criteria are also of major interest for the design and development of new sensors, to define their requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered: some are traditionally used in image and video coding and are adapted here to hyperspectral images; others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and the level of the degradation affecting hyperspectral data.
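Two of the classic image/video-coding criteria that such a benchmark would adapt to hyperspectral data are per-band MSE and PSNR. A minimal sketch, assuming the cube is stored as `cube[band][pixel]` (the paper's actual criteria, including hyperspectral-specific ones, are not reproduced here):

```python
import math

def band_criteria(reference, degraded):
    """Per-band MSE and PSNR for a hyperspectral cube given as
    cube[band][pixel]. A simplification: real benchmarks also use
    spectral measures and application-level (classification) impact."""
    mses, psnrs = [], []
    for ref_band, deg_band in zip(reference, degraded):
        mse = sum((r - d) ** 2 for r, d in zip(ref_band, deg_band)) / len(ref_band)
        peak = max(ref_band)  # per-band peak signal value
        mses.append(mse)
        psnrs.append(float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse))
    return mses, psnrs
```

Reporting criteria per band rather than averaged over the cube matters because lossy compression can concentrate distortion in the bands a given classification application depends on.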