35,824 research outputs found
Audiovisual preservation strategies, data models and value-chains
This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and the requirements for extending them to support audiovisual files.
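As a rough illustration of the kind of extension such metadata models may need (the field names below are assumptions for demonstration, not taken from the report), a generic preservation record can be augmented with audiovisual-specific technical fields:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: these fields are assumed, not the model defined in
# the report. A generic preservation record (identifier, format, fixity) needs
# extra technical fields to describe file-based audiovisual content adequately.

@dataclass
class PreservationRecord:
    identifier: str          # persistent identifier of the stored object
    format_name: str         # wrapper/container format, e.g. "MXF"
    checksum_sha256: str     # fixity information for integrity checks

@dataclass
class AudiovisualExtension:
    video_codec: str         # e.g. "FFV1"
    audio_codec: str         # e.g. "PCM"
    frame_rate: float        # frames per second
    audio_channels: int
    duration_seconds: float

@dataclass
class AVPreservationRecord(PreservationRecord):
    av: Optional[AudiovisualExtension] = None

record = AVPreservationRecord(
    identifier="urn:example:av-0001",
    format_name="MXF",
    checksum_sha256="<sha256-hex-digest>",
    av=AudiovisualExtension("FFV1", "PCM", 25.0, 2, 3600.0),
)
print(record.av.video_codec)  # FFV1
```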
Can small be beautiful? Assessing image resolution requirements for mobile TV
Mobile TV services are now being offered in several countries, but for cost reasons, most of these services offer material directly recoded for mobile consumption (i.e. without additional editing). The experiment reported in this paper aims to assess the image resolution and bitrate requirements for displaying this type of material on mobile devices. The study, with 128 participants, examined responses to four different image resolutions, seven video encoding bitrates, two audio bitrates and four content types. The results show that acceptability is significantly lower for images smaller than 168×126 pixels, regardless of content type. The effect is more pronounced when bandwidth is abundant, and is due to important detail being lost on the smaller screens. In contrast to previous studies, participants are more likely to rate image quality as unacceptable when the audio quality is high.
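For illustration, the factorial design described above can be enumerated as follows; only the factor counts (4 × 7 × 2 × 4 = 224 conditions) come from the abstract, while the concrete level values are assumed for demonstration:

```python
from itertools import product

# Sketch of the full factorial design from the abstract. The specific
# resolutions, bitrates and content types below are assumptions; only the
# number of levels per factor is stated in the abstract.
resolutions    = ["120x90", "168x126", "208x156", "240x180"]   # 4 levels (assumed values)
video_bitrates = [32, 64, 96, 128, 192, 256, 384]              # 7 levels, kbit/s (assumed)
audio_bitrates = [16, 32]                                      # 2 levels, kbit/s (assumed)
content_types  = ["news", "sport", "music", "animation"]       # 4 levels (assumed)

conditions = list(product(resolutions, video_bitrates, audio_bitrates, content_types))
print(len(conditions))  # 4 * 7 * 2 * 4 = 224 experimental conditions
```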
Bridge the Gap Between VQA and Human Behavior on Omnidirectional Video: A Large-Scale Dataset and a Deep Learning Model
Omnidirectional video enables spherical stimuli with the full 360°×180° viewing range. Meanwhile, only the viewport region of omnidirectional video can be seen by the observer through head movement (HM), and an even smaller region within the viewport can be clearly perceived through eye movement (EM). Thus, the subjective quality of omnidirectional video may be correlated with the HM and EM of human behavior. To fill the gap between subjective quality and human behavior, this paper proposes a large-scale visual quality assessment (VQA) dataset of omnidirectional video, called VQA-OV, which collects 60 reference sequences and 540 impaired sequences. Our VQA-OV dataset provides not only the subjective quality scores of sequences but also the HM and EM data of subjects. By mining our dataset, we find that the subjective quality of omnidirectional video is indeed related to HM and EM. Hence, we develop a deep learning model, which embeds HM and EM, for objective VQA on omnidirectional video. Experimental results show that our model significantly improves the state-of-the-art performance of VQA on omnidirectional video.
Comment: Accepted by ACM MM 2018
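As a hedged illustration of how HM and EM data might be embedded into an objective VQA model (this is not the authors' architecture; the layer sizes, tensor shapes and fusion-by-weighting scheme are assumptions), a minimal sketch:

```python
import torch
import torch.nn as nn

# Minimal sketch only: NOT the paper's model. It shows the general idea of
# weighting video features with an attention map derived from subjects' head-
# movement (HM) and eye-movement (EM) data before regressing a quality score.

class ToyOmniVQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-frame feature extractor over the equirectangular frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Regress a single quality score from pooled, attention-weighted features.
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, frames, hm_em_map):
        # frames:    (B, 3, H, W)  impaired video frame(s)
        # hm_em_map: (B, 1, H, W)  attention map built from HM/EM recordings
        f = self.features(frames)                               # (B, 32, H/4, W/4)
        att = nn.functional.interpolate(hm_em_map, size=f.shape[-2:],
                                        mode="bilinear", align_corners=False)
        weighted = f * att                                       # emphasise viewed/fixated regions
        pooled = weighted.mean(dim=(-2, -1))                     # (B, 32) global average pool
        return self.head(pooled).squeeze(-1)                     # predicted quality score per clip

model = ToyOmniVQA()
score = model(torch.rand(2, 3, 128, 256), torch.rand(2, 1, 128, 256))
print(score.shape)  # torch.Size([2])
```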