
    Study of saliency in objective video quality assessment

    Reliably predicting video quality as perceived by humans remains challenging and is of high practical relevance. A significant research trend is to investigate visual saliency and its implications for video quality assessment. Fundamental problems regarding how to acquire reliable eye-tracking data for video quality research and how saliency should be incorporated into objective video quality metrics (VQMs) remain largely unsolved. In this paper, we propose a refined methodology for reliably collecting eye-tracking data, which essentially eliminates the bias induced by each subject viewing multiple variations of the same scene, as happens in a conventional experiment. We performed a large-scale eye-tracking experiment involving 160 human observers and 160 video stimuli distorted with different distortion types at various degradation levels. The measured saliency was integrated into several of the best-known VQMs in the literature. With the reliability of the saliency data assured, we thoroughly assessed how much saliency can improve the performance of VQMs, and devised a novel approach for the optimal use of saliency in VQMs. We also evaluated to what extent state-of-the-art computational saliency models can improve VQMs compared to the improvement achieved with "ground truth" eye-tracking data. The eye-tracking database is made publicly available to the research community.
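
    The abstract does not spell out how the measured saliency is folded into a VQM. A common approach, and a rough illustration only (not necessarily the authors' exact formulation), is to weight a metric's local quality map by a normalized saliency map before pooling it into a single score. The sketch below assumes `quality_map` is a per-pixel quality map (e.g., an SSIM map) and `saliency_map` is an eye-tracking or model-predicted saliency map of the same shape; both names are illustrative.

```python
import numpy as np

def saliency_weighted_pooling(quality_map, saliency_map, eps=1e-8):
    """Pool a per-pixel quality map into one score, weighting each location
    by its (eye-tracking or model-predicted) saliency."""
    quality_map = np.asarray(quality_map, dtype=np.float64)
    saliency_map = np.asarray(saliency_map, dtype=np.float64)
    # Normalize saliency so the weights form a probability distribution.
    weights = saliency_map / (saliency_map.sum() + eps)
    return float((weights * quality_map).sum())

# Sanity check: uniform saliency reduces to plain mean pooling.
q = np.random.rand(64, 64)
s = np.ones((64, 64))
assert abs(saliency_weighted_pooling(q, s) - q.mean()) < 1e-9
```

    With eye-tracking data, the weight map would typically be a fixation-density map per frame, and frame-level scores would then be averaged (or otherwise pooled) over time.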

    Bridge the Gap Between VQA and Human Behavior on Omnidirectional Video: A Large-Scale Dataset and a Deep Learning Model

    Omnidirectional video provides spherical stimuli with a 360° × 180° viewing range. However, only the viewport region of an omnidirectional video can be seen by the observer through head movement (HM), and an even smaller region within the viewport is clearly perceived through eye movement (EM). Thus, the subjective quality of omnidirectional video may be correlated with the HM and EM behavior of viewers. To fill the gap between subjective quality and human behavior, this paper proposes a large-scale visual quality assessment (VQA) dataset of omnidirectional video, called VQA-OV, which collects 60 reference sequences and 540 impaired sequences. Our VQA-OV dataset provides not only the subjective quality scores of the sequences but also the HM and EM data of the subjects. By mining our dataset, we find that the subjective quality of omnidirectional video is indeed related to HM and EM. Hence, we develop a deep learning model, which embeds HM and EM, for objective VQA on omnidirectional video. Experimental results show that our model significantly improves the state-of-the-art performance of VQA on omnidirectional video. Comment: Accepted by ACM MM 201
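
    The paper's deep model embeds HM and EM directly; as a simplified, hypothetical illustration of the underlying idea, head-movement fixations given as (longitude, latitude) on the sphere can be turned into a weight map on the equirectangular frame and used to pool a per-pixel quality map. The function names below are illustrative, and the flat-image Gaussian footprint ignores sphere-to-plane distortion that a real model would account for; this is a sketch under those assumptions, not the paper's method.

```python
import numpy as np

def hm_weight_map(fixations_deg, height, width, sigma_px=30.0):
    """Build a per-pixel weight map on an equirectangular frame from
    head-movement fixations given as (longitude, latitude) in degrees.
    Each fixation contributes an isotropic Gaussian in image coordinates."""
    ys, xs = np.mgrid[0:height, 0:width]
    w = np.zeros((height, width), dtype=np.float64)
    for lon, lat in fixations_deg:
        cx = (lon + 180.0) / 360.0 * width   # longitude -> column
        cy = (90.0 - lat) / 180.0 * height   # latitude  -> row
        w += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma_px ** 2))
    return w / (w.sum() + 1e-8)

def hm_weighted_score(quality_map, fixations_deg):
    """Pool a per-pixel quality map with head-movement-derived weights."""
    weights = hm_weight_map(fixations_deg, *quality_map.shape)
    return float((weights * quality_map).sum())

# Example: two fixations near the front of the sphere on a 180x360 frame.
q = np.random.rand(180, 360)
score = hm_weighted_score(q, [(0.0, 0.0), (30.0, 10.0)])
```

    An EM-based variant would simply use gaze points within the viewport instead of head orientations, concentrating the weights on an even smaller region.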