
    Audiovisual focus of attention and its application to Ultra High Definition video compression

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, where compression quality varies across the image according to regions of interest, is more efficient than conventional coding, where all regions are compressed uniformly. However, widespread use of foveated compression has been held back by two conflicting requirements: the complexity and the efficiency of FoA detection algorithms. One way around this is to exploit as much information from the scene as possible. Since most video sequences have an associated audio track, and since the audio is in many cases correlated with the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while keeping its complexity low. This paper presents a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection are then used for foveated coding and compression. The approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high definition and ultra high definition audiovisual sequences is explored, and the gain in compression efficiency is analyzed.
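    As a rough illustration of the correlation-of-dynamics idea described in this abstract, the sketch below scores coarse image regions by how well their motion-energy time series correlates with the audio-energy time series. The 4x4 region grid, the Pearson correlation, and all inputs are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch: locate the audiovisual focus of attention as the region
# whose motion dynamics correlate best with the audio dynamics (assumption).
import numpy as np

def audiovisual_foa(frame_motion, audio_energy):
    """frame_motion : (T, H, W) per-pixel motion magnitude over T frames
    audio_energy : (T,) short-time audio energy aligned to the frames
    Returns a (4, 4) grid of correlation scores; the maximum marks the FoA."""
    T, H, W = frame_motion.shape
    gh, gw = 4, 4                        # coarse region grid (assumption)
    scores = np.zeros((gh, gw))
    a = audio_energy - audio_energy.mean()
    for i in range(gh):
        for j in range(gw):
            block = frame_motion[:, i*H//gh:(i+1)*H//gh, j*W//gw:(j+1)*W//gw]
            v = block.reshape(T, -1).mean(axis=1)   # region motion energy
            v = v - v.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(v)
            scores[i, j] = (a @ v) / denom if denom > 0 else 0.0
    return scores

# Toy usage: the one region whose motion tracks the audio should win.
rng = np.random.default_rng(0)
audio = np.abs(np.sin(np.linspace(0, 8, 64)))        # (T,)
motion = rng.random((64, 32, 32)) * 0.1              # background noise
motion[:, 0:8, 0:8] += audio[:, None, None]          # audio-coherent region
print(np.unravel_index(audiovisual_foa(motion, audio).argmax(), (4, 4)))
```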

    Subjective quality evaluation of foveated video coding using audio-visual focus of attention

    This paper presents a foveated coding method using audio-visual focus of attention and evaluates it through extensive subjective experiments on both standard definition and high definition sequences. Treating a sound-emitting region as the location that draws human attention, the method applies varying quality levels within an image frame according to each pixel's distance from the identified sound source. Two experiments demonstrate the efficiency of the method. Experiment 1 examines its validity and effectiveness in comparison with constant quality coding under high quality conditions. In Experiment 2, the method is compared with fixed bit rate coding under low quality conditions where coding artifacts are noticeable. The results demonstrate that the foveated coding method provides considerable coding gain without significant quality degradation, although the uneven distribution of coding artifacts (blockiness) it produces is often less preferred than a uniform distribution of artifacts. Additional interesting findings are also discussed, such as the content dependence of the method's performance, the memory effect across multiple viewings, and differences in quality perception across frame sizes.
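    A minimal sketch of the distance-based quality assignment this abstract describes, assuming a known sound-source position. The linear QP ramp, the base QP, and the clamping to the H.265-style 0-51 range are illustrative assumptions, not the paper's exact mapping.

```python
# Minimal sketch: per-pixel quantization-parameter (QP) map that grows
# with distance from the sound source (linear ramp is an assumption).
import numpy as np

def foveated_qp_map(height, width, source_xy, base_qp=22, max_offset=10):
    """Return a (height, width) integer QP map: base_qp at the source,
    base_qp + max_offset at the farthest pixel."""
    ys, xs = np.mgrid[0:height, 0:width]
    sx, sy = source_xy
    dist = np.hypot(xs - sx, ys - sy)
    norm = dist / dist.max()             # 0 at the source, 1 at the far corner
    return np.clip(base_qp + norm * max_offset, 0, 51).astype(int)

qp = foveated_qp_map(1080, 1920, source_xy=(960, 540))
print(qp[540, 960], qp[0, 0])            # low QP at the source, higher far away
```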

    Audio-driven Nonlinear Video Diffusion

    In this paper we present a novel nonlinear video diffusion approach based on the fusion of information in the audio and video channels. Both modalities are efficiently combined into a diffusion coefficient that embodies the basic assumption in this domain, i.e. that related events in the audio and video channels occur at approximately the same time. The proposed diffusion coefficient thus depends on an estimate of the synchrony between sounds and video motion. As a result, information in video regions whose motion is not coherent with the soundtrack is attenuated and the sound sources are automatically highlighted. Several tests on challenging real-world sequences containing significant auditory and/or visual distractors demonstrate that our approach is able to emphasize regions that are related to the soundtrack. In addition, we propose an application to the extraction of audio-related video regions by unsupervised segmentation in order to illustrate the capabilities of our method. To the best of our knowledge, this is the first nonlinear video diffusion approach that integrates information from the audio modality.
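    A minimal sketch of audio-modulated nonlinear diffusion on one frame, assuming a precomputed per-pixel synchrony map in [0, 1]. The explicit Perona-Malik-style update and all parameter values are illustrative stand-ins for the paper's scheme, not its actual formulation.

```python
# Minimal sketch: diffuse (blur) regions with low audio-video synchrony,
# leave highly synchronous regions untouched (assumed scheme).
import numpy as np

def audio_driven_diffusion(frame, synchrony, n_iter=20, kappa=0.1, dt=0.2):
    """frame     : (H, W) grayscale image in [0, 1]
    synchrony : (H, W) estimated audio-video synchrony in [0, 1]"""
    u = frame.astype(float).copy()
    for _ in range(n_iter):
        # forward differences toward each 4-neighbour
        # (np.roll gives periodic boundaries, acceptable for a sketch)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # edge-stopping term scaled by (1 - synchrony):
        # asynchronous regions diffuse, synchronous ones are preserved
        def g(d):
            return np.exp(-(d / kappa) ** 2) * (1.0 - synchrony)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy usage: the synchronous patch keeps its detail, the rest blurs away.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
sync = np.zeros((64, 64))
sync[16:32, 16:32] = 1.0
out = audio_driven_diffusion(img, sync)
print(out[20, 20] == img[20, 20], np.std(out[40:, 40:]) < np.std(img[40:, 40:]))
```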

    Efficient Video Coding in H.264/AVC by using Audio-Visual Information

    This paper proposes an efficient video coding method which utilizes audio-visual information, based on the observation that sound-emitting regions in a video sequence attract the observer's attention. The regions responsible for the sound are identified by an audio-visual source localization algorithm. The result is then used to encode different regions of the scene at different quality levels, such that a region far from the sound source is coded at a lower quality than the sound-emitting regions. This is implemented by assigning different quantization parameter values to different regions in H.264/AVC. Experimental results demonstrate the effectiveness of the proposed approach.
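    A minimal sketch of region-wise QP assignment at macroblock granularity, assuming the sound-source location comes from a separate localization step. The two-level near/far split, the radius, and the QP values are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch: one QP per 16x16 macroblock, lower QP (better quality)
# near the localized sound source, higher QP elsewhere (assumed split).
import numpy as np

def macroblock_qps(height, width, source_xy, qp_near=26, qp_far=38,
                   radius=200, mb=16):
    """Return a (height//mb, width//mb) QP array: qp_near inside the
    source radius, qp_far outside it."""
    mb_h, mb_w = height // mb, width // mb
    ys, xs = np.mgrid[0:mb_h, 0:mb_w]
    # distance from each macroblock centre to the sound source
    cx = xs * mb + mb / 2
    cy = ys * mb + mb / 2
    dist = np.hypot(cx - source_xy[0], cy - source_xy[1])
    return np.where(dist <= radius, qp_near, qp_far)

qps = macroblock_qps(720, 1280, source_xy=(640, 360))
print(qps.shape, qps.min(), qps.max())   # (45, 80) 26 38
```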