4 research outputs found

    D-SAV360: A Dataset of Gaze Scanpaths on 360° Ambisonic Videos

    Understanding human visual behavior within virtual reality environments is crucial to fully leverage their potential. While previous research has provided rich visual data from human observers, existing gaze datasets often lack multimodal stimuli. Moreover, no dataset has yet gathered eye gaze trajectories (i.e., scanpaths) for dynamic content with directional ambisonic sound, which is a critical aspect of human sound perception. To address this gap, we introduce D-SAV360, a dataset of 4,609 head and eye scanpaths for 360° videos with first-order ambisonics. This dataset enables a more comprehensive study of how multimodal stimuli shape visual behavior in virtual reality environments. We analyze the scanpaths collected from 87 participants viewing 85 different videos and show that factors such as viewing mode, content type, and gender significantly affect eye movement statistics. We demonstrate the potential of D-SAV360 as a benchmarking resource for state-of-the-art attention prediction models and discuss its possible applications in further research. By providing a comprehensive dataset of eye movement data for dynamic, multimodal virtual environments, our work can facilitate future investigations of visual behavior and attention in virtual reality.

    Binaural Reproduction of Higher Order Ambisonics - A Real-Time Implementation and Perceptual Improvements

    During the last decade, Higher Order Ambisonics has become a popular way of capturing and reproducing sound fields. It can be combined with the theory of spherical microphone arrays to record sound fields, and this three-dimensional audio format can be reproduced over loudspeakers or headphones and even rotated around the listener. A drawback is that near-perfect reproduction is only possible inside a sphere of radius r given by kr < N, where N is the Ambisonics order and k is the wavenumber. In this thesis, the theory of spherical harmonics and Higher Order Ambisonics is reviewed and expanded, serving as the foundation for a real-time system that was implemented. This system can record signals from a commercial spherical microphone array, convert them to the Higher Order Ambisonics format, and reproduce the sound field through headphones. To compensate for head motion, a head-tracking device is used. The real-time system operates with a latency of around 95 milliseconds between head motion and the consequent sound-field rotation. Furthermore, two new methods for improving the headphone reproduction were assessed. These methods do not need to be applied in real time, so no further system resources are used. Simulations of headphone reproduction with Higher Order Ambisonics show that both methods yield quantitative improvements in binaural cues such as the interaural level difference, spectral cues, and spectral coloration of the sound field. Median error values are reduced by as much as 50% between 4 and 7 kHz. The findings indicate that Higher Order Ambisonics reproduction over headphones can be improved at frequencies above the limit frequency implied by kr < N, but these findings need to be confirmed by subjective assessments, such as listening tests. The work conducted in this thesis also provides a comprehensive basis for further development of a real-time three-dimensional audio reproduction system.
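    To illustrate the kr < N criterion mentioned in the abstract, the following minimal sketch computes the approximate upper frequency of accurate reproduction for a listening region of a given radius, using k = 2*pi*f / c. The speed of sound and the 8.75 cm head radius are assumed illustrative values, not figures taken from the thesis.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed value at roughly 20 degrees C

def ambisonics_limit_frequency(order: int, radius_m: float) -> float:
    """Frequency up to which an order-N Ambisonics sound field is accurately
    reproduced inside a sphere of the given radius, from kr < N with
    k = 2*pi*f / c, i.e. f < N * c / (2 * pi * r)."""
    return order * SPEED_OF_SOUND / (2 * math.pi * radius_m)

# Example: a listening region roughly the size of a head (~8.75 cm radius,
# an assumed value for illustration only).
for N in (1, 3, 5, 7):
    f_lim = ambisonics_limit_frequency(N, 0.0875)
    print(f"Order {N}: accurate reproduction up to ~{f_lim:.0f} Hz")
```

    The sketch shows why higher orders are needed for broadband accuracy: at first order the limit for a head-sized region falls in the low hundreds of hertz, and it grows roughly linearly with the Ambisonics order, which is consistent with the thesis investigating improvements above this limit frequency.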