603 research outputs found
Shaping the auditory peripersonal space with motor planning in immersive virtual reality
Immersive audio technologies require personalized binaural synthesis through headphones to provide perceptually plausible virtual and augmented reality (VR/AR) simulations. We introduce and apply, for the first time in VR contexts, the quantitative measure called premotor reaction time (pmRT) for characterizing sonic interactions between humans and the technology through motor planning. In the proposed basic virtual acoustic scenario, listeners are asked to react to a virtual sound approaching from different directions and stopping at different distances within their peripersonal space (PPS). PPS is highly sensitive to embodied and environmentally situated interactions, anticipating the motor system activation for a prompt preparation for action. Since immersive VR applications benefit from spatial interactions, modeling the PPS around the listeners is crucial to reveal individual behaviors and performances. Our methodology, centered around the pmRT, provides a compact description and approximation of the spatiotemporal PPS processing and boundaries around the head by replicating several well-known neurophysiological phenomena related to PPS, such as auditory asymmetry, front/back calibration and confusion, and ellipsoidal action fields.
The Plausibility of a String Quartet Performance in Virtual Reality
We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a
virtual environment that depicts the performance of a string quartet. ‘Plausibility’ refers to the component of presence that is the
illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians
ignored the participant, the musicians sometimes looked towards and followed the participant’s movements), Sound Spatialization
(Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived,
reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind
corresponding to the outside scene). We adopted a methodology based on color-matching theory: 20 participants first
assessed their feeling of plausibility in the environment with each of the four features at its highest setting. Then, five times,
participants started from a low setting on all features and made transitions from one system configuration to another until
they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along
with the probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most
important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and
then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without
questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
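The matching methodology above tallies participants' moves between system configurations into a Markov transition matrix. The estimation step can be sketched as follows (the configuration labels and the list of observed transitions are hypothetical illustrations, not data from the study):

```python
from collections import defaultdict

def transition_matrix(transitions, states):
    """Estimate a row-stochastic Markov transition matrix from
    observed (from_state, to_state) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in transitions:
        counts[a][b] += 1
    matrix = {}
    for s in states:
        total = sum(counts[s].values())
        # Rows with no observed outgoing transitions stay all-zero.
        matrix[s] = {t: (counts[s][t] / total if total else 0.0)
                     for t in states}
    return matrix

# Hypothetical observations: each configuration codes two features
# (e.g. Gaze, Environment) as low/high; participants move between
# configurations until they match their remembered plausibility.
obs = [("LL", "LH"), ("LL", "HL"), ("LH", "HH"), ("LL", "LH"), ("HL", "HH")]
M = transition_matrix(obs, ["LL", "LH", "HL", "HH"])
print(M["LL"])  # from "LL": 2/3 to "LH", 1/3 to "HL"
```

Each row is normalized by its outgoing-transition count, so rows sum to 1 wherever any transition out of that configuration was observed.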
Measuring the Behavioral Response to Spatial Audio within a Multi-Modal Virtual Reality Environment in Children with Autism Spectrum Disorder
Virtual Reality (VR) has been an active area of research in the development of interactive interventions for individuals with autism spectrum disorder (ASD) for over two decades. These immersive environments create a safe platform in which therapy can address the core symptoms associated with this condition. Recent advancements in spatial audio rendering techniques for VR now allow for the creation of realistic audio environments that accurately match their visual counterparts. However, reported auditory processing impairments associated with autism may affect how an individual interacts with their virtual therapy application. This study aims to investigate whether these difficulties in processing audio information directly impact how individuals with autism interact with a presented virtual spatial audio environment. Two experiments were conducted with participants diagnosed with ASD (n = 29) that compared: (1) behavioral reactions to spatialized and non-spatialized audio; and (2) the effect of background noise on participant interaction. Participants listening to binaural-based spatial audio showed higher spatial attention towards target auditory events. In addition, the amount of competing background audio was found to influence spatial attention and interaction. These findings suggest that despite associated sensory processing difficulties, those with ASD can correctly decode the auditory cues simulated in current spatial audio rendering techniques.
Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019
Spatial auditory display for acoustics and music collections
This thesis explores how audio can be better incorporated into how people access
information and does so by developing approaches for creating three-dimensional audio
environments with low processing demands. This is done by investigating three research
questions.
Mobile applications have processor and memory requirements that restrict the
number of concurrent static or moving sound sources that can be rendered with binaural
audio. Is there a more efficient approach that is as perceptually accurate as the traditional
method? This thesis concludes that virtual Ambisonics is an efficient and accurate means
to render a binaural auditory display consisting of noise signals placed on the horizontal
plane without head tracking. Virtual Ambisonics is then more efficient than convolution
of HRTFs if more than two sound sources are concurrently rendered or if movement of
the sources or head tracking is implemented.
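The efficiency claim can be illustrated with a rough per-sample multiply count (the 256-tap filter length, first-order horizontal encoding, and four virtual loudspeakers are illustrative assumptions, not figures from the thesis): direct rendering convolves every source with an HRTF pair, so cost grows linearly with the number of sources, whereas virtual Ambisonics pays a small per-source encoding cost plus a fixed binaural cost for the virtual-loudspeaker feeds.

```python
def direct_hrtf_cost(n_sources, fir_len=256):
    """Per-sample multiplies: each source convolved with an HRTF pair
    (left + right ear)."""
    return n_sources * 2 * fir_len

def virtual_ambisonics_cost(n_sources, order=1, n_speakers=4, fir_len=256):
    """Per-sample multiplies: encode each source into the horizontal
    Ambisonics channels, decode to virtual loudspeakers, then apply a
    fixed set of HRTF convolutions."""
    n_ch = 2 * order + 1                 # horizontal-only channels (W, X, Y at order 1)
    encode = n_sources * n_ch            # per-source encoding gains
    decode = n_ch * n_speakers           # fixed decoding matrix
    binaural = n_speakers * 2 * fir_len  # fixed virtual-loudspeaker HRTF convolutions
    return encode + decode + binaural

for n in (1, 2, 4, 8):
    print(n, direct_hrtf_cost(n), virtual_ambisonics_cost(n))
```

Because only the encoding term depends on the source count, adding sources is nearly free, and rotating the sound field for head tracking becomes a cheap matrix operation on the Ambisonics channels rather than a per-source filter swap.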
Complex acoustics models require significant amounts of memory and processing. If
the memory and processor loads for a model are too large for a particular device, that
model cannot be interactive in real-time. What steps can be taken to allow a complex
room model to be interactive by using less memory and decreasing the computational
load? This thesis presents a new reverberation model based on hybrid reverberation
which uses a collection of B-format IRs. A new metric for determining the mixing
time of a room is developed, and interpolation between early reflections is investigated.
Though hybrid reverberation typically uses a recursive filter such as an FDN for the late
reverberation, an average late reverberation tail is instead synthesised for convolution
reverberation.
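The synthesised-tail idea can be sketched as exponentially decaying noise: a 60 dB decay over the reverberation time RT60 fixes the envelope's decay constant, and Gaussian noise under that envelope yields a tail suitable for convolution (the RT60, sample rate, and length are illustrative; an average tail derived from a set of measured B-format IRs would replace this simple exponential model):

```python
import math, random

def synth_late_tail(rt60=1.2, fs=48000, length_s=1.5, seed=0):
    """Synthesise a late-reverberation tail as exponentially decaying
    Gaussian noise. RT60 is the time for the envelope to fall by 60 dB."""
    rng = random.Random(seed)
    # 60 dB decay => envelope exp(-t * ln(1000) / rt60), since 10^(60/20) = 1000.
    k = math.log(1000.0) / rt60
    n = int(length_s * fs)
    return [rng.gauss(0.0, 1.0) * math.exp(-k * i / fs) for i in range(n)]

tail = synth_late_tail()
# The envelope at t = rt60 sits a factor of 1000 (60 dB) below t = 0.
```

Convolving the dry signal with such a tail replaces the recursive late-reverberation filter, trading feedback state for a single convolution.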
Commercial interfaces for music search and discovery use little aural information
even though the information being sought is audio. How can audio be used in
interfaces for music search and discovery? This thesis looks at 20 interfaces and
determines that several themes emerge from past interfaces. These include using a two
or three-dimensional space to explore a music collection, allowing concurrent playback of
multiple sources, and tools such as auras to control how much information is presented. A
new interface, the amblr, is developed because virtual two-dimensional spaces populated
by music have been a common approach, but not yet a perfected one. The amblr is also
interpreted as an art installation which was visited by approximately 1000 people over 5
days. The installation maps the virtual space created by the amblr to a physical space.
Sonic Interactions in Virtual Environments
This open access book tackles the design of 3D spatial interactions from an audio-centered, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are: immersive audio, the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies; sonic interaction, the human-computer interplay through auditory feedback in VEs; and VR systems, which naturally support multimodal integration, impacting different application domains. Sonic Interactions in Virtual Environments features state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of the experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, the humanities, and beyond. Their mission is to shape an emerging new field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.
"When the Elephant Trumps": A Comparative Study on Spatial Audio for Orientation in 360◦ Videos
Orientation is an emerging issue in cinematic Virtual Reality
(VR), as viewers may fail in locating points of interest. Recent
strategies to tackle this research problem have investigated
the role of cues, specifically diegetic sound effects. In this
paper, we examine the use of sound spatialization for orientation purposes, namely by studying different spatialization
conditions ("none", "partial", and "full" spatial manipulation)
of multitrack soundtracks. We performed a between-subject
mixed-methods study with 36 participants, aided by Cue
Control, a tool we developed for dynamic spatial sound editing and data collection/analysis. Based on existing literature
on orientation cues in 360◦ videos and theories on human listening,
we discuss situations in which the spatialization was more effective (namely, "full" spatial manipulation both when using
only music and when combining music and diegetic effects),
and how this can be used by creators of 360◦ videos.