Training spatial hearing skills in virtual reality through a sound-reaching task

Abstract

Sound localization is crucial for interacting with the surrounding world. This ability can be learned over time and improved by multisensory and motor cues. In the last decade, studying the contributions of these cues has been facilitated by the increased adoption of virtual reality (VR). In a recent study, sound localization was trained through a task in which visual stimuli were rendered through a VR headset and auditory stimuli through a loudspeaker moved around by the experimenter. Physically reaching to sound sources reduced sound localization errors faster and to a greater extent than naming the sources’ positions. Interestingly, the training was also effective for hearing-impaired people. Yet, this approach is unfeasible for rehabilitation at home. Fully virtual approaches, in which acoustic stimuli are simulated and rendered over headphones, have been used to study spatial hearing learning processes. In the present study, we investigate whether the effects of our reaching-based training can also be observed with such simulations, and we show that the improvement is comparable between the full-VR and blended VR conditions. This validates training paradigms that rely entirely on portable equipment and do not require an external operator, opening new perspectives in the field of remote rehabilitation.
