
    Influence of vision on short-term sound localization training with non-individualized HRTF

    Previous studies have demonstrated that humans can adapt to new HRTFs, whether non-individualized or altered, within a short time period. While natural adaptation through sound exposure takes several weeks [1], training programs have been used to accelerate adaptation and improve sound localization performance within a few days (see [2] for a review). Most of these training programs rely on audio-visual positional or response feedback learning [3] (participants correct their answer after seeing the target position), or on active learning, for example through audio-proprioceptive manipulations [4] (blindfolded participants actively explore the sphere around them by playing a miniature sonified version of the hot-and-cold game). While all these training programs rest on a bimodal coupling (audio-vision [3] or audio-proprioception [4]), they are rarely based on a trimodal one. Thus, although vision is not necessary for adaptation [4], and audio-visual training can even be less efficient than other methods [1,2], the role of vision in short-term auditory localization training remains unclear, especially when action and proprioception are already involved. Our study compares two versions of active training: an audio-proprioceptive one and an audio-visuo-proprioceptive one. We hypothesize that combining all three modalities leads to better adaptation, yielding better performance and a longer-lasting effect.

    The experiment is developed in virtual reality using an HTC Vive for head and hand tracking. 3D audio spatialization is obtained through Steam Audio's non-individualized built-in HRTF. When applicable, 3D visual information is displayed directly in the Vive headset. A total of 36 participants, equally distributed across 3 groups (G1 to G3), take part in this between-subjects design study. G1 is a control group receiving no training session, while the other 2 groups receive a 12-minute training session on each of 3 consecutive days. All participants also perform 5 sound localization tests (no feedback, hand-pointing technique, 2 repetitions × 33 positions, frontal space): one before the experiment, one after each training session, and a last one 1 week after the first day in order to evaluate the remaining effect. G2 receives audio-proprioceptive training as described in [4]: participants freely scan the space around them with a hand-held Vive controller to find an animal sound hidden around them. The controller-to-target angular distance is sonified and spatialized at the controller position; no visual information is provided. G3 performs the same task as G2, but a visual representation of a sphere is also displayed at the hand position during all training sessions (audio-visuo-proprioceptive condition).

    We measure the angular error in azimuth and elevation during the localization tests. Performance is also analyzed in the interaural polar coordinate system to discuss front/back and up/down confusion errors. Data from the training sessions are logged (total number of animals found and detailed sequence of hand positions) to evaluate how training and vision influence scanning strategy. The experimental phase is currently under way (10 participants have completed it so far) and extends until the end of April. Complete results will be available for the final version of the paper in June.

    References
    [1] Carlile, S., and Blackman, T. Relearning auditory spectral cues for locations inside and outside the visual field. J. Assoc. Res. Otolaryngol. 15, 249–263 (2014).
    [2] Strelnikov, K., Rosito, M., and Barone, P. Effect of audiovisual training on monaural spatial hearing in horizontal plane. PLoS ONE 6, e18344 (2011).
    [3] Mendonça, C. A review on auditory space adaptation to altered head-related cues. Front. Neurosci. 8, 219 (2014).
    [4] Parseihian, G., and Katz, B.F.G. Rapid head-related transfer function adaptation using a virtual auditory environment. J. Acoust. Soc. Am. 131, 2948–2957 (2012).
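
    The protocol relies on two geometric computations: the controller-to-target angular distance that drives the sonification, and the conversion of localization responses into interaural polar coordinates to separate front/back and up/down confusions. The sketch below illustrates both under an assumed axis convention (x forward, y left, z up); it is not the authors' analysis code, and the example vectors are hypothetical.

```python
import numpy as np

def angular_distance(u, v):
    """Great-circle angle (in degrees) between two direction vectors,
    e.g. the controller direction and the hidden-target direction."""
    u = np.asarray(u, dtype=float) / np.linalg.norm(u)
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def to_interaural_polar(direction):
    """Convert a Cartesian direction (x forward, y left, z up) into
    interaural polar coordinates: a lateral angle (left/right, in
    [-90, 90] deg) and a polar angle around the interaural axis
    (0 = front, 90 = above, 180 = behind), so that front/back and
    up/down confusions show up as polar-angle errors."""
    x, y, z = np.asarray(direction, dtype=float) / np.linalg.norm(direction)
    lateral = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))
    polar = np.degrees(np.arctan2(z, x))
    if polar < -90.0:
        polar += 360.0  # keep the polar angle in [-90, 270)
    return lateral, polar

# Hypothetical front/back confusion: target up-and-behind, response up-and-in-front.
target = (-0.5, 0.0, 0.87)
response = (0.5, 0.0, 0.87)
print(angular_distance(target, response))   # ~60 deg overall error
print(to_interaural_polar(target))          # (0.0, ~120 deg)
print(to_interaural_polar(response))        # (0.0, ~60 deg)
```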

    Feasibility and relevance of an immersive virtual reality cancellation task assessing far space in unilateral spatial neglect

    Unilateral spatial neglect (USN) is a highly prevalent neuropsychological syndrome. Its assessment in clinical practice, based mainly on paper-and-pencil tests, is limited in that only near space, called peripersonal, is assessed. USN is, however, a multicomponent syndrome that can also affect far space, called extrapersonal. This space is not covered by current clinical assessment, although it can be more impaired than peripersonal space. Immersive virtual reality (VR) makes it possible to develop tasks in far space to assess this heterogeneity. This study aimed to test the feasibility and relevance of an immersive VR task assessing far space. A cancellation task, the Bells test, was used in its original paper-and-pencil version and was also adapted into a far-space immersive VR version. Ten patients with left USN and sixteen age-matched healthy participants were included. A single-case method was used to investigate each patient's performance. Five patients showed very similar results on the two versions, whereas the other five exhibited a dissociation, with more severe impairment in the VR version. Three of these five patients differed significantly from the healthy participants only on the VR version. As USN in far space is not brought to light by paper-and-pencil tests, immersive VR appears to be a promising tool for detecting USN affecting this space.
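
    A common choice for such single-case comparisons of one patient against a small control group is the Crawford-Howell modified t-test, which treats the control sample's mean and SD as estimates rather than population parameters. The abstract does not name the specific statistic used, so the sketch below is only an illustration of that approach, and the scores in the example are hypothetical.

```python
import numpy as np
from scipy import stats

def crawford_howell(patient_score, control_scores):
    """Crawford-Howell test: compare one patient's score against a
    small control sample. Returns the modified t statistic and the
    two-tailed p-value with n - 1 degrees of freedom."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (patient_score - controls.mean()) / (
        controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Hypothetical omission counts on the cancellation task:
# one patient versus 16 healthy controls.
controls = [0, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 1, 1, 0]
t, p = crawford_howell(9, controls)
print(f"t({len(controls) - 1}) = {t:.2f}, p = {p:.4f}")
```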