277 research outputs found

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely upon audition for a variety of purposes, among them the use of sound to identify the position of objects in their surroundings. This extends beyond localising sound-emitting objects to detecting obstacles and environmental boundaries, thanks to their ability to extract information from reverberation and sound reflections, all of which contributes to effective and safe navigation and serves a function in certain assistive technologies built on binaural auditory virtual reality. It is known that head movements in the presence of sound change the acoustical signals arriving at each ear, and these changes can mitigate common localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether visually impaired people naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation, a questionnaire assessing the self-reported use of head movement in auditory perception (each comparing visually impaired and sighted participants), and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. Visually impaired people self-report using head movement for auditory distance perception, a finding supported by the head movements observed in the field study. The acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener decreased as head angle or source distance increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Relevant guidelines for designers of assistive auditory virtual reality are proposed.
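
    The interaural cross-correlation measure at the heart of the acoustical analysis is straightforward to compute. Below is a minimal Python sketch (not the author's code; numpy assumed, and the function name and test signal are illustrative) of the peak normalized interaural cross-correlation (IACC) within the physiologically relevant lag window of roughly +/-1 ms:

```python
import numpy as np

def interaural_cross_correlation(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalized interaural cross-correlation (IACC)
    within +/- max_lag_ms of lag.

    Values near 1 mean the two ear signals are highly similar; head
    rotation or increasing distance to a nearby source decorrelates
    the signals and lowers the value, as the abstract reports.
    """
    xcorr = np.correlate(left, right, mode="full")
    norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
    center = len(right) - 1                       # index of zero lag
    max_lag = int(fs * max_lag_ms / 1000.0)
    window = xcorr[center - max_lag : center + max_lag + 1]
    return np.max(np.abs(window)) / norm

# Example: a 0.5 ms interaural delay keeps a tone fully correlated
# within the +/-1 ms window, so the IACC remains close to 1.
fs = 48000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 500 * t)
right = np.roll(left, int(0.0005 * fs))           # 0.5 ms delayed copy
print(interaural_cross_correlation(left, right, fs))
```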

    Self-motion facilitates echo-acoustic orientation in humans

    The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues, while fast head movements relative to the body provide additional proprioceptive cues that allow subjects to assess echo-acoustic space referenced against the body orientation. These psychophysical findings demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory-motor interactions and possible optimization strategies underlying echolocation in humans.
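
    The virtual echo-acoustic space technique is only named above, but its core idea is to play the subject's own emission back with a distance-dependent delay and attenuation. A deliberately simplified Python sketch of that idea, under stated assumptions (no HRTF filtering or head/body pose tracking, which the real technique would include; the constant and names are illustrative):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def render_virtual_echo(click, fs, distance_m, reflection_gain=0.5):
    """Return the emitted click plus a delayed, attenuated copy that
    simulates the echo from a reflector at distance_m (two-way path).
    """
    delay_samples = int(round(2.0 * distance_m / SPEED_OF_SOUND * fs))
    attenuation = reflection_gain / (2.0 * distance_m)  # spherical spreading
    out = np.zeros(len(click) + delay_samples)
    out[: len(click)] += click                          # direct emission
    out[delay_samples : delay_samples + len(click)] += attenuation * click
    return out
```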

    Incorporation of three-dimensional audio into virtual reality scenes for an improved immersive experience

    Audio is a crucial aspect to bear in mind when designing virtual reality applications, as it can add a whole new level of immersion to such experiences when properly used. To create realistic sound, it is essential to take audio spatialization into consideration, providing the information an individual needs to estimate the position of sound sources and the characteristics of the surrounding space. This project proposes implementing spatial audio in virtual reality scenes created with a game engine, and provides the theoretical bases explaining how this can be achieved. It first touches upon how the human auditory system estimates the direction of and distance to an audio source by interpreting cues such as interaural time and level differences, pinna reflections, reverberation and general variations in loudness. Next, the limited spatial properties of the most common audio reproduction systems are discussed, arguing why they are insufficient for virtual reality applications. Two spatial audio recording and reproduction techniques, for headphones and for loudspeakers, are presented as alternatives for virtual reality scenarios in which the user remains static. As a means of acquiring the knowledge necessary to understand more advanced spatial audio systems, the Head-Related Transfer Function (HRTF) is introduced in detail. It is explained how HRTFs encompass all the physical cues that condition sound localization, and how the frequency responses that characterize them can be experimentally measured and used for artificial spatialization of virtual sources. Several HRTF-based spatial audio systems are presented, differentiating between those that apply HRTFs as mathematical models and those that make use of experimentally measured impulse response data sets. These advanced models are required for virtual reality experiences that involve user motion, as they can continuously adapt to the user's position and orientation relative to the virtual sources. The rest of the project focuses on how some of the mentioned HRTF-based spatial audio systems can be implemented in the Unity game engine. Unity's limited built-in spatialization options can be complemented and greatly improved with audio plugins that perform HRTF filtering and introduce features such as sound occlusion, room simulation models and sound directivity patterns. Finally, three demos with different levels of complexity are built in Unity to showcase the virtues of spatial audio in virtual reality applications.
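
    To make the HRTF filtering step concrete: for a static listener, binaural rendering reduces to convolving the mono source with the measured HRIR pair for the desired direction. A minimal Python sketch, assuming scipy and HRIRs already loaded from some measured data set (an illustration only, not the project's Unity implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source at one fixed direction by convolving it
    with the head-related impulse responses (HRIRs) measured for that
    direction, yielding the left/right headphone feeds.
    """
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])  # shape (2, n)

# A motion-tracked renderer, as VR requires, would instead re-select
# (or interpolate between) HRIRs every frame from the listener's
# current head orientation and crossfade to avoid switching artifacts.
```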

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing

    otorhinolaryngology; neurosciences; hearing

    The International Symposium on Hearing is a prestigious, triennial gathering where world-class scientists present and discuss the most recent advances in the field of human and animal hearing research. The 2015 edition will particularly focus on integrative approaches linking physiological, psychophysical and cognitive aspects of normal and impaired hearing. Like previous editions, the proceedings will contain about 50 chapters ranging from basic to applied research, of interest to neuroscientists, psychologists, audiologists, engineers, otolaryngologists, and artificial intelligence researchers.
