48 research outputs found

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Get PDF
    Visually impaired people rely upon audition for a variety of purposes, among them the use of sound to identify the position of objects in their surrounding environment. This is not limited to localising sound-emitting objects: thanks to their ability to extract information from reverberation and sound reflections, they can also locate obstacles and environmental boundaries. All of this contributes to effective and safe navigation, and, with the advent of binaural auditory virtual reality, also serves a function in certain assistive technologies. Head movements in the presence of sound are known to change the acoustical signals arriving at each ear, and these changes can mitigate common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether visually impaired people naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation; a questionnaire assessing the self-reported use of head movement in auditory perception by visually impaired individuals (each comparing visually impaired and sighted participants); and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound-source distance. It is found that visually impaired people self-report using head movement for auditory distance perception. This is supported by the head movements observed during the field study, while the acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener were reduced as head angle or distance to the sound source increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound.
Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed
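The interaural correlation analysed in this work is conventionally computed as the peak of the normalised cross-correlation between the two ear signals over a short range of lags. The thesis does not publish its implementation, so the sketch below is only a minimal illustration of that standard measure; the function name and the ±1 ms lag window are assumptions, not taken from the abstract.

```python
import numpy as np

def interaural_cross_correlation(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalised cross-correlation between the two ear
    signals, searched over physiologically plausible lags (~ +/-1 ms)."""
    max_lag = int(fs * max_lag_ms / 1000)
    left = left - np.mean(left)
    right = right - np.mean(right)
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    n = len(left)
    coeffs = []
    for lag in range(-max_lag, max_lag + 1):
        # correlate left[t] with right[t + lag], trimming the overhang
        l = left[max(0, -lag): n - max(0, lag)]
        r = right[max(0, lag): n - max(0, -lag)]
        coeffs.append(np.sum(l * r) / denom)
    return max(coeffs)
```

Identical signals at the two ears yield a peak of 1.0; the decorrelation the thesis reports for nearby sources and reflected sound would appear as values well below that.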

    Acoustic Speaker Localization with Strong Reverberation and Adaptive Feature Filtering with a Bayes RFS Framework

    Get PDF
    The thesis investigates the challenges of speaker localization in the presence of strong reverberation, multi-speaker tracking, and multi-feature multi-speaker state filtering, using sound recordings from microphones. Novel reverberation-robust speaker localization algorithms are derived from the signal and room-acoustics models. A multi-speaker tracking filter and a multi-feature multi-speaker state filter are developed based upon the generalized labeled multi-Bernoulli random finite set framework. Experiments and comparative studies have verified and demonstrated the benefits of the proposed methods

    Neural correlates and mechanisms of sound localization in everyday reverberant settings

    Get PDF
    Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 161-176). Nearly all listening environments, indoors and outdoors alike, are full of boundary surfaces (e.g., walls, trees, and rocks) that produce acoustic reflections. These reflections interfere with the direct sound arriving at a listener's ears, distorting the binaural cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. This thesis addresses fundamental questions regarding the neural basis of sound localization in everyday reverberant environments. In the first set of experiments, we investigate the effects of reverberation on the directional sensitivity of low-frequency auditory neurons sensitive to interaural time differences (ITD), the principal cue for localizing sound containing low-frequency energy. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of ITD-sensitive neurons in the auditory midbrain of anesthetized cats and awake rabbits follows a similar time course. However, the tendency of neurons to fire preferentially at the onset of a stimulus results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. To probe the role of temporal response dynamics, we use a conditioning paradigm to systematically alter the temporal response patterns of single neurons. Results suggest that making temporal response patterns less onset-dominated typically leads to poorer directional sensitivity in reverberation.
In parallel behavioral experiments, we show that human lateralization judgments are consistent with predictions from a population rate model for decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. In the second part of the thesis we examine the effects of reverberation on directional sensitivity of neurons across the tonotopic axis in the awake rabbit auditory midbrain. We find that reverberation degrades the directional sensitivity of single neurons, although the amount of degradation depends on the characteristic frequency and the type of binaural cues available. When ITD is the only available directional cue, low-frequency neurons sensitive to ITD in the fine-time structure maintain better directional sensitivity in reverberation than high-frequency neurons sensitive to ITD in the envelope. On the other hand, when both ITD and interaural level difference (ILD) cues are available, directional sensitivity is comparable throughout the tonotopic axis, suggesting that, at high frequencies, ILDs provide better directional information than envelope ITDs in reverberation. These findings can account for results from human psychophysical studies of spatial hearing in reverberant environments. This thesis marks fundamental progress towards elucidating the neural basis for spatial hearing in everyday settings. Overall, our results suggest that the information contained in the rate responses of neurons in the auditory midbrain is sufficient to account for human sound localization in reverberant environments. by Sasha Devore. Ph.D

    The Contribution of Interaural Intensity Differences to the Horizontal Auditory Localization of Narrow Bands of Noise

    Full text link
    Brief bursts of third-octave bands of noise (center frequencies at 0.5, 1.0, 2.0 and 4.0 kHz) and bandpass noises with different degrees of low-frequency content (0.5 to 4.0 kHz, 1.0 to 4.0 kHz and 2.0 to 4.0 kHz) were recorded binaurally from 17 different horizontal locations (90 degrees on the left to 90 degrees on the right in 11.25-degree steps) one meter from the ears of an anthropomorphic mannequin (KEMAR) in an anechoic room and a reverberant room. The recorded sounds were processed by attenuating or removing interaural intensity differences and presented to five normally hearing subjects through insert transducers (ER-3A) in a sound-source identification task. The localization accuracy of the subjects for unprocessed signals was similar to that reported in the literature for free-field listening. Auditory localization performance was not significantly degraded by reducing interaural intensity difference cues to 50% of their original value in dB. However, attenuating interaural intensity differences by 100% degraded localization performance by introducing a bias toward the center. The effect was frequency dependent, with no effect for the 0.5 kHz third-octave band. Some asymmetries in localization performance were observed. Localization accuracy was similar for signals recorded in the reverberant room and for those recorded in the anechoic room
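Attenuating an interaural intensity difference "by 50% of its value in dB" amounts to measuring the level difference between the channels and scaling each one halfway back toward the mean level. The study's own signal processing is not given, so the function below is a hypothetical sketch of that manipulation; the name, the RMS-based level estimate, and the symmetric split between ears are assumptions.

```python
import numpy as np

def attenuate_ild(left, right, fraction):
    """Reduce the interaural level difference (in dB) to (1 - fraction)
    of its original value: fraction = 1.0 removes the ILD entirely,
    fraction = 0.0 leaves the signals untouched."""
    rms_l = np.sqrt(np.mean(left ** 2))
    rms_r = np.sqrt(np.mean(right ** 2))
    ild_db = 20.0 * np.log10(rms_l / rms_r)   # ILD in dB, left re right
    shift_db = fraction * ild_db / 2.0        # split the correction between ears
    return left * 10.0 ** (-shift_db / 20.0), right * 10.0 ** (shift_db / 20.0)
```

With `fraction=0.5` this halves the ILD in dB (the condition that left localization largely intact), while `fraction=1.0` equalizes the channel levels (the condition that biased responses toward the center).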

    The creation of a binaural spatialization tool

    Get PDF
    The main focus of the research presented within this thesis is, as the title suggests, binaural spatialization. Binaural technology and, especially, the binaural recording technique are not particularly recent. Nevertheless, the interest in this technology has lately become substantial due to the increase in the calculation power of personal computers, which started to allow the complete and accurate real-time simulation of three-dimensional sound-fields over headphones. The goals of this body of research have been determined in order to provide elements of novelty and of contribution to the state of the art in the field of binaural spatialization. A brief summary of these is found in the following list:
    • The development and implementation of a binaural spatialization technique with Distance Simulation, based on the individual simulation of the distance cues and Binaural Reverb, in turn based on the weighted mix between the signals convolved with the different HRIR and BRIR sets;
    • The development and implementation of a characterization process for modifying a BRIR set in order to simulate different environments with different characteristics in terms of frequency response and reverb time;
    • The creation of a real-time and offline binaural spatialization application, implementing the techniques cited in the previous points, and including a set of multichannel (and Ambisonics)-to-binaural conversion tools;
    • The performance of a perceptual evaluation stage to verify the effectiveness, realism, and quality of the techniques developed; and
    • The application and use of the developed tools within both scientific and artistic “case studies”.
In the following chapters, sections, and subsections, the research performed between January 2006 and March 2010 will be described, outlining the different stages before, during, and after the development of the software platform, analysing the results of the perceptual evaluations and drawing conclusions that could, in the future, be considered the starting point for new and innovative research projects
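The core idea of the first goal, a weighted mix between signals convolved with HRIR and BRIR sets, can be sketched as a simple cross-fade between an anechoic and a reverberant rendering. This is not the thesis's actual implementation; the function name, the single-pair interface, and the interpretation of the weight as a distance cue are illustrative assumptions.

```python
import numpy as np

def binaural_render(mono, hrir_l, hrir_r, brir_l, brir_r, direct_weight):
    """Spatialise a mono signal by convolving it with an anechoic HRIR pair
    and a reverberant BRIR pair, then cross-fading the two renderings:
    lowering direct_weight favours the reverberant, more distant-sounding mix."""
    w = float(np.clip(direct_weight, 0.0, 1.0))
    n = len(mono) + max(map(len, (hrir_l, hrir_r, brir_l, brir_r))) - 1

    def conv(x, h):                      # full convolution, zero-padded to n
        y = np.convolve(x, h)
        return np.pad(y, (0, n - len(y)))

    left = w * conv(mono, hrir_l) + (1.0 - w) * conv(mono, brir_l)
    right = w * conv(mono, hrir_r) + (1.0 - w) * conv(mono, brir_r)
    return left, right
```

In practice the HRIRs and BRIRs would come from measured sets selected by source direction, with the weight derived from the simulated source distance.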

    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019

    Get PDF
    International audience

    Acoustical measurements on stages of nine U.S. concert halls

    Get PDF