
    Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation

    This thesis provides an overview of work conducted to investigate human spatial hearing in situations involving multiple concurrent sound sources. Much is known about spatial hearing with single sound sources, including the acoustic cues to source location and the accuracy of localisation under different conditions. However, more recently interest has grown in the behaviour of listeners in more complex environments. Concurrent sound sources pose a particularly difficult problem for the auditory system, as their identities and locations must be extracted from a common set of sensory receptors and shared computational machinery. It is clear that humans have a rich perception of their auditory world, but just how concurrent sounds are processed, and how accurately, are issues that are poorly understood. This work attempts to fill a gap in our understanding by systematically examining spatial resolution with multiple sound sources. A series of psychophysical experiments was conducted on listeners with normal hearing to measure performance in spatial localisation and discrimination tasks involving more than one source. The general approach was to present sources that overlapped in both frequency and time in order to observe performance in the most challenging of situations. Furthermore, the role of two primary sets of location cues in concurrent source listening was probed by examining performance in different spatial dimensions. The binaural cues arise due to the separation of the two ears, and provide information about the lateral position of sound sources. The spectral cues result from location-dependent filtering by the head and pinnae, and allow vertical and front-rear auditory discrimination. Two sets of experiments are described that employed relatively simple broadband noise stimuli. In the first of these, two-point discrimination thresholds were measured using simultaneous noise bursts. 
It was found that the pair could be resolved only if a binaural difference was present; spectral cues did not appear to be sufficient. In the second set of experiments, the two stimuli were made distinguishable on the basis of their temporal envelopes, and the localisation of a designated target source was directly examined. Remarkably robust localisation was observed, despite the simultaneous masker, and both binaural and spectral cues appeared to be of use in this case. Small but persistent errors were observed, which in the lateral dimension represented a systematic shift away from the location of the masker. The errors can be explained by interference in the processing of the different location cues. Overall these experiments demonstrated that the spatial perception of concurrent sound sources is highly dependent on stimulus characteristics and configurations. This suggests that the underlying spatial representations are limited by the accuracy with which acoustic spatial cues can be extracted from a mixed signal. Three sets of experiments are then described that examined spatial performance with speech, a complex natural sound. The first measured how well speech is localised in isolation. This work demonstrated that speech contains high-frequency energy that is essential for accurate three-dimensional localisation. In the second set of experiments, spatial resolution for concurrent monosyllabic words was examined using similar approaches to those used for the concurrent noise experiments. It was found that resolution for concurrent speech stimuli was similar to resolution for concurrent noise stimuli. Importantly, listeners were limited in their ability to concurrently process the location-dependent spectral cues associated with two brief speech sources. In the final set of experiments, the role of spatial hearing was examined in a more relevant setting containing concurrent streams of sentence speech. 
It has long been known that binaural differences can aid segregation and enhance selective attention in such situations. The results presented here confirmed this finding and extended it to show that the spectral cues associated with different locations can also contribute. As a whole, this work provides an in-depth examination of spatial performance in concurrent source situations and delineates some of the limitations of this process. In general, spatial accuracy with concurrent sources is poorer than with single sound sources, as both binaural and spectral cues are subject to interference. Nonetheless, binaural cues are quite robust for representing concurrent source locations, and spectral cues can enhance spatial listening in many situations. The findings also highlight the intricate relationship that exists between spatial hearing, auditory object processing, and the allocation of attention in complex environments.

    Effect of Reverberation Context on Spatial Hearing Performance of Normally Hearing Listeners

    Previous studies provide evidence that listening experience in a particular reverberant environment improves speech intelligibility and localization performance in that environment. Such studies, however, are few, and there is little knowledge of the underlying mechanisms. The experiments presented in this thesis explored the effect of reverberation context, in particular, the similarity in interaural coherence within a context, on listeners' performance in sound localization, speech perception in a spatially separated noise, spatial release from speech-on-speech masking, and target location identification in a multi-talker configuration. All experiments were conducted in simulated reverberant environments created with a loudspeaker array in an anechoic chamber. The reflections comprising the reverberation in each environment had the same temporal and relative amplitude patterns, but varied in their lateral spread, which affected the interaural coherence of reverberated stimuli. The effect of reverberation context was examined by comparing performance in two reverberation contexts, mixed and fixed. In the mixed context, the reverberation environment applied to each stimulus varied trial-by-trial, whereas in the fixed context, the reverberation environment was held constant within a block of trials. In Experiment I (absolute judgement of sound location), variability in azimuth judgments was lower in the fixed than in the mixed context, suggesting that sound localization depended not only on the cues presented in isolated trials. In Experiment II, the intelligibility of speech in a spatially separated noise was found to be similar in both reverberation contexts. That result contrasts with other studies, and suggests that the fixed context did not assist listeners in compensating for degraded interaural coherence. 
In Experiment III, speech intelligibility in multi-talker configurations was found to be better in the fixed context, but only when the talkers were separated. That is, the fixed context improved spatial release from masking. However, in the presence of speech maskers, consistent reverberation did not improve the localizability of the target talker in a three-alternative location-identification task. Those results suggest that in multi-talker situations, consistent coherence may not improve target localizability, but rather that a consistent context may facilitate the buildup of spatial selective attention.

    The effect of an active transcutaneous bone conduction device on spatial release from masking

    Objective: The aim was to quantify the effect of the experimental active transcutaneous Bone Conduction Implant (BCI) on spatial release from masking (SRM) in subjects with bilateral or unilateral conductive and mixed hearing loss. Design: Measurements were performed in a sound booth with five loudspeakers at 0°, ±30° and ±150° azimuth. Target speech was presented frontally, and interfering speech from either the front (co-located) or surrounding (separated) loudspeakers. SRM was calculated as the difference between the separated and the co-located speech recognition threshold (SRT). Study Sample: Twelve patients (aged 22–76 years) unilaterally implanted with the BCI were included. Results: A positive SRM, reflecting a benefit of spatially separating interferers from target speech, existed for all subjects in the unaided condition, and for nine subjects (75%) in the aided condition. Aided SRM was lower than unaided SRM in nine of the subjects. There was no difference in SRM between patients with bilateral and unilateral hearing loss. In the aided condition, SRT improved only for patients with bilateral hearing loss. Conclusions: The BCI fitted unilaterally in patients with bilateral or unilateral conductive/mixed hearing loss seems to reduce SRM. However, the data indicate that SRT is improved or maintained for patients with bilateral and unilateral hearing loss, respectively.
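    The SRM calculation described in this abstract reduces to a subtraction of two measured thresholds. A minimal sketch with hypothetical dB values (function name and values are illustrative, not from the study; the sign convention is chosen so that a positive SRM reflects a benefit of separation, as in the text):

    ```python
    def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
        """SRM in dB: how much the speech recognition threshold (SRT)
        improves when maskers are spatially separated from the target.
        Lower SRT is better, so SRM = co-located SRT - separated SRT."""
        return srt_colocated_db - srt_separated_db

    # Hypothetical thresholds: -2 dB co-located, -8 dB separated -> 6 dB of release.
    print(spatial_release_from_masking(-2.0, -8.0))  # → 6.0
    ```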

    Measuring Spatial Hearing Abilities in Listeners with Simulated Unilateral Hearing Loss

    Spatial hearing is the ability to use auditory cues to determine the location, direction, and distance of sound in space. Listeners with unilateral hearing loss (UHL) typically have difficulty understanding speech in the presence of competing sound; this is likely due to the lack of access to spatial cues. The assessment of spatial hearing abilities in individuals with UHL is of growing clinical interest, particularly for everyday listening environments. Current approaches used to measure spatial hearing abilities include Spatial Release from Masking (SRM), the Binaural Intelligibility Level Difference (BILD), and the Listening in Spatialized Noise-Sentences (LiSN-S) test. Spatial Release from Masking is the improvement in speech recognition thresholds (SRT) when the target and masker are spatially separated as opposed to co-located, measured with a sound-field setup. The LiSN-S test also measures the improvement in SRTs when the target and masker are spatially separated. Although similar, the LiSN-S uses a more clinically accessible procedure by simulating a three-dimensional auditory environment under headphones. Akin to the LiSN-S, the BILD also uses headphones, but instead elicits improved SRTs by presenting target speech 180° out of phase to one ear rather than in phase to both ears. The purposes of this study were (a) to determine if patterns of individual variability were similar across the three measures for 30 adults with normal hearing and 28 adults with simulated UHL and (b) to evaluate the effects of simulated UHL on performance. Results of this study confirmed that the three tests were all sensitive measures of binaural hearing deficits in participants with simulated UHL. Although all measures were correlated with each other, only the measures conducted under headphones (BILD and LiSN-S) were influenced by the magnitude of asymmetry. 
These findings suggested that although the measures were producing similar results, they might be reflecting different aspects of binaural processing.
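The antiphasic target presentation used in the BILD can be illustrated with a short sketch (names and sample values are hypothetical; signals are plain per-sample lists, with the target inverted at one ear while the noise stays in phase):

```python
def make_bild_conditions(target, noise):
    """Return (diotic, antiphasic) left/right ear mixtures.
    diotic (N0S0): target in phase at both ears.
    antiphasic (N0Spi): target 180 degrees out of phase at one ear."""
    left = [n + s for n, s in zip(noise, target)]
    diotic = (left, left[:])                       # same signal at both ears
    right_inverted = [n - s for n, s in zip(noise, target)]
    antiphasic = (left, right_inverted)            # target inverted at right ear
    return diotic, antiphasic

# With silent noise, the antiphasic right ear is simply the negated target.
diotic, antiphasic = make_bild_conditions([1.0, -2.0], [0.0, 0.0])
```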

    Multifaceted evaluation of a binaural cochlear‐ implant sound‐processing strategy inspired by the medial olivocochlear reflex

    The aim of this thesis is to experimentally evaluate the hearing of cochlear implant users with a binaural sound-processing strategy inspired by the medial olivocochlear reflex, termed the "MOC strategy". The thesis describes four studies comparing speech intelligibility in noise, sound-source localization, and listening effort with standard sound processors and with various MOC processors designed to reflect, with varying degrees of realism, the activation time course of the natural medial olivocochlear reflex and its effects on human cochlear compression.

    Informed Sound Source Localization for Hearing Aid Applications


    Audio Decision Support for Supervisory Control of Unmanned Vehicles : Literature Review

    Purpose of this literature review: To survey scholarly articles, books, and other sources (dissertations, conference proceedings) relevant to the use of audio in the supervisory control of unmanned vehicles. Prepared for Charles River Analytics.

    Effects of Coordinated Bilateral Hearing Aids and Auditory Training on Sound Localization

    This thesis has two main objectives: 1) evaluating the benefits of bilateral coordination of hearing aid Digital Signal Processing (DSP) features by measuring and comparing auditory performance with and without the activation of this coordination, and 2) evaluating the benefits of acclimatization and auditory training on such auditory performance, and determining whether receiving training in one aspect of auditory performance (sound localization) would generalize to an improvement in another aspect (speech intelligibility in noise), and to what extent. Two studies were performed. The first study evaluated speech intelligibility in noise and horizontal sound localization abilities in hearing-impaired (HI) listeners using hearing aids that apply bilateral coordination of Wide Dynamic Range Compression (WDRC). A significant improvement was noted in sound localization with bilateral coordination on compared to off, while speech intelligibility in noise did not seem to be affected. The second study was an extension of the first: after a suitable period of acclimatization, the participants were divided into training and control groups, and only the training group received auditory training. The training group performed significantly better than the control group in some conditions, in both the speech intelligibility and the localization tasks. Bilateral coordination did not have significant effects on the results of the second study. This work is among the early literature to investigate the impact of bilateral coordination in hearing aids on users' auditory performance. It is also the first to demonstrate the effect of auditory training in sound localization on speech intelligibility performance.