
    Binaural Cues for Distance and Direction of Nearby Sound Sources

    To a first-order approximation, binaural localization cues are ambiguous: a number of source locations give rise to nearly the same interaural differences. For sources more than a meter from the listener, binaural localization cues are approximately equal for any source on a cone centered on the interaural axis (i.e., the well-known "cones of confusion"). The current paper analyzes simple geometric approximations of a listener's head to gain insight into localization performance for sources near the listener. In particular, if the head is treated as a rigid, perfect sphere, interaural intensity differences (IIDs) can be broken down into two main components. One component is constant along the cone of confusion (and thus covaries with the interaural time difference, or ITD). The other component is roughly constant for a sphere centered on the interaural axis and depends only on the relative path lengths from the source to the two ears. This second factor is only large enough to be perceptible when sources are within one or two meters of the listener. These results are not dramatically different if one assumes that the ears are separated by 160 degrees along the surface of the sphere (rather than diametrically opposite one another). Thus, for sources within a meter of the listener, binaural information should allow listeners to locate sources within a volume around a circle centered on the interaural axis, on a "doughnut of confusion." The volume of the doughnut of confusion increases dramatically with the angle between the source and the interaural axis, degenerating to the entire median plane in the limit. Air Force Office of Scientific Research (F49620-98-1-0108)
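The path-length component of the IID can be sketched numerically. The code below is a minimal illustration, not the paper's model: it places the ears diametrically opposite on a sphere of assumed radius 8.75 cm, approximates each ear's received level by 1/r spreading along the straight-line path (ignoring diffraction around the head), and reports the IID component that depends only on the relative path lengths. All positions and the head radius are assumptions for illustration.

```python
import math

HEAD_RADIUS = 0.0875  # meters; assumed spherical head, ears diametrically opposite

def path_lengths(src_x, src_y, a=HEAD_RADIUS):
    """Straight-line distances from a source in the horizontal plane
    to the left ear at (-a, 0) and the right ear at (+a, 0)."""
    d_left = math.hypot(src_x + a, src_y)
    d_right = math.hypot(src_x - a, src_y)
    return d_left, d_right

def pathlength_iid_db(src_x, src_y):
    """IID component due only to 1/r spreading, in dB
    (positive = right ear louder)."""
    d_left, d_right = path_lengths(src_x, src_y)
    return 20.0 * math.log10(d_left / d_right)

# Source at 45 degrees azimuth (toward the right ear), at 0.25 m and 2 m
for r in (0.25, 2.0):
    x = r * math.sin(math.radians(45))
    y = r * math.cos(math.radians(45))
    print(f"{r} m: {pathlength_iid_db(x, y):.2f} dB")
```

Running this shows the path-length IID is several dB at 25 cm but well under 1 dB at 2 m, consistent with the claim that this component is perceptible only for sources within a meter or two of the listener.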

    The Effects of Room Acoustics on Auditory Spatial Cues

    Law of the first wavefront

    Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in cortex. Yet this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both factors may thus contribute to individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited by fine stimulus details vs. by control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs.
These results support the hypothesis that differences in behavioral ability among listeners with normal hearing thresholds can arise from both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
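The EFR-based "coding strength" measure can be illustrated with a toy computation. The sketch below is a heavily simplified stand-in for the analysis described, not the paper's actual pipeline: it estimates EFR strength as the FFT magnitude at the stimulus modulation frequency of an averaged single-channel EEG epoch, and expresses coding strength as the ratio of that magnitude in the fully masked vs. partially masked conditions. The function names, the ratio form, and the single-channel input are all illustrative assumptions.

```python
import numpy as np

def efr_strength(epoch, fs, f_mod):
    """Magnitude of the envelope following response at the stimulus
    modulation frequency f_mod (Hz), from an averaged EEG epoch
    sampled at fs (Hz)."""
    spec = np.abs(np.fft.rfft(epoch)) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f_mod))]

def coding_strength(epoch_full, epoch_partial, fs, f_mod):
    """Toy 'subcortical coding strength': how well the EFR survives
    full masking relative to partial masking (ratio form assumed)."""
    return efr_strength(epoch_full, fs, f_mod) / efr_strength(epoch_partial, fs, f_mod)
```

A listener whose EFR magnitude is barely reduced by full masking would get a coding-strength ratio near 1; a listener whose response collapses under full masking would get a ratio near 0.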

    An Investigation of the Effects of Categorization and Discrimination Training on Auditory Perceptual Space

    Psychophysical phenomena such as categorical perception and the perceptual magnet effect indicate that our auditory perceptual spaces are warped for some stimuli. This paper investigates the effects of two different kinds of training on auditory perceptual space. It is first shown that categorization training, in which subjects learn to identify stimuli within a particular frequency range as members of the same category, can lead to a decrease in sensitivity to stimuli in that category. This phenomenon is an example of acquired similarity and apparently has not been previously demonstrated for a category-relevant dimension. Discrimination training with the same set of stimuli was shown to have the opposite effect: subjects became more sensitive to differences in the stimuli presented during training. Further experiments investigated some of the conditions that are necessary to generate the acquired similarity found in the first experiment. The results of these experiments are used to evaluate two neural network models of the perceptual magnet effect. These models, in combination with our experimental results, are used to generate an experimentally testable hypothesis concerning changes in the brain's auditory maps under different training conditions. Alfred P. Sloan Foundation and the National Institute on Deafness and Other Communication Disorders (R29 02852); Air Force Office of Scientific Research (F49620-98-1-0108)

    Identifying where you are in a room: Sensitivity to room acoustics

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. In a spatial auditory display, reverberation provides a reliable cue for source distance, increases the subjective realism of the display, and improves the externalization of simulated sound sources. However, relatively little is known about perceptual sensitivity to differences in reverberation patterns or how precisely reverberation must be simulated in a spatial auditory display. This paper presents preliminary results of a study examining sensitivity to changes in listener location in a simulated room. Results suggest that monaural cues in the ear receiving the least direct-sound energy provide the most salient cues for identifying room location. However, many details in the reverberation pattern are not easily perceived. These results indicate that including reverberation from simplified room models may provide the benefits of reverberation without noticeably degrading the realism of the display.

    Perceptual consequences of including reverberation in spatial auditory displays

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. This paper evaluates the perceptual consequences of including reverberation in spatial auditory displays for rapidly varying signals (obstruent consonants). Preliminary results suggest that the effect of reverberation depends on both syllable position and reverberation characteristics. As many of the non-speech sounds in an auditory display share acoustic features with obstruent consonants, these results are important when designing spatial auditory displays for non-speech signals as well.

    Reducing reversal errors in localizing the source of sound in virtual environment without head tracking

    This paper presents a study of the effect of additional audio cues and Head-Related Transfer Functions (HRTFs) on human performance in a sound source localization task that does not involve head movement. Existing sound spatialization techniques generate reversal errors. We aim to reduce these errors by introducing sensory cues based on sound effects. We conducted an experimental study to evaluate the impact of additional cues in a sound source localization task. The results showed the benefit of combining the additional cues with HRTFs in terms of localization accuracy and the reduction of reversal errors. This technique yields a significant reduction of reversal errors compared to using HRTFs alone. For instance, this technique could be used to improve audio spatial alerting, spatial tracking, and target detection in simulation applications when head movement is not available.

    Accurate Sound Localization in Reverberant Environments Is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. National Institutes of Health (U.S.) (Grant R01 DC002258); National Institutes of Health (U.S.) (Grant R01 DC05778-02); National Institutes of Health (U.S.) (Eaton Peabody Laboratory Core Grant P30 DC005209); National Institutes of Health (U.S.) (Grant T32 DC0003)
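The idea of a population rate decoder can be caricatured in a few lines. The sketch below is only a hemispheric rate-difference reading of the idea, assuming the decoder lateralizes a sound by comparing firing rates pooled across left- and right-tuned midbrain populations; the model in the paper may weight and combine neurons quite differently, so treat the function name and the normalized-difference form as illustrative assumptions.

```python
import numpy as np

def decode_laterality(rates_left, rates_right):
    """Toy population rate decoder: normalized difference of mean firing
    rates pooled across the two sides (-1 = far left, +1 = far right,
    0 = midline)."""
    rl = float(np.mean(rates_left))
    rr = float(np.mean(rates_right))
    return (rr - rl) / (rr + rl)
```

Because onset-dominated responses preserve directional tuning even when later reverberant energy corrupts the cues, rates pooled over such responses keep the decoded laterality close to the true source side, which is the intuition behind the behavioral predictions described above.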