12 research outputs found

    Auditory mechanisms involved in psychoacoustical intensity discrimination in quiet and in noise

    In order to represent the variety of sounds we encounter in our daily lives, it is critical that the auditory system remain responsive to changes in sound level across a broad range of levels. In the peripheral auditory system, several mechanisms may limit intensity discrimination in quiet and in background noise. These include the limited dynamic ranges of individual auditory nerve fibers and of groups of fibers, basilar-membrane compression, and neural adaptation (for a tone in the presence of noise). Other mechanisms, however, can help overcome these limitations: spread-of-excitation cues for narrowband stimuli (in quiet), suppression (for a tone in noise), and the medial olivocochlear reflex (MOCR) (for a tone in a long noise). How all of these mechanisms contribute to intensity discrimination abilities in humans is not well understood.

    Psychoacoustically, intensity discrimination abilities can be measured with a paradigm in which a tone called a pedestal is incremented in level; the smallest level increment a listener can detect is called an intensity discrimination limen (IDL). When IDLs are measured for short, high-frequency tones, they are poorer for mid-level tones than for low-level or high-level tones. There is evidence that this so-called mid-level hump reflects the limitation imposed by basilar-membrane compression, which is overcome at high levels through the use of spread-of-excitation cues. However, some researchers propose instead that the mid-level hump originates more centrally in the auditory system. In Chapter 2, characteristics of the mid-level hump were compared with psychoacoustical estimates of basilar-membrane compression in the same listeners. The results supported the idea that the initial worsening of IDLs with increasing pedestal level reflects the decreasing slope of the basilar-membrane input/output function, although there were also differences across listeners consistent with central influences on intensity discrimination abilities.

    Previous psychoacoustical studies have used notched noise (NN) to restrict off-frequency listening. For tones at the mid-level hump, the IDL improves if the NN begins at least 50 ms before the onset of the pedestal. This result may reflect activity of the MOCR, a sluggish, bilateral mechanism that can reduce the effects of both basilar-membrane compression and neural adaptation. However, some researchers propose that a central mechanism, profile analysis, may instead explain why the mid-level hump shrinks in NN. In Chapter 3, IDLs at the mid-level hump were examined in forward, simultaneous, and backward NN of different durations and levels; these conditions were designed to separately test the MOCR, suppression, and profile-analysis mechanisms. The results showed improvements in IDLs with NN relative to quiet that were consistent with a suppression mechanism in some listeners and with an MOCR mechanism in others. No listener showed results consistent with a benefit from profile analysis.

    Another test of the MOCR is to measure IDLs with contralateral noise, because the MOCR is a bilateral reflex. Previous physiological and modeling studies suggest that one role of the MOCR is to counteract the limiting effects of neural adaptation brought about by the noise, but the MOCR may also reduce the influence of compression for stimuli in which basilar-membrane compression dominates. In Chapter 4, IDLs at the mid-level hump were measured in ipsilateral, contralateral, and bilateral broadband noise of different durations and levels. In some listeners, ipsilateral noise led to improved IDLs relative to quiet. Contralateral noise alone did not improve IDLs for a tone in quiet; however, long contralateral noise did improve IDLs for a tone presented in long ipsilateral noise. These results are consistent with MOCR activity, which may reduce the limiting effects of neural adaptation in noise and may also reduce the limiting effects of compression relative to quiet. Overall, these results provide perceptual evidence of the interplay between mechanisms that limit discriminability over a wide range of levels and mechanisms that help overcome these limitations. Ultimately, mechanisms that aid in maintaining discriminability help the auditory system represent contrasts among stimuli, such as speech in the presence of background noise.
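
    As a concrete illustration of the increment-detection paradigm described above, here is a minimal sketch of an adaptive tracking run in Python. The 2-down-1-up rule, the step size, and the simulated listener's psychometric function are all assumptions chosen for illustration; the abstract does not specify the procedure used.

        import math
        import random

        def detects(increment_db, true_idl_db=1.0, slope=1.5):
            # Simulated listener: logistic psychometric function (an assumed form).
            p_detect = 1.0 / (1.0 + math.exp(-slope * (increment_db - true_idl_db)))
            return random.random() < p_detect

        def measure_idl(start_db=6.0, step_db=1.0, n_reversals=8):
            # 2-down-1-up staircase: two correct responses in a row make the
            # task harder, one incorrect response makes it easier.
            increment, streak, direction, reversals = start_db, 0, -1, []
            while len(reversals) < n_reversals:
                if detects(increment):
                    streak += 1
                    if streak == 2:              # two correct: decrease increment
                        streak = 0
                        if direction == +1:      # direction change = reversal
                            reversals.append(increment)
                        direction = -1
                        increment = max(increment - step_db, 0.1)
                else:                            # one incorrect: increase increment
                    streak = 0
                    if direction == -1:
                        reversals.append(increment)
                    direction = +1
                    increment += step_db
            # IDL estimate: mean increment at the later reversal points.
            return sum(reversals[2:]) / len(reversals[2:])

        print(f"Estimated IDL: {measure_idl():.2f} dB")

    The 2-down-1-up rule converges on the increment detected on 70.7% of trials, so the mean of the later reversal levels serves as the IDL estimate; repeating such runs across pedestal levels would trace out the mid-level hump described above.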

    sj-pdf-1-tia-10.1177_23312165241229572 - Supplemental material for Externalization of Speech When Listening With Hearing Aids

    Supplemental material, sj-pdf-1-tia-10.1177_23312165241229572, for Externalization of Speech When Listening With Hearing Aids by Virginia Best and Elin Roverud in Trends in Hearing.

    The time course of cochlear gain reduction measured using a more efficient psychophysical technique

    In a previous study, it was shown that an on-frequency precursor intended to activate the medial olivocochlear reflex (MOCR) at the signal frequency reduces the gain estimated from growth-of-masking (GOM) functions. This is called the temporal effect (TE). In Expt. 1, a shorter method of measuring this change in gain was established. GOM functions were measured with an on- and an off-frequency precursor presented before the masker and signal, and were used to estimate input/output functions. The change in gain estimated in this way was very similar to that estimated by comparing two points measured with a single fixed masker level on the lower legs of the GOM functions. In Expt. 2, the TE was measured as a function of precursor duration and signal delay. For short precursor durations and short delays, the TE increased (a buildup) or remained constant as the delay increased, and then decreased. The TE also increased with precursor duration at the shortest delay. The results were fitted with a model based on the time course of the MOCR. The model fitted the data well and predicted the buildup. This buildup is not consistent with the exponential decay predicted by neural adaptation or persistence of excitation.
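
    The key finding, a buildup of the TE followed by a decay, can be illustrated with a simple gain-reduction time course. The difference-of-exponentials form and the parameter values below are illustrative assumptions, not the model actually fitted in the study:

        import math

        A_DB = 10.0         # assumed maximum gain reduction (dB)
        TAU_ON_MS = 50.0    # assumed buildup time constant
        TAU_OFF_MS = 150.0  # assumed decay time constant

        def temporal_effect_db(delay_ms):
            # Gain reduction first builds up with delay, then decays: a
            # non-monotonic course, unlike the pure exponential decay expected
            # from neural adaptation or persistence of excitation.
            onset = 1.0 - math.exp(-delay_ms / TAU_ON_MS)
            decay = math.exp(-delay_ms / TAU_OFF_MS)
            return A_DB * onset * decay

        for delay in (0, 25, 50, 100, 200, 400):
            print(f"delay {delay:3d} ms -> TE {temporal_effect_db(delay):4.1f} dB")

    With these values the TE peaks near delays of 50-100 ms and then falls, which is the qualitative signature that distinguishes an MOCR-like mechanism from a simple monotonic decay.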

    A “Buildup” of Speech Intelligibility in Listeners With Normal Hearing and Hearing Loss

    The perception of simple auditory mixtures is known to evolve over time; a common example is the “buildup” of stream segregation observed for sequences of tones alternating in pitch. Yet very little is known about how the perception of more complicated auditory scenes, such as multitalker mixtures, changes over time. Previous data are consistent with the idea that the ability to segregate a target talker from competing sounds improves rapidly when stable cues are available, leading to improvements in speech intelligibility. This study examined the time course of this buildup in listeners with normal and impaired hearing. Five simultaneous sequences of digits, varying in length from three to six digits, were presented from five locations in the horizontal plane. A synchronized visual cue at one location indicated which sequence was the target on each trial. We observed a buildup in digit identification performance over the course of three to four digits, driven primarily by reductions in confusions between the target and the maskers. Performance tended to be poorer in listeners with hearing loss; however, there was only weak evidence that the buildup was diminished or slowed in this group.
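
    A minimal sketch of the per-position scoring implied above, using hypothetical trial data (the data structure, the fixed four-digit length, and the digit values are all assumptions), shows how correct identifications and target-masker confusions could be tallied:

        # Each hypothetical trial stores, per digit position, the target digit,
        # the digits spoken by the four maskers, and the listener's response.
        trials = [
            {"target": [3, 7, 1, 9],
             "maskers": [{5, 2, 8, 4}, {6, 0, 2, 5}, {4, 8, 3, 6}, {2, 5, 7, 0}],
             "response": [5, 7, 1, 9]},
            # ... more trials would be appended here
        ]

        n_positions = 4
        correct = [0] * n_positions
        confusions = [0] * n_positions
        for trial in trials:
            for pos in range(n_positions):
                resp = trial["response"][pos]
                if resp == trial["target"][pos]:
                    correct[pos] += 1
                elif resp in trial["maskers"][pos]:
                    confusions[pos] += 1  # reported a masker digit instead

        for pos in range(n_positions):
            n = len(trials)
            print(f"position {pos + 1}: {correct[pos] / n:.0%} correct, "
                  f"{confusions[pos] / n:.0%} target-masker confusions")

    A buildup would appear as rising correct rates, and falling confusion rates, across successive digit positions.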

    A Flexible Question-and-Answer Task for Measuring Speech Understanding

    This report introduces a new speech task based on simple questions and answers. The task differs from a traditional sentence-recall task in that it involves an element of comprehension and can be implemented in an ongoing fashion. It also contains two target items (the question and the answer) that may be associated with different voices and locations to create dynamic listening scenarios. A set of 227 questions was created, covering six broad categories (days of the week, months of the year, numbers, colors, opposites, and sizes). All questions and their one-word answers were spoken by 11 female and 11 male talkers. In this study, listeners were presented with question-answer pairs and asked to indicate whether the answer was true or false. Responses were given as simple button or key presses, which are quick to make and easy to score. Two preliminary experiments illustrate different ways of implementing the basic task. In the first, question-answer pairs were presented in speech-shaped noise, and performance was compared across subjects, question categories, and time to examine the different sources of variability. In the second, sequences of question-answer pairs were presented amid competing conversations in an ongoing, spatially dynamic listening scenario. Overall, the question-and-answer task appears to be feasible and could be implemented flexibly in a number of different ways.
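
    One way the task could be implemented is sketched below. The miniature question set, the 50% true/false split, and the function names are assumptions for illustration; the published set contains 227 questions across six categories:

        import random

        # Tiny stand-in for the question corpus: category -> (question, answer).
        QUESTIONS = {
            "days":    [("What day comes after Monday?", "Tuesday")],
            "numbers": [("What is two plus two?", "four")],
            "colors":  [("What color is grass?", "green")],
        }
        WRONG_ANSWERS = ["Friday", "seven", "purple"]

        def make_trial():
            # Pick a question; pair it with the true answer half the time.
            category = random.choice(list(QUESTIONS))
            question, true_answer = random.choice(QUESTIONS[category])
            if random.random() < 0.5:
                return question, true_answer, True
            return question, random.choice(WRONG_ANSWERS), False

        def score(pair_is_true, listener_pressed_true):
            # A single true/false press is quick to make and easy to score.
            return pair_is_true == listener_pressed_true

        question, answer, is_true = make_trial()
        print(f"{question} -> {answer}")
        print("pressing 'true' would be",
              "correct" if score(is_true, True) else "incorrect")

    Because the question and the answer are separate utterances, each trial's two items can be assigned different talkers and locations to build the dynamic scenarios described above.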

    Informational Masking in Normal-Hearing and Hearing-Impaired Listeners Measured in a Nonspeech Pattern Identification Task

    Individuals with sensorineural hearing loss (SNHL) often experience more difficulty listening in multisource environments than do normal-hearing (NH) listeners. While the peripheral effects of SNHL certainly contribute to this difficulty, differences in the central processing of auditory information may also contribute. To explore this issue, it is important to account for peripheral differences between NH and hearing-impaired (HI) listeners so that central effects in multisource listening can be examined. In the present study, NH and HI listeners performed a tonal pattern identification task at two widely separated center frequencies (CFs), 850 and 3500 Hz. To control for differences in the peripheral representations of the stimuli, the patterns were presented at the same sensation level (15 dB SL), and the frequency deviation of the tones comprising the patterns was adjusted to obtain equal quiet pattern identification performance across all listeners at both CFs. Tonal sequences were then presented at both CFs simultaneously (informational masking conditions), and listeners were asked either to attend selectively to one source (CF) or to divide attention between CFs and identify the pattern at a CF designated after each trial. There were large differences between groups in the frequency deviations necessary to perform the pattern identification task. After compensating for these differences, there were only small differences between NH and HI listeners in the informational masking conditions. HI listeners showed slightly greater performance asymmetry between the low and high CFs than did NH listeners, possibly due to central differences in frequency weighting between groups.

    The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task

    The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is not known whether these benefits persist in the face of the frequent changes in target-talker location that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question-answer pairs embedded in a mixture of competing conversations. The participant's task was to respond via a key press after each answer, indicating whether or not it was correct. Spatialization of the stimuli and microphone-array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared between a “dynamic” condition, in which the target stimulus moved among three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate.
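
    The steering idea can be sketched with a simple delay-and-sum beamformer whose look direction comes from the current gaze angle. The array geometry, sample rate, and whole-sample delays below are assumptions; the actual VGHA uses a much more directional array, with processing applied offline to recorded impulse responses:

        import math

        SPEED_OF_SOUND = 343.0              # m/s
        FS = 44100                          # samples per second
        MIC_X = [-0.06, -0.02, 0.02, 0.06]  # linear 4-mic array positions (m)

        def steering_delays(look_angle_deg):
            # Per-mic delays (in samples) that align a plane wave arriving
            # from the gaze direction, measured relative to broadside.
            theta = math.radians(look_angle_deg)
            delays = [x * math.sin(theta) / SPEED_OF_SOUND * FS for x in MIC_X]
            shift = min(delays)
            return [d - shift for d in delays]  # make all delays non-negative

        def delay_and_sum(channels, look_angle_deg):
            # Delay each mic signal by a whole number of samples and average.
            # (A real system would interpolate fractional delays.)
            delays = [round(d) for d in steering_delays(look_angle_deg)]
            n = min(len(ch) - d for ch, d in zip(channels, delays))
            return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
                    for i in range(n)]

    In the experiment described above, the look direction follows the participant's eye movements, so the steering delays would be recomputed whenever the gaze angle changes; this is why inaccurate visual fixation in the dynamic condition directly degrades the acoustic look direction.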

    The importance of a broad bandwidth for understanding “glimpsed” speech
