
    Consonant identification using temporal fine structure and recovered envelope cues

    The contribution of recovered envelopes (RENVs) to the utilization of temporal fine structure (TFS) speech cues was examined in normal-hearing listeners. Consonant identification experiments used speech stimuli processed to present TFS or RENV cues. Experiment 1 examined the effects of exposure and presentation order using 16-band TFS speech and 40-band RENV speech recovered from 16-band TFS speech. Prior exposure to TFS speech aided in the reception of RENV speech. Performance in the two conditions was similar (∼50%-correct) for experienced listeners, as was the pattern of consonant confusions. Experiment 2 examined the effect of varying the number of RENV bands recovered from 16-band TFS speech. Mean identification scores decreased as the number of RENV bands decreased from 40 to 8 and were only slightly above chance levels for 16 and 8 bands. Experiment 3 examined the effect of varying the number of bands in the TFS speech from which 40-band RENV speech was constructed. Performance fell from 85%- to 31%-correct as the number of TFS bands increased from 1 to 32. Overall, these results suggest that the interpretation of previous studies that have used TFS speech may have been confounded by the presence of RENVs. Funding: National Institutes of Health (U.S.) Grants R01 DC00117 and R43 DC013006.
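    To make the processing chain concrete, here is a minimal sketch of TFS extraction and envelope recovery via the standard Hilbert decomposition. It assumes scipy is available; the log-spaced band edges, filter order, and frequency range are illustrative choices, not the study's exact parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def band_edges(n_bands, f_lo=80.0, f_hi=7000.0):
            # Log-spaced band edges over the speech range (illustrative);
            # f_hi must stay below fs/2 for the bandpass design below.
            return np.geomspace(f_lo, f_hi, n_bands + 1)

        def tfs_speech(x, fs, n_bands=16):
            # Keep only the temporal fine structure: per band, discard the
            # Hilbert envelope and retain cos(instantaneous phase).
            edges = band_edges(n_bands)
            out = np.zeros_like(x, dtype=float)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
                out += np.cos(np.angle(hilbert(sosfiltfilt(sos, x))))
            return out

        def recovered_envelopes(tfs, fs, n_bands=40):
            # Refilter the TFS-only signal into narrow bands; narrowband
            # filtering partially restores ("recovers") the band envelopes.
            edges = band_edges(n_bands)
            envs = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
                envs.append(np.abs(hilbert(sosfiltfilt(sos, tfs))))
            return np.array(envs)  # shape: (n_bands, n_samples)

    Passing 16-band TFS speech through the 40-band analysis illustrates the confound the abstract describes: narrowband refiltering can regenerate envelope cues that the TFS processing was intended to remove.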

    The impact of directional listening on perceived localization ability

    An important purpose of hearing is to aid communication. Because hearing in noise is of primary importance to individuals who seek remediation for hearing impairment, it has been the primary target of technological advances, and directional microphone technology is the most promising means of addressing it. Another important role of hearing is localization, which allows one to sense one's environment and feel safe and secure. The properties of the listening environment that are altered by directional microphone technology have the potential to significantly impair localization ability. The purpose of this investigation was to determine the impact of listening with directional microphone technology on individuals' self-perceived level of localization disability and concurrent handicap. Participants included 57 unaided subjects, later randomly assigned to one of three aided groups of 19 individuals each, who used omnidirectional-microphone-only amplification, directional-microphone-only amplification, or toggle-switch-equipped hearing aids that allowed user discretion over the directional microphone properties of the instruments. Comparisons were made between the unaided group's responses and those of the subjects after they had worn amplification for three months, and between the responses of the directional-microphone-only group and those of each of the other two aided groups. No significant differences were found. Hearing aids with omnidirectional microphones, directional-only microphones, or a toggle switch neither increased nor decreased the self-perceived ability to tell the location of sound or the level of withdrawal from situations where localization ability was a factor. Likewise, directional-microphone-only technology neither significantly worsened nor improved these factors compared to the other two microphone configurations. Future research should include objective measures of localization ability using the same paradigm employed herein. If the use of directional microphone technology has an objective impact on localization, clinicians might be advised to counsel their patients to move carefully in their environment even when they do not perceive a problem with localization. If ultimately no significant differences in either objective or subjective measures are found, then concern over decreases in quality of life and safety with directional microphone use need no longer be considered.

    Auditory Filters Measured at Neighboring Center Frequencies

    Auditory filters were derived in 20 normal-hearing human listeners at center frequencies (CFs) of 913, 1095, 3651, and 4382 Hz using the roex(p,r) method. Comparisons were made between the slopes of the filters' skirts at the neighboring CFs with filter output levels of 45 and 70 dB. The same comparisons were made with regard to filter equivalent rectangular bandwidth (ERB). In the 1000-Hz region, the low-frequency slopes (Pl) of filters centered at 913 and 1095 Hz were significantly correlated at both stimulus levels, while the high-frequency slopes (Pu) were similar only at the high test level. In the 4000-Hz region, for sinusoids of 3651 and 4382 Hz, the level effect was clearer, as both Pu and Pl values diverged at the low level but were related at high levels. The ERBs centered at the same CFs displayed a similar level dependence. At the stimulus level most likely to be affected by an active feedback mechanism, auditory filters centered at nearly the same frequency displayed quite distinct frequency selectivity, and this trend was stronger in the 4000-Hz region than in the 1000-Hz region. The findings suggest that a saturating, active cochlear mechanism may not be distributed evenly, or may not contribute to peripheral tuning with equal effectiveness, throughout the length of the partition.
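    For reference, the roex(p,r) weighting function has the standard closed form W(g) = (1 - r)(1 + pg)e^(-pg) + r, where g is the deviation from the CF normalized by the CF, p sets the skirt slope, and r sets the shallow tail. The sketch below evaluates the filter shape and its ERB; the parameter values are illustrative, not the fitted values from this study.

        import numpy as np

        def roex_weight(f, fc, p_l, p_u, r=0.0):
            # W(g) = (1 - r)(1 + p*g)exp(-p*g) + r, with the slope p
            # chosen per side (p_l below the CF, p_u above it).
            g = np.abs(f - fc) / fc
            p = np.where(f < fc, p_l, p_u)
            return (1.0 - r) * (1.0 + p * g) * np.exp(-p * g) + r

        def erb_hz(fc, p_l, p_u):
            # With r = 0, each skirt of a roex filter contributes 2*fc/p
            # to the equivalent rectangular bandwidth.
            return 2.0 * fc / p_l + 2.0 * fc / p_u

        # Example: a filter centered at 1095 Hz (illustrative slopes).
        print(f"ERB ~ {erb_hz(1095.0, p_l=25.0, p_u=30.0):.0f} Hz")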

    Simulation of the effects of sensorineural hearing loss

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (leaves 103-112). By Isaac John Graf. M.S.

    The effects of hearing impairment on the ability to glimpse speech in a spectro-temporally complex noise

    The aim of this project was to investigate the effects of hearing impairment on speech perception in spectro-temporally complex noise. The specific objective was to psychophysically and computationally assess speech reception in the presence of a masker that fluctuates in both time and frequency. The experiments were designed to compare hearing-impaired and normal-hearing listeners on a task which has been shown to highlight the effect of spread of masking. A previous study had shown a sizeable benefit of dichotic stimulation compared to monaural stimulation. Experiment 1 tested normal-hearing and hearing-impaired listeners on consonant recognition in the presence of an asynchronously modulated noise. We tested the primary hypotheses that spread of masking reduces available glimpsing opportunities for hearing-impaired listeners, and that removing spread of masking enhances performance relative to normal-hearing listeners. Results showed greater masking release in normal-hearing listeners compared to hearing-impaired listeners, but all listeners achieved some benefit from reducing the effects of spread of masking. Experiment 2 tested consonant recognition in masking conditions similar to those of Experiment 1, testing normal-hearing listeners with simulated reduced audibility and reduced frequency resolution. We tested the primary hypothesis that reduced audibility is not the only limiting factor for hearing-impaired listeners' glimpsing of speech, but rather that reduced frequency resolution also plays an important role in the ability to glimpse speech in spectro-temporally complex noise. Results showed that while reduced audibility was a key factor, reduced frequency resolution also contributed to the deficits seen in Experiment 1. Experiment 3 tested a computational glimpsing model. We tested the hypotheses that spectral resolution plays a key role in glimpsing for both normal-hearing and hearing-impaired listeners and that, by analyzing dichotically presented stimuli, the model would predict the benefit seen in the behavioral data. Results indicated that the behavioral data could be accurately predicted by the model, although in some cases the model outperformed listeners with simulated hearing loss. These studies contribute to a better understanding of the factors responsible for hearing-impaired listeners' reduced ability to follow speech in complex backgrounds, with implications for auditory prosthesis design. Doctor of Philosophy.
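    The glimpse-detection stage at the heart of such a model can be sketched compactly. The version below follows a common formulation (in the spirit of Cooke's glimpsing model), with an STFT front end and a 3-dB local-SNR criterion; both are assumptions, not this project's exact implementation.

        import numpy as np
        from scipy.signal import stft

        def glimpse_proportion(speech, masker, fs, criterion_db=3.0):
            # Spectro-temporal energy of speech and masker, computed from
            # the separate signals before mixing.
            _, _, S = stft(speech, fs=fs, nperseg=512)
            _, _, M = stft(masker, fs=fs, nperseg=512)
            eps = 1e-12  # guard against log of zero
            local_snr = 10.0 * np.log10(
                (np.abs(S) ** 2 + eps) / (np.abs(M) ** 2 + eps))
            # A glimpse is a time-frequency cell where speech exceeds the
            # masker by the criterion; the proportion of such cells is a
            # simple predictor of intelligibility.
            return np.mean(local_snr > criterion_db)

    Reduced frequency resolution can be probed in this framework by smearing the spectra across frequency before the comparison, which shrinks the pool of usable glimpses.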

    Physiology-based model of multi-source auditory processing

    Our auditory systems have evolved to process a myriad of acoustic environments. In complex listening scenarios, we can tune our attention to one sound source (e.g., a conversation partner) while monitoring the entire acoustic space for cues we might be interested in (e.g., our names being called, or the fire alarm going off). While normal-hearing listeners handle complex listening scenarios remarkably well, hearing-impaired listeners experience difficulty even when wearing hearing-assist devices. This thesis presents both theoretical work toward understanding the neural mechanisms behind this process and the application of neural models to segregate mixed sources and potentially help the hearing-impaired population. On the theoretical side, auditory spatial processing has been studied primarily up to the midbrain region, and studies have shown how individual neurons can localize sounds using spatial cues. Yet how higher brain regions such as the cortex use this information to process multiple sounds in competition is not clear. This thesis demonstrates a physiology-based spiking neural network model, which provides a mechanism illustrating how the auditory cortex may organize upstream spatial information when there are multiple competing sound sources in space. Based on this model, an engineering solution to help hearing-impaired listeners segregate mixed auditory inputs is proposed. Using the neural model to perform sound segregation in the neural domain, the neural outputs (representing the source of interest) are reconstructed back to the acoustic domain using a novel stimulus reconstruction method.
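    The final resynthesis step can be illustrated with a generic mask-based reconstruction. This is only a minimal stand-in for the thesis's novel stimulus-reconstruction method; it assumes the neural output has already been summarized as a time-frequency mask over the mixture.

        import numpy as np
        from scipy.signal import stft, istft

        def resynthesize(mixture, mask, fs, nperseg=512):
            # Weight the mixture's time-frequency representation by the
            # mask (values in [0, 1]) and invert back to a waveform.
            _, _, Z = stft(mixture, fs=fs, nperseg=nperseg)
            assert mask.shape == Z.shape, "mask must match the STFT grid"
            _, y = istft(Z * mask, fs=fs, nperseg=nperseg)
            return y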

    Spectral Integration and Bandwidth Effects on Speech Recognition in School-Aged Children and Adults

    Previous studies have shown that adult listeners are more adept than child listeners at identifying spectrally degraded speech. However, the development of the ability to combine speech information from different frequency regions has received little previous attention. The purpose of the present study was to determine the effect of age on the bandwidth necessary to achieve a relatively low criterion level of speech recognition for each of two frequency bands, and then to determine the improvement in speech recognition that resulted when both speech bands were presented simultaneously.

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing
