
    Communications Biophysics

    Contains reports on two research projects.

    National Institutes of Health (Grant 1 PO1 GM-14940-02); Joint Services Electronics Programs (U. S. Army, U. S. Navy, and U. S. Air Force) under Contract DA 28-043-AMC-02536(E)

    The role of recovered envelope cues in the identification of temporal-fine-structure speech for hearing-impaired listeners

    Narrowband speech can be separated into fast temporal cues [temporal fine structure (TFS)] and slow amplitude modulations (envelope). Speech processed to contain only TFS leads to envelope recovery through cochlear filtering, which has been suggested to account for TFS-speech intelligibility for normal-hearing listeners. Hearing-impaired listeners have deficits with TFS-speech identification, but the contribution of recovered-envelope cues to these deficits is unknown. This was assessed for hearing-impaired listeners by measuring identification of disyllables processed to contain TFS or recovered-envelope cues. Hearing-impaired listeners performed worse than normal-hearing listeners, but TFS-speech intelligibility was accounted for by recovered-envelope cues for both groups.

    National Institutes of Health (U.S.) (Grant R43 DC013006); National Institutes of Health (U.S.) (Grant R01 DC00117)
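    The envelope/TFS decomposition described above is conventionally computed from the analytic signal: the Hilbert envelope gives the slow amplitude modulation, and the cosine of the instantaneous phase gives the TFS carrier. A minimal sketch (not the authors' processing chain; the test signal and parameters are illustrative assumptions):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_tfs(x):
    """Split a narrowband signal into Hilbert envelope and TFS carrier."""
    analytic = hilbert(x)             # x + j * Hilbert{x}
    env = np.abs(analytic)            # slow amplitude modulation (envelope)
    tfs = np.cos(np.angle(analytic))  # fast temporal fine structure
    return env, tfs

fs = 16000
t = np.arange(0, 0.1, 1.0 / fs)
# 1-kHz carrier with a 4-Hz amplitude modulation, as a stand-in narrowband signal
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env, tfs = envelope_tfs(x)
# env * tfs reconstructs x, up to edge effects of the FFT-based transform
```

    Multiplying the envelope of one band onto the TFS carrier of another is the basic operation behind "TFS-only" and "envelope-only" speech stimuli.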

    Dissociation of Detection and Discrimination of Pure Tones following Bilateral Lesions of Auditory Cortex

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and in a middle-aged man with complete or near-complete bilateral lesions of the auditory cortex. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

    United States. National Institutes of Health (DC03328); United States. National Institutes of Health (DC006353); United States. National Institutes of Health (DC00117); United States. National Institutes of Health (T32-DC00038)

    Masking Release for Igbo and English

    In this research, we explored the effect of noise interruption rate on speech intelligibility. Specifically, we used the Hearing In Noise Test (HINT) procedure with the original HINT stimuli (English) and Igbo stimuli to assess speech reception ability in interrupted noise. For a given noise level, the HINT test provides an estimate of the signal-to-noise ratio (SNR) required for 50%-correct speech intelligibility. The SNR for 50%-correct intelligibility changes depending upon the interruption rate of the noise. This phenomenon (called masking release) has been studied extensively in English but not in Igbo, an African tonal language spoken predominantly in southeastern Nigeria. This experiment explored and compared the phenomenon of masking release for (i) native English speakers listening to English, (ii) native Igbo speakers listening to English, and (iii) native Igbo speakers listening to Igbo. Since Igbo is a tonal language and English is a non-tonal language, this allowed us to compare masking-release patterns in native speakers of tonal and non-tonal languages. Our results for native English speakers listening to English HINT show that the SNR and the masking release are orderly and consistent with other English HINT data for English speakers. Our results for Igbo speakers listening to English HINT sentences show greater variability across the Igbo listeners than across the English listeners, which likely reflects different levels of English-language ability among the Igbo listeners; their masking-release values (in dB) are also smaller than those of the English listeners. Our results for Igbo speakers listening to Igbo show that, in general, the SNRs for Igbo sentences are lower than for English/English and Igbo/English. This means that the Igbo listeners could understand 50% of the Igbo sentences at SNRs lower than those required for English sentences by either native or non-native listeners.
    This result can be explained by the fact that the perception of Igbo utterances by Igbo subjects may have been aided by prediction of the tonal and vowel-harmony features of the Igbo language. In agreement with other studies, our results also show that in a noisy environment listeners are able to perceive their native language better than a second language. This advantage may be attributed to two factors: (a) native speakers are more familiar with the sounds of their language than second-language speakers are, and (b) language is predictable, so even in noise a native speaker may be able to predict a succeeding word that is scarcely audible. These contextual effects are facilitated by familiarity.

    National Institutes of Health (U.S.) (Grant R01 DC00117)
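    An SNR-for-50%-correct estimate of the kind HINT produces is typically obtained with a 1-up/1-down adaptive track, which by construction converges on the 50% point of the psychometric function. A hedged sketch with a simulated listener (the logistic slope, step size, and trial count are illustrative assumptions, not HINT parameters):

```python
import math
import random

def adaptive_srt(p_correct, start_snr=0.0, step=2.0, trials=40, seed=1):
    """1-up/1-down track: SNR drops after a correct trial and rises after an
    error, so the track oscillates around the 50%-correct point."""
    rng = random.Random(seed)
    snr, track = start_snr, []
    for _ in range(trials):
        correct = rng.random() < p_correct(snr)
        snr += -step if correct else step
        track.append(snr)
    return sum(track[-10:]) / 10.0  # average over the oscillating tail

true_srt = -6.0  # hypothetical 50% point, in dB SNR
listener = lambda s: 1.0 / (1.0 + math.exp(-(s - true_srt)))
est = adaptive_srt(listener)
```

    Masking release is then simply the difference between the SRTs measured in continuous and in interrupted noise.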

    Level variations in speech: Effect on masking release in hearing-impaired listeners

    Acoustic speech is marked by time-varying changes in the amplitude envelope that may pose difficulties for hearing-impaired listeners. Removal of these variations (e.g., by the Hilbert transform) could improve speech reception for such listeners, particularly in fluctuating interference. Léger, Reed, Desloge, Swaminathan, and Braida [(2015b). J. Acoust. Soc. Am. 138, 389–403] observed that a normalized measure of masking release obtained for hearing-impaired listeners using speech processed to preserve temporal fine-structure (TFS) cues was larger than that for unprocessed or envelope-based speech. This study measured masking release for two other speech signals in which level variations were minimal: peak clipping and TFS processing of an envelope signal. Consonant identification was measured for hearing-impaired listeners in backgrounds of continuous and fluctuating speech-shaped noise. The normalized masking release obtained using speech with normal variations in overall level was substantially less than that observed using speech processed to achieve highly restricted level variations. These results suggest that the performance of hearing-impaired listeners in fluctuating noise may be improved by signal processing that leads to a decrease in stimulus level variations.

    National Institutes of Health (U.S.) (R01DC000117)
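    Of the two low-level-variation signals, infinite peak clipping is the simplest to illustrate: it discards the amplitude envelope entirely and retains only the zero-crossing (TFS-like) information. A minimal sketch (the test signal is an illustrative assumption, not the study's stimuli):

```python
import numpy as np

def peak_clip(x):
    """Infinite peak clipping: output is +/-1, preserving only zero crossings."""
    y = np.sign(x)
    y[y == 0] = 1.0  # avoid emitting 0 at exact-zero samples
    return y

fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
# amplitude-modulated tone standing in for a speech band
x = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 500 * t)
clipped = peak_clip(x)  # all level variation removed
```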

    Consonant identification using temporal fine structure and recovered envelope cues

    The contribution of recovered envelopes (RENVs) to the utilization of temporal-fine structure (TFS) speech cues was examined in normal-hearing listeners. Consonant identification experiments used speech stimuli processed to present TFS or RENV cues. Experiment 1 examined the effects of exposure and presentation order using 16-band TFS speech and 40-band RENV speech recovered from 16-band TFS speech. Prior exposure to TFS speech aided in the reception of RENV speech. Performance on the two conditions was similar (∼50%-correct) for experienced listeners, as was the pattern of consonant confusions. Experiment 2 examined the effect of varying the number of RENV bands recovered from 16-band TFS speech. Mean identification scores decreased as the number of RENV bands decreased from 40 to 8 and were only slightly above chance levels for 16 and 8 bands. Experiment 3 examined the effect of varying the number of bands in the TFS speech from which 40-band RENV speech was constructed. Performance fell from 85%- to 31%-correct as the number of TFS bands increased from 1 to 32. Overall, these results suggest that the interpretation of previous studies that have used TFS speech may have been confounded with the presence of RENVs.

    National Institutes of Health (U.S.) (Grant R01 DC00117); National Institutes of Health (U.S.) (Grant R43 DC013006)

    Auditory and tactile gap discrimination by observers with normal and impaired hearing

    Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21–69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250–250, 250–400, 400–250, and 400–400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.

    National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R01 DC000117)
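    The gap stimuli described above (two 100-ms sinusoidal markers separated by a silent gap) are straightforward to construct; the marker frequencies, gap duration, and marker length follow the text, while the sampling rate is an illustrative assumption:

```python
import numpy as np

def gap_stimulus(f_lead, f_trail, gap_ms, fs=16000, marker_ms=100):
    """Leading marker, silent gap, trailing marker, concatenated in time."""
    t = np.arange(int(fs * marker_ms / 1000)) / fs
    lead = np.sin(2 * np.pi * f_lead * t)
    trail = np.sin(2 * np.pi * f_trail * t)
    gap = np.zeros(int(fs * gap_ms / 1000))
    return np.concatenate([lead, gap, trail])

# 250-Hz leading and 400-Hz trailing markers around the 6-ms baseline gap
stim = gap_stimulus(250, 400, gap_ms=6)
```

    A GDDT experiment then varies `gap_ms` adaptively around the 6-ms baseline until the duration difference is just noticeable.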

    Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: an omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution.

    National Institutes of Health (U.S.) (Grant R01 DC 000117); National Institutes of Health (U.S.) (Grant R01 DC7152); National Institutes of Health (U.S.) (Grant 2R44DC010524-02)
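    The dipole-directional baseline against which the algorithm was compared can be sketched analytically: subtracting the two closely spaced microphone signals yields a figure-eight pattern with nulls broadside to the microphone axis. A hedged free-field plane-wave illustration (the spacing and sound speed are assumptions, not the device's parameters):

```python
import numpy as np

def dipole_gain(theta_deg, freq_hz, d=0.01, c=343.0):
    """Magnitude response of a two-mic subtractive (dipole) pair.
    theta_deg is the arrival angle relative to the microphone axis."""
    tau = (d / c) * np.cos(np.radians(theta_deg))  # inter-mic travel delay
    return np.abs(1.0 - np.exp(-2j * np.pi * freq_hz * tau))

on_axis = dipole_gain(0.0, 1000.0)     # sound arriving along the mic axis
broadside = dipole_gain(90.0, 1000.0)  # null at 90 degrees
```

    Note that simple subtraction also attenuates on-axis sound at low frequencies, which is why practical differential arrays equalize the output.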

    Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing

    Purpose: Improved speech recognition in binaurally combined acoustic–electric stimulation (otherwise known as bimodal hearing) could arise when listeners integrate speech cues from the acoustic and electric hearing. The aims of this study were (a) to identify speech cues extracted in electric hearing and residual acoustic hearing in the low-frequency region and (b) to investigate cochlear implant (CI) users' ability to integrate speech cues across frequencies. Method: Normal-hearing (NH) and CI subjects participated in consonant and vowel identification tasks. Each subject was tested in 3 listening conditions: CI alone (vocoder speech for NH), hearing aid (HA) alone (low-pass filtered speech for NH), and both. Integration ability for each subject was evaluated using a model of optimal integration—the PreLabeling integration model (Braida, 1991). Results: Only a few CI listeners demonstrated bimodal benefit for phoneme identification in quiet. Speech cues extracted from the CI and the HA were highly redundant for consonants but were complementary for vowels. CI listeners also exhibited reduced integration ability for both consonant and vowel identification compared with their NH counterparts. Conclusion: These findings suggest that reduced bimodal benefits in CI listeners are due to insufficient complementary speech cues across ears, a decrease in integration ability, or both.

    National Organization for Hearing Research; National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R03 DC009684-01); National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R01 DC007152-02)
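    For intuition about "optimal integration": the standard ideal-observer prediction for two independent channels is that sensitivities (d') add in quadrature, and the PreLabeling model (Braida, 1991) generalizes this idea to multidimensional phoneme-confusion data. A toy sketch of the independent-channels case only (not the PreLabeling model itself):

```python
import math

def optimal_dprime(d_acoustic, d_electric):
    """Ideal-observer sensitivity for two independent channels combined
    optimally: the quadrature sum of the per-channel sensitivities."""
    return math.sqrt(d_acoustic ** 2 + d_electric ** 2)

combined = optimal_dprime(1.0, 2.0)  # exceeds either channel alone
```

    Observed bimodal performance falling below such a bound is what the study interprets as reduced integration ability.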