
    The relation between speech recognition in noise and the speech-evoked brainstem response in normal-hearing and hearing-impaired individuals

    Little is known about how speech in noise is processed along the auditory pathway. The purpose of this study was to evaluate the relation between listening in noise, measured with the R-Space system, and the speech-evoked auditory brainstem response recorded in quiet and in noise in adult participants with mild to moderate hearing loss and with normal hearing.

    Impact of Noise and Working Memory on Speech Processing in Adults With and Without ADHD

    Auditory processing of speech is influenced by internal factors (e.g., attention, working memory) and external factors (e.g., background noise, visual information). This study examined the interplay among these factors in individuals with and without ADHD. All participants completed a listening-in-noise task, two working memory capacity tasks, and two short-term memory tasks. The listening-in-noise task had both an auditory and an audiovisual condition. Participants were 38 young adults (ages 18-35) without ADHD and 25 young adults (ages 18-35) with ADHD. Results indicated main effects of diagnosis, modality, and signal-to-noise ratio on a person's ability to process speech in noise, as well as an interaction among ADHD diagnosis, the presence of visual cues, and the level of noise. Specifically, young adults with ADHD benefited less from visual information in noise than young adults without ADHD, an effect influenced by working memory abilities. These speech processing results are discussed in relation to theoretical models of stochastic resonance and working memory capacity. Implications for speech-language pathologists and educators are also discussed.
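
    Several of the studies in this listing manipulate the signal-to-noise ratio directly. As a point of reference, below is a minimal sketch of how speech is typically mixed with noise at a target SNR; the RMS-based gain derivation is standard, but the variable names and example parameters are illustrative assumptions, not any study's actual code.

        import numpy as np

        def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
            """Scale `noise` so the speech-to-noise power ratio equals `snr_db`."""
            noise = noise[: len(speech)]                    # trim noise to speech length
            rms_s = np.sqrt(np.mean(speech ** 2))           # speech RMS
            rms_n = np.sqrt(np.mean(noise ** 2))            # noise RMS
            gain = rms_s / (rms_n * 10 ** (snr_db / 20.0))  # gain yielding the target SNR
            return speech + gain * noise

        # Example: the same stand-in signals mixed at a favorable and an
        # unfavorable SNR, as in the easy and hard conditions of such tasks.
        fs = 16_000
        rng = np.random.default_rng(0)
        speech, noise = rng.standard_normal(fs), rng.standard_normal(fs)
        easy = mix_at_snr(speech, noise, snr_db=5.0)
        hard = mix_at_snr(speech, noise, snr_db=-5.0)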

    Does Talker Familiarity or Time of Testing Facilitate Sentence Recognition When Listening in Noise?

    At the most elementary level, the speech signal comprises two parts: linguistic information and indexical information. Linguistic information is the phonetic content of the signal; indexical information is the speaker-specific, paralinguistic content. Among these indexical properties are talker-specific characteristics, which have been shown to help people understand speech. The talker-specific characteristic we examined was talker familiarity, which has been shown to help infants segment speech and to help adults listen in noise and recall stories. We asked whether talker familiarity would help typically developing adults listen in ecologically valid background noise. We hypothesized two significant main effects and an interaction. The study had two independent variables, talker (familiar, novel) and time of testing (Time 1, Time 2), and the dependent variable was keyword accuracy. A total of 93 individuals participated in this study; 41 of them were familiar with the talker because the talker was their university professor. Our results showed a main effect of talker and a main effect of time of testing, but no interaction between talker and time of testing. Implications are discussed.

    Contralateral inhibition of click- and chirp-evoked human compound action potentials

    Cochlear outer hair cells (OHC) receive direct efferent feedback from the caudal auditory brainstem via the medial olivocochlear (MOC) bundle. This circuit provides the neural substrate for the MOC reflex, which inhibits cochlear amplifier gain and is believed to play a role in listening in noise and protection from acoustic overexposure. The human MOC reflex has been studied extensively using otoacoustic emissions (OAE) paradigms; however, these measurements are insensitive to subsequent “downstream” efferent effects on the neural ensembles that mediate hearing. In this experiment, click- and chirp-evoked auditory nerve compound action potential (CAP) amplitudes were measured electrocochleographically from the human eardrum without and with MOC reflex activation elicited by contralateral broadband noise. We hypothesized that the chirp would be a more optimal stimulus for measuring neural MOC effects because it synchronizes excitation along the entire length of the basilar membrane and thus evokes a more robust CAP than a click at low to moderate stimulus levels. Chirps produced larger CAPs than clicks at all stimulus intensities (50–80 dB ppeSPL). MOC reflex inhibition of CAPs was larger for chirps than clicks at low stimulus levels when quantified both in terms of amplitude reduction and effective attenuation. Effective attenuation was larger for chirp- and click-evoked CAPs than for click-evoked OAEs measured from the same subjects. Our results suggest that the chirp is an optimal stimulus for evoking CAPs at low stimulus intensities and for assessing MOC reflex effects on the auditory nerve. Further, our work supports previous findings that MOC reflex effects at the level of the auditory nerve are underestimated by measures of OAE inhibition.
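
    For readers unfamiliar with the two stimulus types compared here, the sketch below contrasts a brief click with an upward chirp that presents low frequencies first, so that apical and basal cochlear regions respond more synchronously. Durations, sample rate, and sweep limits are illustrative assumptions; clinical chirps are built from cochlear delay models rather than a plain logarithmic sweep.

        import numpy as np
        from scipy.signal import chirp

        fs = 44_100                      # sample rate (Hz)
        n = int(0.010 * fs)              # 10 ms stimulus window
        t = np.arange(n) / fs

        # Click: a ~100-microsecond rectangular pulse at the start of the window.
        click = np.zeros(n)
        click[: max(1, int(1e-4 * fs))] = 1.0

        # Chirp: logarithmic sweep from 100 Hz to 10 kHz, low frequencies first.
        up_chirp = chirp(t, f0=100.0, t1=0.010, f1=10_000.0, method='logarithmic')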

    Speech Recognition in Noise and Intonation Recognition in Primary-School-Aged Children, and Preliminary Results in Children with Cochlear Implants

    Fundamental frequency (F0), or voice pitch, is an important acoustic cue for speech intonation and is perceived most accurately through the fine spectral resolution of the normal human auditory system. However, relatively little is known about how young children process F0-based speech intonation cues. The fine spectral resolution required for F0 information has also been shown to be beneficial for listening in noise, a skill that normal-hearing children are required to use on a daily basis. While it is known that hearing-impaired adults with cochlear implants are at a disadvantage for intonation recognition and listening in noise following loss of fine spectral structure cues, relatively little is known about how young children with unilateral cochlear implants perform in these situations. The goal of the current study was to quantify how a group of twenty normal-hearing children (6-8 years of age) performed in a listening-in-noise task and in a speech intonation recognition task. These skills were also measured in a small group of five children of similar age with unilateral cochlear implants (all implanted before the age of five). The cochlear implant participants presumably had reduced spectral information, and it was hypothesized that this would be manifested as performance differences between the groups. In the listening-in-noise task, sentence recognition was measured in the presence of a single-talker masker at different signal-to-noise ratios. Results indicated that the participants with cochlear implants achieved significantly lower scores than the normal-hearing participants. In the intonation recognition task, listeners heard re-synthesized versions of a single bisyllabic word ("popcorn") with systematically varying F0 contours and indicated whether the speaker was "asking" or "telling" (i.e., question-like or statement-like). Both groups of children were able to use the F0 cue to perform the task, and no significant differences between the groups were observed. Although limited in scope, the results suggest that children who receive their cochlear implant before the age of five have significantly more difficulty understanding speech in noise than their normal-hearing peers. However, the two populations appear to be equally able to use F0 cues to determine speech intonation patterns.
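
    The question-versus-statement manipulation rests on the F0 contour alone. Below is a minimal sketch of synthesizing a rising ("asking") versus falling ("telling") contour by phase accumulation; note the study re-synthesized a spoken word rather than a tone, and the contour endpoints here are assumptions.

        import numpy as np

        def tone_with_contour(f0_start: float, f0_end: float,
                              dur: float, fs: int = 16_000) -> np.ndarray:
            """Sine tone whose instantaneous frequency moves linearly
            from f0_start to f0_end over dur seconds."""
            n = int(dur * fs)
            f0 = np.linspace(f0_start, f0_end, n)   # instantaneous F0 (Hz)
            phase = 2 * np.pi * np.cumsum(f0) / fs  # integrate frequency -> phase
            return np.sin(phase)

        question = tone_with_contour(120.0, 220.0, dur=0.5)   # rising contour
        statement = tone_with_contour(220.0, 120.0, dur=0.5)  # falling contour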

    A study on the relationship between the intelligibility and quality of algorithmically-modified speech for normal hearing listeners

    This study investigates the relationship between the intelligibility and quality of modified speech in noise and in quiet. Speech signals were processed by seven algorithms designed to increase speech intelligibility in noise without altering overall speech intensity. In three noise maskers, including both stationary and fluctuating noise at two signal-to-noise ratios (SNRs), listeners identified keywords from unmodified or modified sentences. The intelligibility of each type of speech was measured as the listeners' word recognition rate in each condition, while quality was rated as a mean opinion score. In quiet, only the perceptual quality of each type of speech was assessed. The results suggest that when listening in noise, an algorithm's ability to improve intelligibility matters more than any negative impact it has on speech quality. However, when listening in quiet, or at SNRs at which intelligibility is no longer an issue for listeners, the impact of modification on speech quality becomes a concern.
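
    The two outcome measures here are simple to state precisely. Below is a hedged sketch of both, with made-up data structures; real scoring protocols typically also handle morphological variants and homophones in keyword matching.

        def recognition_rate(responses: list[str], keywords: list[set[str]]) -> float:
            """Fraction of keywords reported correctly, pooled over sentences."""
            hit = total = 0
            for resp, keys in zip(responses, keywords):
                words = set(resp.lower().split())
                hit += len(keys & words)    # keywords present in the response
                total += len(keys)
            return hit / total

        def mean_opinion_score(ratings: list[int]) -> float:
            """Average of listener quality ratings on a 1-5 scale."""
            return sum(ratings) / len(ratings)

        rate = recognition_rate(["the cat sat on the mat"], [{"cat", "sat", "mat"}])  # 1.0
        mos = mean_opinion_score([4, 3, 5, 4])                                        # 4.0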

    Tuned In: An Investigation of the Use of Group Amplification Systems for Students, Including Those on the Autism Spectrum, in First Grade Mainstream Classrooms

    The purpose of this study was to determine the academic benefits and challenges, if any, of utilizing a group amplification system in first-grade mainstream classrooms. More specifically, this study measured the influence of a group amplification system on language-based tasks such as spelling accuracy. A total of 33 first-grade students, including two students reportedly diagnosed with an Autism Spectrum Disorder (ASD), participated in the study, with 17 students in Classroom A and 16 students in Classroom B. The experimental procedures included a spelling pretest, two intervention activities, and a spelling posttest, administered over the course of four days. The spelling pretest comprised 10 grade-level words and was administered to each classroom without the use of an amplification device. In the intervention activities, students created tongue twisters and played a “Spelling Word” Bingo game. The spelling posttest comprised the same 10 grade-level words. During the intervention and posttest procedures, the researcher used a group amplification system in Classroom A, while Classroom B served as a control without amplification. Overall, students in Classroom A demonstrated significantly greater change scores from spelling pretest to posttest than students in Classroom B. The use of a group amplification system appeared to benefit students in Classroom A by improving the signal-to-noise ratio. Findings suggest this hearing assistive technology was effective in the first-grade mainstream classroom.
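
    The analysis implied above compares change scores (posttest minus pretest) between an amplified and an unamplified classroom. A sketch of that comparison with an independent-samples t-test follows; the scores are made-up placeholders, not study data.

        import numpy as np
        from scipy.stats import ttest_ind

        pre_a = np.array([6, 7, 5, 8])    # Classroom A (amplified) pretest
        post_a = np.array([9, 9, 8, 10])  # Classroom A posttest
        pre_b = np.array([6, 7, 5, 8])    # Classroom B (control) pretest
        post_b = np.array([7, 8, 6, 8])   # Classroom B posttest

        # Compare per-student change scores across the two independent groups.
        t_stat, p_value = ttest_ind(post_a - pre_a, post_b - pre_b)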

    Bone conductive implants in single sided deafness

    Conclusion: Bone conductive implants (BCI) were shown to partly restore some of the functions lost when binaural hearing is missing, as in subjects with single-sided deafness (SSD). Adoption of a single BCI should be advised by the clinician on the basis of thorough counselling with the SSD subject. Objectives: To provide an overview of the current possibilities of BCI in SSD and to evaluate the reliability of audiological evaluation for assessing speech recognition in noise and sound localization cues, the major problems related to the loss of binaural hearing. Method: Nine SSD subjects who underwent BCI implantation received a pre-operative audiological evaluation consisting of sound-field speech audiometry, scored as word recognition score (WRS), and sound localization testing, in quiet and in noise. They were also tested for the accuracy of directional word recognition in noise and completed the APHAB questionnaire as a subjective evaluation. Results: The mean maximum word discrimination score was 65.5% in the unaided condition and 78.9% in the BCI condition. Sound localization in noise was better with the BCI than in the unaided condition, especially when stimulus and noise were on the same side as the implanted ear. The accuracy of directional word recognition improved with the BCI relative to the unaided condition on the BCI side, both when the stimulus was at the implanted ear and the noise at the contralateral ear, and when both stimulus and noise were delivered to the implanted ear.

    Treatment of Auditory Processing in Noise in Individuals With Mild Aphasia: Pilot Study

    Purpose: Listening in noise is challenging for individuals with the auditory comprehension impairments of aphasia. We examined the effects of Trivia Game, a computerized program in which questions are spoken in background noise that increases in level as the player succeeds in the game. Methods: We piloted Trivia Game with four individuals with chronic aphasia and mild auditory comprehension impairments. Participants played Trivia Game for 12 twenty-minute sessions. In addition to the Western Aphasia Battery (WAB), we measured outcomes on the Quick Speech in Noise (QSIN) test, a sentence repetition test, administered in auditory (AUD) and auditory+visual (AV) conditions as the signal-to-noise ratio varied from 25 to 0 dB. Results: All four participants progressed within the game in the noise level attained. Repetition accuracy increased for two participants in the QSIN AUD condition (average of 5.5 words) and for three participants in QSIN AV (average of 16.5 words). One individual improved on the WAB. Conclusions: Use of Trivia Game led to improved auditory processing abilities in all four individuals with aphasia. Greater gains in the AV condition than in AUD suggest that Trivia Game may facilitate speech-reading skills that support comprehension of speech in situations with background noise.
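
    The core game mechanic, noise that increases with success, amounts to a descending SNR staircase over the 25 to 0 dB range the authors used for testing. Below is a minimal sketch under that assumption; the class and parameter names are illustrative, not the Trivia Game implementation.

        class NoiseStaircase:
            """Tracks the SNR at which the next question is presented."""

            def __init__(self, start_snr_db: float = 25.0,
                         floor_snr_db: float = 0.0, step_db: float = 5.0):
                self.snr_db = start_snr_db
                self.floor_snr_db = floor_snr_db
                self.step_db = step_db

            def update(self, correct: bool) -> float:
                """Lower the SNR (more noise) after a correct answer; hold otherwise.
                Whether the game also backs off after errors is not stated above."""
                if correct:
                    self.snr_db = max(self.floor_snr_db, self.snr_db - self.step_db)
                return self.snr_db

        game = NoiseStaircase()
        for answer in (True, True, False, True):
            next_snr = game.update(answer)   # SNR for the following question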

    Advantage Accented? Listener Differences in Understanding Speech in Noise

    Cross-dialectal communication results in poorer performance than within-dialect communication in a variety of listening tasks. However, some listeners appear to be less affected than others, and this paper explores the factors behind interlistener variation in a listening-in-noise task. Sixty-three native speakers of American English transcribed 120 HINT sentences presented in noise at -3 dB SNR. The sentences had been recorded by six young males: two speakers of Standard American English (SAE), two speakers of Southern American English (STH), and two non-native, L1-Chinese speakers (NNS). Participants were asked to transcribe what they heard as best they could and were scored on keywords correct. While everyone did much worse with NNS than with SAE and STH, participants who self-reported being accented did significantly better with STH (and trended better with NNS) than those who reported being unaccented. Additionally, participants who reported being in a good mood did significantly better with SAE sentences than those who reported being in a bad mood. Finally, there was a main effect of extraversion, such that extraverts did worse overall than introverts. The results suggest that individual differences account for some of the interlistener variation in cross-dialectal listening tasks, and exploring these metrics further may help us understand the cognitive mechanisms involved in processing unfamiliar dialects.