40 research outputs found

    Talker Adaptation and Lexical Difficulty Impact Word Recognition in Adults with Cochlear Implants

    Introduction: Talker-specific adaptation facilitates speech recognition in normal-hearing listeners. This study examined talker adaptation in adult cochlear implant (CI) users. Three hypotheses were tested: (1) high-performing adult CI users show improved word recognition following exposure to a talker ("talker adaptation"), particularly for lexically hard words; (2) individual performance is determined by auditory sensitivity and neurocognitive skills; and (3) individual performance relates to real-world functioning. Methods: Fifteen high-performing, post-lingually deaf adult CI users completed a word recognition task consisting of 6 single-talker blocks (3 female/3 male native English speakers); words were lexically "easy" and "hard." Recognition accuracy was assessed "early" and "late" (first vs. last 10 trials); adaptation was assessed as the difference between late and early accuracy. Participants also completed measures of spectral-temporal processing and neurocognitive skills, as well as real-world measures of multiple-talker sentence recognition and quality of life (QoL). Results: CI users showed limited talker adaptation overall, but performance improved for lexically hard words. Stronger spectral-temporal processing and neurocognitive skills were weakly to moderately associated with more accurate word recognition and greater talker adaptation for hard words. Finally, word recognition accuracy for hard words was moderately related to multiple-talker sentence recognition and QoL. Conclusion: Findings demonstrate a limited talker adaptation benefit for recognition of hard words in adult CI users. Both auditory sensitivity and neurocognitive skills contribute to performance, suggesting additional benefit from adaptation for individuals with stronger skills. Finally, processing differences related to talker adaptation and lexical difficulty may be relevant to real-world functioning.
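The adaptation metric described in this abstract, accuracy over the last 10 trials minus accuracy over the first 10, is simple to sketch. The function names, block length, and trial outcomes below are invented for illustration; they are not the study's data.

```python
# Minimal sketch of a late-minus-early talker-adaptation score.
# Trial outcomes are hypothetical 0/1 values, not data from the study.

def proportion_correct(trials):
    """Proportion of correctly recognized words in a list of 0/1 outcomes."""
    return sum(trials) / len(trials)

def adaptation_score(block, window=10):
    """Accuracy difference between the last and first `window` trials of a block."""
    early = proportion_correct(block[:window])
    late = proportion_correct(block[-window:])
    return late - early

# A listener whose accuracy improves across a 25-trial single-talker block
outcomes = [0, 1, 0, 1, 0, 1, 1, 0, 1, 0] + [1, 0, 1, 1, 1] + [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
print(round(adaptation_score(outcomes), 2))  # → 0.4 (late 0.9 vs. early 0.5)
```

A positive score indicates adaptation; computing the score separately for lexically easy and hard word lists would mirror the study's contrast.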

    Noise-Vocoded Sentence Recognition and the Use of Context in Older and Younger Adult Listeners

    Purpose: When listening to speech under adverse conditions, older adults, even with “age-normal” hearing, face challenges that may lead to poorer speech recognition than their younger peers. Older listeners generally demonstrate poorer suprathreshold auditory processing along with aging-related declines in neurocognitive functioning that may impair their ability to compensate using “top-down” cognitive–linguistic functions. This study explored top-down processing in older and younger adult listeners, specifically the use of semantic context during noise-vocoded sentence recognition. Method: Eighty-four adults with age-normal hearing (45 young normal-hearing [YNH] and 39 older normal-hearing [ONH] adults) participated. Participants were tested for recognition accuracy for two sets of noise-vocoded sentence materials: one that was semantically meaningful and the other that was syntactically appropriate but semantically anomalous. Participants were also tested for hearing ability and for neurocognitive functioning to assess working memory capacity, speed of lexical access, inhibitory control, and nonverbal fluid reasoning, as well as vocabulary knowledge. Results: The ONH and YNH listeners made use of semantic context to a similar extent. Nonverbal reasoning predicted recognition of both meaningful and anomalous sentences, whereas pure-tone average contributed additionally to anomalous sentence recognition. None of the hearing, neurocognitive, or language measures significantly predicted the amount of context gain, computed as the difference score between meaningful and anomalous sentence recognition. However, exploratory cluster analyses demonstrated four listener profiles and suggested that individuals may vary in the strategies used to recognize speech under adverse listening conditions. Conclusions: Older and younger listeners made use of sentence context to similar degrees. Nonverbal reasoning was found to be a contributor to noise-vocoded sentence recognition. However, different listeners may approach the problem of recognizing meaningful speech under adverse conditions using different strategies based on their hearing, neurocognitive, and language profiles. These findings provide support for the complexity of bottom-up and top-down interactions during speech recognition under adverse listening conditions.
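The "context gain" measure in this abstract is a per-listener difference score: accuracy for meaningful sentences minus accuracy for anomalous sentences. A minimal sketch follows; the listener IDs and accuracy values are invented, not the study's data.

```python
# Hypothetical illustration of the context-gain difference score described above.

def context_gain(meaningful_acc, anomalous_acc):
    """Benefit from semantic context: meaningful minus anomalous accuracy."""
    return meaningful_acc - anomalous_acc

# Invented per-listener proportions correct (meaningful, anomalous)
listeners = {
    "YNH_01": (0.82, 0.61),
    "ONH_01": (0.78, 0.55),
}
for listener, (meaningful, anomalous) in listeners.items():
    print(listener, round(context_gain(meaningful, anomalous), 2))
```

A larger score indicates greater reliance on semantic context; the study found similar gains in both age groups.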

    Speech and Language Outcomes in Adults and Children with Cochlear Implants

    Cochlear implants (CIs) represent a significant engineering and medical milestone in the treatment of hearing loss for both adults and children. In this review, we provide a brief overview of CI technology, describe the benefits that CIs can provide to adults and children who receive them, and discuss the specific limitations and issues faced by CI users. We emphasize the relevance of CIs to the linguistics community by demonstrating how CIs successfully provide access to spoken language. Furthermore, CI research can inform our basic understanding of spoken word recognition in adults and spoken language development in children. Linguistics research can also help us address the major clinical issue of outcome variability and motivate the development of new clinical tools to assess the unique challenges of adults and children with CIs, as well as novel interventions for individuals with poor outcomes.

    The Perception of Regional Dialects and Foreign Accents by Cochlear Implant Users

    Purpose: This preliminary research examined (a) the perception of two common sources of indexical variability in speech (regional dialects and foreign accents), and (b) the relation between indexical processing and sentence recognition among prelingually deaf, long-term cochlear implant (CI) users and normal-hearing (NH) peers. Method: Forty-three prelingually deaf adolescent and adult CI users and 44 NH peers completed a regional dialect categorization task, which consisted of identifying the region of origin of an unfamiliar talker from six dialect regions of the United States. They also completed an intelligibility rating task, which consisted of rating the intelligibility of short sentences produced by native and nonnative (foreign-accented) speakers of American English on a scale from 1 (not intelligible at all) to 7 (very intelligible). Individual performance was compared to demographic factors and sentence recognition scores. Results: Both CI and NH groups demonstrated difficulty with regional dialect categorization, but NH listeners significantly outperformed the CI users. In the intelligibility rating task, both CI and NH listeners rated foreign-accented sentences as less intelligible than native sentences; however, CI users perceived smaller differences in intelligibility between native and foreign-accented sentences. Sensitivity to accent differences was related to sentence recognition accuracy in CI users. Conclusions: Prelingually deaf, long-term CI users are sensitive to accent variability in speech, but less so than NH peers. Additionally, individual differences in CI users’ sensitivity to indexical variability were related to sentence recognition abilities, suggesting a common source of difficulty in the perception and encoding of fine acoustic–phonetic details in speech.

    High- and Low-Performing Adult Cochlear Implant Users on High-Variability Sentence Recognition: Differences in Auditory Spectral Resolution and Neurocognitive Functioning

    Background Postlingually deafened adult cochlear implant (CI) users routinely display large individual differences in the ability to recognize and understand speech, especially in adverse listening conditions. Although individual differences have been linked to several sensory ("bottom-up") and cognitive ("top-down") factors, little is currently known about the relative contributions of these factors in high- and low-performing CI users. Purpose The aim of the study was to investigate differences in sensory functioning and neurocognitive functioning between high- and low-performing CI users on the Perceptually Robust English Sentence Test Open-set (PRESTO), a high-variability sentence recognition test containing sentence materials produced by multiple male and female talkers with diverse regional accents. Research Design CI users with accuracy scores in the upper (HiPRESTO) or lower quartiles (LoPRESTO) on PRESTO in quiet completed a battery of behavioral tasks designed to assess spectral resolution and neurocognitive functioning. Study Sample Twenty-one postlingually deafened adult CI users participated, with 11 HiPRESTO and 10 LoPRESTO participants. Data Collection and Analysis A discriminant analysis was carried out to determine the extent to which measures of spectral resolution and neurocognitive functioning discriminate HiPRESTO and LoPRESTO CI users. Auditory spectral resolution was measured using the Spectral-Temporally Modulated Ripple Test (SMRT). Neurocognitive functioning was assessed with visual measures of working memory (digit span), inhibitory control (Stroop), speed of lexical/phonological access (Test of Word Reading Efficiency), and nonverbal reasoning (Raven's Progressive Matrices). Results HiPRESTO and LoPRESTO CI users were discriminated primarily by performance on the SMRT and secondarily by the Raven's test. No other neurocognitive measures contributed substantially to the discriminant function. Conclusions High- and low-performing CI users differed by spectral resolution and, to a lesser extent, nonverbal reasoning. These findings suggest that the extreme groups are determined by global factors of richness of sensory information and domain-general, nonverbal intelligence, rather than specific neurocognitive processing operations related to speech perception and spoken word recognition. Thus, although both bottom-up and top-down information contribute to speech recognition performance, low-performing CI users may not be sufficiently able to rely on neurocognitive skills specific to speech recognition to enhance processing of spectrally degraded input in adverse conditions involving high talker variability.
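The analysis this abstract describes, discriminating two groups from a set of predictor measures, can be illustrated with a stdlib-only Fisher-style linear discriminant for two groups and two predictors. The (SMRT, Raven's) values below are invented, and this sketch is not the study's actual statistical procedure, only the general idea.

```python
# Two-group, two-predictor Fisher linear discriminant: find the weight vector
# w = Sw^{-1} (mean_a - mean_b) that best separates the groups when points are
# projected onto it. All scores below are made up for illustration.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def within_scatter(rows, m):
    """2x2 within-group scatter matrix about the group mean m."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_weights(group_a, group_b):
    """Discriminant direction w = Sw^{-1} (mean_a - mean_b)."""
    ma, mb = mean_vec(group_a), mean_vec(group_b)
    sa, sb = within_scatter(group_a, ma), within_scatter(group_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Columns: (spectral-resolution score, nonverbal-reasoning score) -- invented.
hi = [(3.1, 42), (2.8, 39), (3.4, 45), (2.9, 41)]
lo = [(1.2, 35), (1.5, 30), (1.0, 33), (1.4, 36)]
w = fisher_weights(hi, lo)
print(w)  # projecting points onto w separates the two groups
```

In practice the predictors would be standardized first so that the weight magnitudes are comparable across measures; the sketch omits that step for brevity.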

    Bottom-Up Signal Quality Impacts the Role of Top-Down Cognitive-Linguistic Processing During Speech Recognition by Adults with Cochlear Implants

    HYPOTHESES: Significant variability persists in speech recognition outcomes in adults with cochlear implants (CIs). Sensory ("bottom-up") and cognitive-linguistic ("top-down") processes help explain this variability. However, the interactions of these bottom-up and top-down factors remain unclear. One hypothesis was tested: top-down processes would contribute differentially to speech recognition, depending on the fidelity of bottom-up input. BACKGROUND: Bottom-up spectro-temporal processing, assessed using a Spectral-Temporally Modulated Ripple Test (SMRT), is associated with CI speech recognition outcomes. Similarly, top-down cognitive-linguistic skills relate to outcomes, including working memory capacity, inhibition-concentration, speed of lexical access, and nonverbal reasoning. METHODS: Fifty-one adult CI users were tested for word and sentence recognition, along with performance on the SMRT and a battery of cognitive-linguistic tests. The group was divided into "low-," "intermediate-," and "high-SMRT" groups, based on SMRT scores. Separate correlation analyses were performed for each subgroup between a composite score of cognitive-linguistic processing and speech recognition. RESULTS: Associations of top-down composite scores with speech recognition were not significant for the low-SMRT group. In contrast, these associations were significant and of medium effect size (Spearman's rho = 0.44-0.46) for two sentence types for the intermediate-SMRT group. For the high-SMRT group, top-down scores were associated with both word and sentence recognition, with medium to large effect sizes (Spearman's rho = 0.45-0.58). CONCLUSIONS: Top-down processes contribute differentially to speech recognition in CI users based on the quality of bottom-up input. Findings have clinical implications for individualized treatment approaches relying on bottom-up device programming or top-down rehabilitation approaches.
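The per-subgroup analysis reported here rests on Spearman's rho, the Pearson correlation computed on ranks. A stdlib-only sketch follows; the composite and recognition scores below are invented, not the study's data.

```python
# Spearman's rank correlation, stdlib only. Ranks use the standard
# average-rank convention for ties; rho is the Pearson correlation of ranks.
# All data values are invented for illustration.

def ranks(xs):
    """1-based average ranks of xs, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean 1-based rank over the tie block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Invented subgroup data: cognitive-linguistic composite vs. sentence scores
composite = [0.2, 1.1, -0.5, 0.8, 0.0, 1.4]
sentences = [52, 78, 40, 85, 55, 70]
print(round(spearman_rho(composite, sentences), 2))  # → 0.71
```

Running this once per SMRT subgroup, as the study did, would show whether the composite-recognition association strengthens with better spectro-temporal processing.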