
    Bilateral cochlear implantation or bimodal listening in the paediatric population: retrospective analysis of decisive criteria

    Introduction: In children with bilateral severe to profound hearing loss, bilateral hearing can be achieved either by bimodal stimulation (cochlear implant plus contralateral hearing aid, CIHA) or by bilateral cochlear implantation (BICI). The aim of this study was to analyse the audiologic test protocol currently applied to decide on the mode of bilateral hearing in the paediatric population. Methods: Pre- and postoperative audiologic test results of 21 CIHA, 19 sequential BICI and 12 simultaneous BICI children were examined retrospectively. Results: The decision between simultaneous BICI and unilateral implantation was based mainly on the infant's preoperative auditory brainstem response thresholds. The evolution from CIHA to sequential BICI was based mainly on audiometric test results in the contralateral (hearing aid) ear after unilateral cochlear implantation. Preoperative audiometric thresholds in the hearing aid ear were significantly better in CIHA than in sequential BICI children (p < 0.001 and p = 0.001 in the unaided and aided condition, respectively). Decisive values obtained in the hearing aid ear in favour of BICI were an average hearing threshold at 0.5, 1, 2 and 4 kHz of at least 93 dB HL without and at least 52 dB HL with hearing aid, together with a 40% aided speech recognition score and a 70% aided score on the phoneme discrimination subtest of the Auditory Speech Sounds Evaluation test battery. Conclusions: Although pure tone audiometry offers no information about bimodal benefit, it remains the most obvious audiometric evaluation in the decision process on the mode of bilateral stimulation in the paediatric population. A theoretical test protocol for adequate evaluation of bimodal benefit in the paediatric population is proposed.
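
    As a rough illustration of how the reported cut-off values could be combined into a decision rule, the Python sketch below encodes the thresholds quoted in the abstract. The function name, the input fields, and the assumed direction of the two percentage criteria are illustrative assumptions only, not part of the study's protocol.

```python
# Illustrative sketch only: combines the cut-off values reported in the abstract
# into a single decision on the hearing-aid ear. Names and the direction of the
# percentage criteria (at or below the quoted scores) are assumptions.

def recommend_bilateral_modality(unaided_pta_dbhl: float,
                                 aided_pta_dbhl: float,
                                 aided_speech_score_pct: float,
                                 aided_phoneme_score_pct: float) -> str:
    """PTA = average threshold at 0.5, 1, 2 and 4 kHz in the hearing-aid ear."""
    favours_bici = (
        unaided_pta_dbhl >= 93              # unaided threshold criterion
        and aided_pta_dbhl >= 52            # aided threshold criterion
        and aided_speech_score_pct <= 40    # aided speech recognition score
        and aided_phoneme_score_pct <= 70   # aided phoneme discrimination score
    )
    return "BICI" if favours_bici else "CIHA"

print(recommend_bilateral_modality(98, 60, 35, 55))  # -> "BICI"
```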

    Bottom-Up Signal Quality Impacts the Role of Top-Down Cognitive-Linguistic Processing During Speech Recognition by Adults with Cochlear Implants

    HYPOTHESES: Significant variability persists in speech recognition outcomes in adults with cochlear implants (CIs). Sensory ("bottom-up") and cognitive-linguistic ("top-down") processes help explain this variability, but the interactions of these bottom-up and top-down factors remain unclear. One hypothesis was tested: top-down processes would contribute differentially to speech recognition, depending on the fidelity of the bottom-up input. BACKGROUND: Bottom-up spectro-temporal processing, assessed using a Spectral-Temporally Modulated Ripple Test (SMRT), is associated with CI speech recognition outcomes. Similarly, top-down cognitive-linguistic skills, including working memory capacity, inhibition-concentration, speed of lexical access, and nonverbal reasoning, relate to outcomes. METHODS: Fifty-one adult CI users were tested for word and sentence recognition, along with performance on the SMRT and a battery of cognitive-linguistic tests. The group was divided into "low-," "intermediate-," and "high-SMRT" subgroups based on SMRT scores. Separate correlation analyses between a composite score of cognitive-linguistic processing and speech recognition were performed for each subgroup. RESULTS: Associations of top-down composite scores with speech recognition were not significant for the low-SMRT group. In contrast, these associations were significant and of medium effect size (Spearman's rho = 0.44-0.46) for two sentence types for the intermediate-SMRT group. For the high-SMRT group, top-down scores were associated with both word and sentence recognition, with medium to large effect sizes (Spearman's rho = 0.45-0.58). CONCLUSIONS: Top-down processes contribute differentially to speech recognition in CI users depending on the quality of the bottom-up input. The findings have clinical implications for individualized treatment approaches relying on bottom-up device programming or top-down rehabilitation approaches.
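
    A minimal sketch of the kind of subgroup analysis the abstract describes, run on synthetic data: participants are split into SMRT tertiles, and a Spearman correlation between a top-down composite score and sentence recognition is computed within each subgroup. Column names and values are illustrative assumptions, not the study's dataset.

```python
# Illustrative subgroup correlation analysis on synthetic data.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 51  # matches the reported sample size
df = pd.DataFrame({
    "smrt": rng.normal(3.0, 1.0, n),               # spectral ripple thresholds (synthetic)
    "topdown_composite": rng.normal(0.0, 1.0, n),  # composite cognitive-linguistic score (synthetic)
    "sentence_pct": rng.uniform(20, 95, n),        # sentence recognition, % correct (synthetic)
})

# Tertile split mirrors the low/intermediate/high SMRT grouping described above.
df["smrt_group"] = pd.qcut(df["smrt"], q=3, labels=["low", "intermediate", "high"])
for name, sub in df.groupby("smrt_group", observed=True):
    rho, p = spearmanr(sub["topdown_composite"], sub["sentence_pct"])
    print(f"{name}-SMRT group: rho = {rho:.2f}, p = {p:.3f}")
```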

    Effects of aging on voice-pitch processing: the role of spectral and temporal cues

    Declines in auditory temporal processing are a common consequence of natural aging. Interactions between aging and spectro-temporal pitch processing have yet to be thoroughly investigated in humans, though recent neurophysiologic and electrophysiologic data lend support to the notion that periodicity coding based only on unresolved harmonics (i.e., those available via the temporal envelope) is negatively affected by age. Individuals with cochlear implants (CIs) must rely on the temporal envelope of speech to glean information about voice pitch [coded through the fundamental frequency (f0)], as spectral f0 cues are not available. While cochlear implants have been shown to be efficacious in older adults, it is hypothesized that older recipients would experience difficulty perceiving spectrally degraded voice-pitch information. The current experiments were aimed at quantifying the ability of younger and older listeners to utilize spectro-temporal cues to obtain voice pitch information when performing simple and complex auditory tasks. Experiment 1 measured the ability of younger and older normal-hearing (NH) listeners to perceive a difference in the modulation frequency of amplitude-modulated broadband noise, thereby exploiting only temporal envelope cues to perform the task. Experiment 2 measured age-related differences in f0 difference limens as the degree of spectral degradation was manipulated to approximate CI processing. Results from Experiments 1 and 2 demonstrated that spectro-temporal processing of f0 information in non-speech stimuli is affected in older adults. Experiment 3 showed that the age-related performance differences observed in Experiments 1 and 2 translated to voice gender identification using a natural speech stimulus. Experiment 4 estimated how well younger and older NH listeners can utilize differences in voice pitch in everyday listening environments (i.e., speech in noise) and how such abilities are affected by spectral degradation. Together, the results provide further insight into pitch coding in both normal and impaired auditory systems, and demonstrate that spectro-temporal pitch processing depends on the age of the listener. The results could have important implications for elderly cochlear implant recipients.
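
    For concreteness, the snippet below generates the kind of stimulus described for Experiment 1: broadband noise whose amplitude is sinusoidally modulated, so that any pitch information is carried only by the temporal envelope. The sample rate, duration, modulation rate, and depth are illustrative values, not the study's parameters.

```python
# Sinusoidally amplitude-modulated broadband noise (illustrative parameters).
import numpy as np

fs = 44100          # sample rate (Hz)
dur = 0.5           # stimulus duration (s)
fm = 110.0          # modulation rate (Hz), the "pitch" carried by the envelope
depth = 1.0         # modulation depth

t = np.arange(int(fs * dur)) / fs
carrier = np.random.default_rng(0).standard_normal(t.size)   # broadband noise carrier
envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)           # sinusoidal amplitude modulation
stimulus = envelope * carrier
stimulus /= np.max(np.abs(stimulus))                          # normalise to avoid clipping
```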

    The musician effect: does it persist under degraded pitch conditions of cochlear implant simulations?

    Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as understanding speech in noise and perceiving music. In normal hearing (NH), musicians have been shown to perform better than non-musicians in auditory processing and perception, especially in challenging listening tasks. This "musician effect" has been attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as they would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The tasks, in increasing order of dependence on pitch perception, were: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and only a small musician effect for word identification in one noise condition with the CI simulation. For emotion identification, there was a small musician effect for both unprocessed and CI-simulated stimuli; for MCI, the musician effect was large for both. Overall, the effect grew stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be rooted more in pitch perception than in a global advantage in cognitive processing (in which case musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly benefit emotion and music perception.
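
    Acoustic CI simulations of this kind are typically implemented as noise-band vocoders. The sketch below shows one common way to build such a simulation; the channel count, filter design, and cut-off frequencies are assumptions and do not reproduce the study's exact processing.

```python
# Minimal noise-band vocoder sketch (illustrative parameters, not the study's).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Replace each band's fine structure with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced channel edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                       # analysis band
        env = np.abs(hilbert(band))                      # temporal envelope of the band
        carrier = sosfiltfilt(sos, rng.standard_normal(x.size))  # noise carrier in the same band
        out += env * carrier                             # keep envelope, discard fine structure
    return out / np.max(np.abs(out))                     # normalise

fs = 44100
x = np.random.default_rng(1).standard_normal(fs)         # 1 s stand-in input signal
y = noise_vocode(x, fs)
```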

    Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has previously been shown to be better than in non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech understanding in noise were measured in young adolescent normal-hearing musicians and non-musicians listening to unprocessed or degraded signals. Unlike in adults, there was no musician effect for vocal emotion identification or for speech in noise. Melodic contour identification with degraded signals was significantly better in musicians, suggesting potential benefits from music training for young cochlear-implant users, who experience similar spectro-temporal signal degradation.
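
    As a toy illustration of melodic-contour stimuli of the kind used in such tasks, the snippet below builds short pure-tone sequences that trace rising, flat, or falling contours. The note count, semitone spacing, and durations are arbitrary illustrative choices, not the study's materials.

```python
# Toy melodic-contour stimulus generator (illustrative, not the study's materials).
import numpy as np

def contour_stimulus(pattern, base_hz=220.0, semitone_step=2, note_dur=0.25, fs=44100):
    steps = {"rising": [0, 1, 2, 3, 4],
             "flat": [0, 0, 0, 0, 0],
             "falling": [4, 3, 2, 1, 0]}[pattern]
    t = np.arange(int(fs * note_dur)) / fs
    # Each note is a pure tone offset from the base frequency by a number of semitones.
    notes = [np.sin(2 * np.pi * base_hz * 2 ** (s * semitone_step / 12) * t) for s in steps]
    return np.concatenate(notes)

stimulus = contour_stimulus("rising")
```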

    Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees

    Objectives: Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of the variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise performance. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. Design: Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that requires detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. Results: No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of the variance in CI users' speech-in-noise performance beyond that explained by spectral and temporal resolution. Conclusion: Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
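
    A minimal sketch of the regression analysis described in the Design and Results sections, run on synthetic data: sentence-in-noise scores are predicted from spectral resolution, temporal resolution, and figure-ground performance, with variance inflation factors as a simple collinearity check. Variable names and values are assumptions, not the study's measurements.

```python
# Illustrative multiple linear regression with a collinearity check, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 47  # matches the reported sample size
df = pd.DataFrame({
    "spectral": rng.normal(size=n),        # spectral resolution measure (synthetic)
    "temporal": rng.normal(size=n),        # temporal resolution measure (synthetic)
    "figure_ground": rng.normal(size=n),   # stochastic figure-ground performance (synthetic)
})
df["sentence_in_noise"] = (0.4 * df["spectral"] + 0.3 * df["temporal"]
                           + 0.3 * df["figure_ground"] + rng.normal(scale=0.5, size=n))

X = sm.add_constant(df[["spectral", "temporal", "figure_ground"]])
# Variance inflation factors as a simple check for collinearity among predictors.
vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print("VIF:", np.round(vif, 2))

model = sm.OLS(df["sentence_in_noise"], X).fit()
print(model.summary())
```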