
    Temporal fine structure processing, pitch and speech perception in cochlear implant recipients

    Cochlear implant (CI) recipients usually complain of poor speech understanding in the presence of noise. Indeed, they generally show ceiling effects for sentences presented in quiet, but their scores drop drastically when tested against competing noise. One important contributor to speech perception, especially when listening in a fluctuating background, is Temporal Fine Structure (TFS) processing. TFS cues are dominant in conveying Low Frequency (LF) signals, in particular the Fundamental Frequency (F0), which is crucial for linguistic and musical perception. A§E Harmonic Intonation (HI) and Disharmonic Intonation (DI) are tests of pitch perception in the LF domain, and their outcomes are believed to depend on the availability of TFS cues. Previous findings indicated that the DI test provided the more differential LF pitch perception outcomes, reflecting the phase-locking and TFS processing capacities of the ear, whereas the HI test also provided information on its place coding capacity. Previous HI/DI studies were conducted mainly in adult populations, showing abnormal pitch perception outcomes in CI recipients; data were limited or absent for paediatric populations and for HI/DI outcomes in relation to speech perception in noise.

    One primary objective of this thesis was to investigate LF pitch perception skills in a group of paediatric CI recipients in comparison with normal hearing (NH) children. Another was to introduce a new assessment tool, the Italian STARR test, which measures speech perception with a roving-level adaptive method in which the presentation level of both the speech and the noise signals varies across sentences. The STARR test thereby aims to better represent real-world listening conditions, where background noise is usually present and speech intensity varies with the speaker's vocal capacity and distance. Italian STARR outcomes in NH adults were studied to produce normative data and to evaluate interlist variability and learning effects. Finally, LF pitch perception outcomes linked to the availability of TFS were investigated in a group of adult CI recipients, including bimodal users, in relation to speech perception and in particular to Italian STARR outcomes.

    Although the majority of paediatric CI recipients showed abnormal A§E outcomes, their scores were considerably better than those of adult CI users. Age had a statistically significant effect on performance in both children and adults: younger children and older adults tended to perform more poorly. Similarly, adult CI recipients (even the better performers) showed abnormal STARR outcomes in comparison with NH subjects, and the group differences were statistically significant. The duration of profound deafness before implantation had a significant effect on STARR performance. The significant effect of CI thresholds, in turn, re-emphasized the sensitivity of the test to low-level speech, which CI users encounter very often in everyday life. Analysis revealed statistically significant correlations between HI/DI and STARR performance. Moreover, contralateral hearing aid users showed a significant bimodal benefit on both the HI/DI and the STARR tests. Overall, these findings confirm the usefulness of evaluating both LF pitch and speech perception in order to track changes in TFS sensitivity in CI recipients, including any that future technological advances might provide, over time and across different listening conditions, as well as to study individual differences.
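
    The roving-level adaptive procedure described above lends itself to a compact illustration. The Python sketch below shows the general shape of such a track; the function names, step size, roving range, and the simulated listener are all illustrative assumptions, not the published STARR specification.

        import random

        def present_sentence(level_db, snr_db):
            # Placeholder for stimulus playback and scoring; here a simulated
            # listener whose accuracy falls as the SNR decreases.
            return random.random() < min(1.0, max(0.0, (snr_db + 5.0) / 20.0))

        def run_roving_adaptive_track(n_sentences=30, start_snr_db=20.0,
                                      step_db=2.0, rove_range_db=(50.0, 70.0)):
            # 1-down/1-up rule: the SNR is made harder after a correct response
            # and easier after an error, so the track converges on a fixed
            # intelligibility level.
            snr_db = start_snr_db
            history = []
            for _ in range(n_sentences):
                # Rove the overall presentation level; speech and noise move
                # together, so the rove changes loudness but not the SNR.
                level_db = random.uniform(*rove_range_db)
                correct = present_sentence(level_db, snr_db)
                history.append((level_db, snr_db, correct))
                snr_db += -step_db if correct else step_db
            return history

    A speech reception threshold could then be estimated, for example, by averaging the SNRs at the later track reversals.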

    Going Up: exploring ways to improve bimodal auditory functioning

    Hearing loss has a substantial impact on general health and well-being. For individuals with severe to profound hearing loss, cochlear implants (CIs) have become a viable treatment option. Nowadays, many recipients of a unilateral CI have usable residual hearing in the non-implanted ear and wear a hearing aid (HA) on that side. The combination of a CI in one ear and a HA in the other is called bimodal hearing, and it aims to restore binaural hearing as far as possible. This thesis investigates new ways to improve bimodal auditory functioning.

    Audiovisual speech perception in cochlear implant patients

    Hearing with a cochlear implant (CI) is very different from a normal-hearing (NH) experience, as the CI can provide only limited auditory input. Nevertheless, the central auditory system is capable of learning to interpret such limited input, extracting meaningful information within a few months after implant switch-on. The capacity of the auditory cortex to adapt to new auditory stimuli is an example of intra-modal plasticity: changes within a sensory cortical region as a result of altered statistics of the respective sensory input. However, hearing deprivation before implantation and the restoration of hearing after implantation can also induce cross-modal plasticity: changes within a sensory cortical region as a result of altered statistics of a different sensory input. A preserved cortical region can thereby support a deprived one, as in CI users, who have been shown to exhibit cross-modal visual-cortex activation for purely auditory stimuli. Before implantation, during the period of hearing deprivation, CI users typically rely on additional visual cues such as lip movements for understanding speech. It has therefore been suggested that CI users show a pronounced binding of the auditory and visual systems, which may allow them to integrate auditory and visual speech information more efficiently. The projects included in this thesis investigate auditory, and particularly audiovisual, speech processing in CI users. Four event-related potential (ERP) studies approach the matter from different perspectives, each with a distinct focus. The first project investigates how audiovisually presented syllables are processed by CI users with bilateral hearing loss compared to NH controls. Previous ERP studies employing non-linguistic stimuli, and studies using other neuroimaging techniques, found distinct audiovisual interactions in CI users. However, the precise time course of cross-modal visual-cortex recruitment and of enhanced audiovisual interaction for speech-related stimuli was unknown. Our ERP study fills this gap, presenting differences between CI users and NH controls in both the time course of audiovisual interactions and the cortical source configurations. The second study focuses on auditory processing in single-sided deaf (SSD) CI users. SSD CI patients experience a maximally asymmetric hearing condition: a CI on one ear and a contralateral NH ear. Despite the intact ear, several behavioural studies have demonstrated a variety of benefits from restoring binaural hearing, but only a few ERP studies have investigated auditory processing in SSD CI users. Our study investigates whether the side of implantation affects auditory processing and whether auditory processing via the NH ear of SSD CI users works similarly to that in NH controls. Given the distinct hearing conditions of SSD CI users, the question arises whether there are quantifiable differences between CI users with unilateral and bilateral hearing loss. In general, ERP studies on SSD CI users are scarce, there is no study on audiovisual processing in particular, and there are no reports on the lip-reading abilities of SSD CI users. To this end, the third project extends the first study by including SSD CI users as a third experimental group.
    The study discusses both differences and similarities between CI users with bilateral hearing loss, CI users with unilateral hearing loss, and NH controls, and provides, for the first time, insights into audiovisual interactions in SSD CI users. The fourth project investigates the influence of background noise on audiovisual interactions in CI users and whether a noise-reduction algorithm can modulate these interactions. In environments with competing background noise, listeners generally rely more strongly on visual cues for understanding speech, and such situations are particularly difficult for CI users. As shown in previous behavioural auditory studies, the recently introduced noise-reduction algorithm "ForwardFocus" can be a useful aid in such cases. However, whether the algorithm is also beneficial in audiovisual conditions, and whether its use has a measurable effect on cortical processing, had not yet been investigated. In this ERP study, we address these questions with an auditory and audiovisual syllable discrimination task. Taken together, the projects in this thesis contribute to a better understanding of auditory, and especially audiovisual, speech processing in CI users, revealing distinct processing strategies employed to overcome the limited input provided by a CI. The results have clinical implications: they suggest that clinical hearing assessments, which are currently purely auditory, should be extended to audiovisual assessments, and that rehabilitation including audiovisual training methods may help all CI user groups achieve the most effective implantation outcome quickly.
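
    ERP studies of this kind commonly quantify audiovisual interactions with an additive model, comparing the response to audiovisual stimulation against the sum of the unimodal auditory and visual responses. The abstract does not state the exact analysis used in the thesis, so the following Python sketch is only an illustration of that general approach; the array layout is an assumption.

        import numpy as np

        def additive_model_residual(epochs_av, epochs_a, epochs_v):
            # epochs_*: arrays of shape (n_trials, n_channels, n_samples)
            # holding baseline-corrected EEG epochs for audiovisual (AV),
            # auditory-only (A), and visual-only (V) trials.
            erp_av = epochs_av.mean(axis=0)
            erp_a = epochs_a.mean(axis=0)
            erp_v = epochs_v.mean(axis=0)
            # Under a purely additive (non-interacting) model, AV = A + V;
            # any systematic residual indicates an audiovisual interaction.
            return erp_av - (erp_a + erp_v)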

    The benefits of acoustic input to combined electric and contralateral acoustic hearing

    With the extension of cochlear implant candidacy, more and more cochlear-implant listeners fitted with a traditional long electrode array or a partial-insertion electrode array have residual acoustic hearing in the nonimplanted ear or in both ears, and they have been shown to receive significant speech-perception benefits from the low-frequency acoustic information that this residual hearing provides. The aim of Experiment 1 was to assess the minimum amount of low-frequency acoustic information required to achieve speech-perception benefits, both in quiet and in noise, from combined electric and contralateral acoustic stimulation (EAS). Speech-recognition performance for consonant-nucleus-consonant (CNC) words in quiet and AzBio sentences in competing babble noise at +10 dB SNR was evaluated in nine cochlear-implant subjects with residual acoustic hearing in the nonimplanted ear under three listening conditions: acoustic stimulation alone, electric stimulation alone, and combined contralateral EAS. The results showed that adding low-frequency acoustic information to electric stimulation led to an overall improvement in speech-recognition performance for both words in quiet and sentences in noise. This improvement was observed even when the acoustic information was limited to frequencies below 125 Hz, suggesting that the benefits were primarily due to the voice-pitch information provided by residual acoustic hearing. A further improvement was also observed for sentences in noise, suggesting that part of the improvement was likely due to an improved spectral representation of the first formant. The aims of Experiments 2 and 3 were to investigate the psychophysical mechanisms underlying the contribution of acoustic input to electric hearing. Temporal Modulation Transfer Functions (TMTFs) and Spectral Modulation Transfer Functions (SMTFs) were measured in three stimulation conditions: acoustic stimulation alone, electric stimulation alone, and combined contralateral EAS. The results showed that the temporal resolution of acoustic hearing was as good as that of electric hearing and that its spectral resolution was better, suggesting that the speech-perception benefits were attributable to the normal temporal resolution and the better spectral resolution of residual acoustic hearing. This dissertation research provides important information about the benefits of adding low-frequency acoustic input to electric hearing in cochlear-implant listeners with some residual hearing. The overall results reinforce the importance of preserving residual acoustic hearing in cochlear-implant listeners.
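
    A TMTF of the kind measured in Experiments 2 and 3 is typically obtained with sinusoidally amplitude-modulated noise, finding at each modulation rate the smallest modulation depth that can be distinguished from unmodulated noise. The Python sketch below generates such a stimulus; the sampling rate, duration, and normalization are illustrative assumptions, not the dissertation's actual parameters.

        import numpy as np

        def am_noise(mod_rate_hz, mod_depth, dur_s=0.5, fs=44100, seed=0):
            # Gaussian noise carrier with a sinusoidal amplitude envelope;
            # mod_depth is in [0, 1], where 0 gives the unmodulated reference.
            rng = np.random.default_rng(seed)
            t = np.arange(int(dur_s * fs)) / fs
            carrier = rng.standard_normal(t.size)
            envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
            stim = carrier * envelope
            # Equalize RMS so modulated and unmodulated intervals differ in
            # envelope only, not in overall level.
            return stim / np.sqrt(np.mean(stim ** 2))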

    Keeping track of emotions: audiovisual integration for emotion recognition and compensation for sensory degradations captured by perceptual strategies

    The majority of emotional expressions are multimodal and dynamic in nature, so emotion recognition requires the integration of these multimodal signals. Sensory impairments likely affect emotion recognition, and although such impairments are common in older adults, their effect on emotion recognition is unknown. As more people reach old age, and the prevalence of sensory impairments rises accordingly, a comprehensive understanding of audiovisual integration, especially in older individuals, is urgently needed. My thesis sought to create a basic understanding of audiovisual integration for emotion recognition and to study how audiovisual interactions change with simulated sensory impairments. A secondary aim was to understand how age affects these outcomes. To address these aims systematically, I examined how well observers recognize emotions presented via videos, and how emotion recognition accuracy and perceptual strategies, assessed via eye tracking, vary with the availability and reliability of the visual and auditory information. The research presented in my thesis shows that audiovisual integration and compensation abilities remain intact with age, despite a general decline in recognition accuracy. Compensation for degraded audio is possible by relying more on visual signals, but not vice versa. Older observers adapt their perceptual strategies in a different, perhaps less efficient, manner than younger observers. Importantly, I demonstrate that additional measurements besides recognition accuracy are crucial for understanding audiovisual integration mechanisms. Measurements such as eye tracking make it possible to examine whether the reliance on visual and auditory signals changes with age and with (simulated) sensory impairments, even in the absence of a change in accuracy.

    Spectral resolution and speech understanding in children and young adults with bimodal devices

    Tests of spectral modulation detection and speech understanding were administered to children and young adults with hearing loss who use bimodal devices (a cochlear implant on one ear and a hearing aid on the non-implanted ear). Spectral modulation detection performance increased with participant age, and better speech recognition scores were associated with better audibility, as indexed by the Speech Intelligibility Index (SII) or pure-tone average (PTA).
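
    Spectral modulation detection is usually tested with broadband noise whose log-magnitude spectrum ripples sinusoidally across log frequency, the listener's task being to distinguish the rippled noise from a flat one. The Python sketch below generates such a stimulus; the frequency range, duration, and other parameters are assumptions for illustration, not the study's actual test settings.

        import numpy as np

        def ripple_noise(density_cyc_per_oct, depth_db, f_lo=100.0,
                         f_hi=8000.0, dur_s=0.5, fs=44100, phase=0.0, seed=0):
            # Shape the spectrum of Gaussian noise with a sinusoidal ripple
            # (in dB) across log frequency; depth_db = 0 yields the flat
            # reference noise used in the detection task.
            rng = np.random.default_rng(seed)
            n = int(dur_s * fs)
            spectrum = np.fft.rfft(rng.standard_normal(n))
            freqs = np.fft.rfftfreq(n, 1.0 / fs)
            band = (freqs >= f_lo) & (freqs <= f_hi)
            octaves = np.log2(freqs[band] / f_lo)
            gain_db = (depth_db / 2.0) * np.sin(
                2.0 * np.pi * density_cyc_per_oct * octaves + phase)
            shaped = np.zeros_like(spectrum)
            shaped[band] = spectrum[band] * 10.0 ** (gain_db / 20.0)
            return np.fft.irfft(shaped, n)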

    Predicting Speech Recognition using the Speech Intelligibility Index (SII) for Cochlear Implant Users and Listeners with Normal Hearing

    Although the AzBio test is well validated, has effective standardization data available, and is highly recommended for Cochlear Implant (CI) evaluation, no attempt has been made to derive a Frequency Importance Function (FIF) for its stimuli. In the first phase of this dissertation, we derived FIFs for the AzBio sentence lists using listeners with normal hearing, applying the traditional procedures described by Studebaker and Sherbecoe (1991). Fifteen participants with normal hearing listened to a large number of AzBio sentences that were high- and low-pass filtered and presented in speech-spectrum-shaped noise at various signal-to-noise ratios. Frequency weights for the AzBio sentences were greatest in the 1.5 to 2 kHz region, as is the case for other speech materials. A cross-procedure comparison was conducted between the traditional procedure (Studebaker and Sherbecoe, 1991) and a nonlinear optimization procedure (Kates, 2013). Subsequent analyses related speech recognition scores for the AzBio sentences to the Speech Intelligibility Index (SII). Our findings provide empirically derived FIFs for the AzBio test that can be used in future studies, and they are anticipated to improve the accuracy of SII predictions for CI patients. In the second study, the SII was calculated for CI recipients to investigate whether it is an effective tool for predicting speech perception performance in a CI population. Fifteen adult CI users participated. The FIFs obtained in the first study were used to compute the SII for these CI listeners, and the obtained SIIs were compared with SIIs predicted from a transfer-function curve derived in the first study. Owing to the considerably poorer hearing and the large individual variability in performance in the CI population, the SII failed to predict speech perception performance for this group. Other factors associated with speech perception performance were also examined using multiple regression analysis; gap detection thresholds and duration of deafness were found to be significant predictors. These predictors and the SIIs are discussed in relation to speech perception performance in CI users.
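
    At its core, the SII is an importance-weighted sum of per-band audibility, with the FIF supplying the weights. The Python sketch below shows this core computation in a simplified form (the full ANSI S3.5 standard adds corrections such as masking and level distortion); the three-band example values are placeholders, not the AzBio FIFs derived in this dissertation.

        import numpy as np

        def sii(speech_db, noise_db, importance):
            # speech_db, noise_db: per-band speech and noise levels;
            # importance: FIF weights for the same bands, summing to 1.
            snr = np.asarray(speech_db, float) - np.asarray(noise_db, float)
            # Band audibility maps SNR linearly from -15 dB (0) to +15 dB (1).
            audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)
            return float(np.sum(np.asarray(importance, float) * audibility))

        # Hypothetical three-band example:
        # sii([50, 55, 45], [40, 50, 48], [0.3, 0.5, 0.2]) -> ~0.66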