
    Dynamic networks differentiate the language ability of children with cochlear implants

    Background: Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods: High-density electroencephalography (EEG) data were obtained under a resting-state condition from 75 children: 50 with CIs, having either good (HL) or poor (LL) language skills, and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and estimated their effective connectivity with time-frequency causality estimation based on temporal partial directed coherence (TPDC) in the two CI groups, compared to a cohort of age- and gender-matched NH children. Findings: Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta and gamma) for the CI groups compared to the NH children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands predicted the language and reading scores with high accuracy. Interpretation: The increased coherence in the CI groups suggests that oscillatory activity in some brain areas becomes more strongly coupled than in the NH group.
Moreover, the different sources, their connectivity patterns, and their association with language and reading skills in both groups suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences between the two groups of CI children may reflect potential biomarkers for predicting outcome success in children with CIs.
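The prediction step described in the abstract can be sketched as follows. This is a minimal illustration of regressing language scores on source/connectivity features with an SVM, not the authors' actual pipeline: the feature matrix, score vector, and model settings below are all simulated placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical data: 50 CI children x 30 features standing in for source
# coherence and TPDC connectivity values pooled over the alpha, beta and
# gamma bands. Real features would come from the DICS/TPDC analyses.
n_children, n_features = 50, 30
X = rng.normal(size=(n_children, n_features))

# Simulated language scores that partly depend on the features.
true_weights = rng.normal(size=n_features)
y = X @ true_weights + rng.normal(scale=0.5, size=n_children)

# Support vector regression with feature standardization.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))

# Cross-validated predictions of the language score for each child.
y_pred = cross_val_predict(model, X, y, cv=5)
r = np.corrcoef(y, y_pred)[0, 1]
print(f"cross-validated correlation: {r:.2f}")
```

Cross-validated prediction (rather than fitting and scoring on the same children) is what makes a claim of "predicting scores with high accuracy" meaningful at this sample size.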

    Intervention and Outcomes of Children in Different Types of Listening and Spoken Language Programs

    This study explores the impact of the type and dosage of listening and spoken language (LSL) services on speech and language outcomes in children with cochlear implants or hearing aids in two LSL programs. Identical demographic variables were collected across the two programs for use in the statistical analyses. Speech and language outcomes were examined at ages 3 and 5 using standardized test measures. At age 3, significant differences in LSL outcomes existed between programs for children using cochlear implants but not for children using binaural hearing aids. However, at age 5, outcomes were similar between the two LSL programs for children with hearing aids and cochlear implants. Total hours of LSL services did not predict LSL outcomes at age 5. However, early identification of hearing loss, early amplification, and early enrollment in an LSL program were highly influential factors affecting LSL outcomes at ages 3 and 5. Non-verbal IQ and maternal education levels also influenced LSL outcomes. Children with earlier access to hearing technology and LSL intervention may need fewer hours of LSL services to achieve age-appropriate LSL outcomes. Overall, both of these LSL programs supported age-appropriate speech and language outcomes by age 5.

    Optimizing The Benefit of Sound Processors Coupled to Personal FM Systems


    Audiovisual integration in children with cochlear implants revealed through EEG and fNIRS

    Sensory deprivation can offset the balance of auditory versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs; among them, 26 had relatively high language abilities (HL), comparable to those of NH children, while the other 24 had low language abilities (LL). In the EEG data, visual-evoked potentials were captured in occipital regions in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected differential processing of the two syllables, but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In the fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed under A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CIs. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition for good language and literacy.
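The evoked-potential measures above rest on a standard procedure: epoching the continuous EEG around stimulus onsets, averaging across trials, and subtracting the pre-stimulus baseline. The sketch below illustrates that procedure on simulated data; the function name, epoch window, and toy signal are illustrative assumptions, not the study's actual analysis code.

```python
import numpy as np

def evoked_response(eeg, onsets, fs, tmin=-0.1, tmax=0.4):
    """Average EEG epochs around stimulus onsets.

    eeg    : 2-D array, channels x time samples
    onsets : stimulus onset times in seconds
    fs     : sampling rate in Hz
    Returns the baseline-corrected evoked potential (channels x epoch samples).
    """
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets:
        s = int(t * fs)
        # Keep only epochs fully inside the recording.
        if s + i0 >= 0 and s + i1 <= eeg.shape[1]:
            epochs.append(eeg[:, s + i0 : s + i1])
    evoked = np.mean(epochs, axis=0)
    # Subtract the mean of the pre-stimulus interval, per channel.
    baseline = evoked[:, : -i0].mean(axis=1, keepdims=True)
    return evoked - baseline

# Toy example: 4 channels, 10 s of 500 Hz noise with a small deflection
# injected 100 ms after each of 15 stimulus onsets.
fs = 500
rng = np.random.default_rng(1)
eeg = rng.normal(size=(4, 10 * fs))
onsets = np.arange(0.5, 9.5, 0.6)
for t in onsets:
    s = int((t + 0.1) * fs)
    eeg[:, s : s + 25] += 2.0

evoked = evoked_response(eeg, onsets, fs)
print(evoked.shape)  # channels x epoch samples
```

Averaging across trials suppresses activity not time-locked to the stimulus, which is why the injected deflection emerges clearly from noise of comparable amplitude.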