    Adaptive benefit of cross-modal plasticity following cochlear implantation in deaf adults

    It has been suggested that visual language is maladaptive for hearing restoration with a cochlear implant (CI) due to cross-modal recruitment of auditory brain regions. Rehabilitative guidelines therefore discourage the use of visual language. However, neuroscientific understanding of cross-modal plasticity following cochlear implantation has been restricted due to incompatibility between established neuroimaging techniques and the surgically implanted electronic and magnetic components of the CI. As a solution to this problem, here we used functional near-infrared spectroscopy (fNIRS), a noninvasive optical neuroimaging method that is fully compatible with a CI and safe for repeated testing. The aim of this study was to examine cross-modal activation of auditory brain regions by visual speech from before to after implantation and its relation to CI success. Using fNIRS, we examined activation of superior temporal cortex to visual speech in the same profoundly deaf adults both before and 6 mo after implantation. Patients’ ability to understand auditory speech with their CI was also measured following 6 mo of CI use. Contrary to existing theory, the results demonstrate that increased cross-modal activation of auditory brain regions by visual speech from before to after implantation is associated with better speech understanding with a CI. Furthermore, activation of auditory cortex by visual and auditory speech developed in synchrony after implantation. Together these findings suggest that cross-modal plasticity by visual speech does not exert previously assumed maladaptive effects on CI success, but instead provides adaptive benefits to the restoration of hearing after implantation through an audiovisual mechanism.
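    As a hedged illustration of the study's central analysis (not the authors' actual pipeline), the sketch below correlates each patient's pre-to-post-implantation change in superior temporal activation to visual speech with their 6-month speech score; all variable names and data are simulated placeholders.

        # Illustrative only: simulated data standing in for fNIRS measures and
        # speech scores; a positive r mimics the adaptive relation reported above.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        n = 20
        pre = rng.normal(0.0, 1.0, n)          # STC response to visual speech, pre-implant
        post = pre + rng.normal(0.5, 0.5, n)   # same measure 6 mo after implantation
        change = post - pre                    # cross-modal plasticity index
        speech = 50 + 20 * change + rng.normal(0, 10, n)  # simulated % words correct with CI

        r, p = pearsonr(change, speech)
        print(f"r = {r:.2f}, p = {p:.3f}")     # positive r: more plasticity, better speech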

    Resonant Neural Dynamics of Speech Perception

    What is the neural representation of a speech code as it evolves in time? How do listeners integrate temporally distributed phonemic information across hundreds of milliseconds, even backwards in time, into coherent representations of syllables and words? What sorts of brain mechanisms encode the correct temporal order, despite such backwards effects, during speech perception? How does the brain extract rate-invariant properties of variable-rate speech? This article describes an emerging neural model that suggests answers to these questions, while quantitatively simulating challenging data about audition, speech and word recognition. This model includes bottom-up filtering, horizontal competitive, and top-down attentional interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech and word recognition code is suggested to be a resonant wave of activation across such a network, and a percept of silence is proposed to be a temporal discontinuity in the rate with which such a resonant wave evolves. Properties of these resonant waves can be traced to the brain mechanisms whereby auditory, speech, and language representations are learned in a stable way through time. Because resonances are proposed to control stable learning, the model is called an Adaptive Resonance Theory, or ART, model.
    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-01-1-0624)
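    The item-and-list-chunk circuit described above can be caricatured in a few lines of code. This is a toy sketch of ART-style dynamics under assumed shunting equations and hand-picked weights, not the model's published implementation:

        # Toy ART-style resonance: items in working memory (x) excite list
        # chunks (y); chunks compete and feed activation back to their items.
        import numpy as np

        W = np.array([[1.0, 1.0, 0.0, 0.0],   # chunk 0 groups items 0 and 1
                      [0.0, 0.0, 1.0, 1.0]])  # chunk 1 groups items 2 and 3
        x = np.array([0.8, 0.7, 0.1, 0.0])    # stored phonetic item activities
        y = np.zeros(2)                       # list-chunk activities

        for _ in range(50):
            bottom_up = W @ x
            y += 0.1 * ((1 - y) * bottom_up - y * (1 + y.sum() - y))  # chunk competition
            top_down = W.T @ y
            x += 0.1 * ((1 - x) * top_down - 0.1 * x)                 # feedback boosts matched items

        print(np.round(y, 2))  # chunk 0 wins and resonates with items 0 and 1

    The mutual excitation between the winning chunk and its stored items is the "resonant wave" the abstract refers to; the mismatched items, receiving no top-down support, simply decay.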

    The Influence of Social Priming on Speech Perception

    Speech perception relies on auditory, visual, and motor cues and has been historically difficult to model, partly because of this multimodality. One current model is the Fuzzy Logic Model of Perception (FLMP), which suggests that if one of these modes of speech input is altered, the perception of that speech signal should be altered in a quantifiable and predictable way. The current study uses social priming to activate the schema of blindness in order to reduce reliance on the visual cues of syllables that have a visually identical counterpart. According to the FLMP, lowering reliance on visual cues should also reduce visual confusion, allowing the visually confusable syllables to be identified more quickly. Although no main effect of priming was found, some individual syllables showed the expected facilitation while others showed inhibition. These results suggest that social priming does affect speech perception, despite the opposing effects across syllables. Further research should use a similar kind of social priming to determine which syllables have more acoustically salient features and which have more visually salient features.
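    For context, the FLMP's standard two-alternative decision rule evaluates the auditory and visual sources independently, multiplies their degrees of support, and normalizes (the relative goodness rule). A minimal sketch with illustrative support values:

        def flmp_p(a: float, v: float) -> float:
            """P(alternative 1) from auditory support a and visual support v, both in [0, 1]."""
            return (a * v) / (a * v + (1 - a) * (1 - v))

        print(flmp_p(0.9, 0.5))  # 0.90: an uninformative visual cue leaves audition in charge
        print(flmp_p(0.9, 0.1))  # 0.50: a conflicting visual cue pulls the percept away

    On this account, the priming manipulation described above would correspond to the visual support v flattening toward the uninformative value 0.5.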

    Cortical cross-modal plasticity following deafness measured using functional near-infrared spectroscopy

    Evidence from functional neuroimaging studies suggests that the auditory cortex can become more responsive to visual and somatosensory stimulation following deafness, and that this occurs predominantly in the right hemisphere. Extensive cross-modal plasticity in prospective cochlear implant recipients is correlated with poor speech outcomes following implantation, highlighting the potential impact of central auditory plasticity on subsequent aural rehabilitation. Conversely, the effects of hearing restoration with a cochlear implant on cortical plasticity are less well understood, since the use of most neuroimaging techniques in CI recipients is either unsafe or problematic due to the electromagnetic artefacts generated by CI stimulation. Additionally, techniques such as functional magnetic resonance imaging (fMRI) are confounded by acoustic noise produced by the scanner, which will be perceived more by hearing than by deaf individuals. Consequently, it is conceivable that auditory responses to the acoustic noise of the MR scanner may mask auditory cortical responses to non-auditory stimulation and obscure inter-group comparisons. Uniquely, functional near-infrared spectroscopy (fNIRS) is a silent neuroimaging technique that is non-invasive and completely unaffected by the presence of a CI. Here, we used fNIRS to study temporal-lobe responses to auditory, visual and somatosensory stimuli in thirty profoundly-deaf participants and thirty normally-hearing controls. Compared with silence, acoustic noise stimuli elicited a significant group fNIRS response in the temporal region of normally-hearing individuals, which was not seen in profoundly-deaf participants. Visual motion elicited a larger group response within the right temporal lobe of profoundly-deaf participants, compared with normally-hearing controls. However, bilateral temporal-lobe fNIRS activation to somatosensory stimulation was comparable in both groups. Using fNIRS, these results confirm that auditory deprivation is associated with cross-modal plasticity of visual inputs to auditory cortex. Although we found no evidence for plasticity of somatosensory inputs, it is possible that our recordings included activation of somatosensory cortex that masked group differences in auditory cortical responses, given the limited spatial resolution of fNIRS.
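    A hedged sketch of the kind of group contrast reported here, comparing simulated temporal-lobe response amplitudes (e.g., GLM beta values for oxygenated haemoglobin, an assumption on our part) between the two groups:

        # Simulated group contrast: 30 deaf vs 30 hearing response amplitudes
        # for one condition (visual motion, right temporal channels).
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(1)
        deaf = rng.normal(0.6, 0.3, 30)      # simulated beta values, deaf group
        hearing = rng.normal(0.3, 0.3, 30)   # simulated beta values, hearing controls

        t, p = ttest_ind(deaf, hearing)
        print(f"t = {t:.2f}, p = {p:.4f}")   # a larger deaf-group response would mirror the finding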

    Verbal Learning and Memory After Cochlear Implantation in Postlingually Deaf Adults: Some New Findings with the CVLT-II

    OBJECTIVES: Despite the importance of verbal learning and memory in speech and language processing, this domain of cognitive functioning has been virtually ignored in clinical studies of hearing loss and cochlear implants in both adults and children. In this article, we report the results of two studies that used a newly developed visually based version of the California Verbal Learning Test-Second Edition (CVLT-II), a well-known normed neuropsychological measure of verbal learning and memory. DESIGN: The first study established the validity and feasibility of a computer-controlled visual version of the CVLT-II, which eliminates the effects of audibility of spoken stimuli, in groups of young normal-hearing and older normal-hearing (ONH) adults. A second study was then carried out using the visual CVLT-II format with a group of older postlingually deaf experienced cochlear implant (ECI) users (N = 25) and a group of ONH controls (N = 25) who were matched to ECI users for age, socioeconomic status, and nonverbal IQ. In addition to the visual CVLT-II, subjects provided data on demographics, hearing history, nonverbal IQ, reading fluency, vocabulary, and short-term memory span for visually presented digits. ECI participants were also tested for speech recognition in quiet. RESULTS: The ECI and ONH groups did not differ on most measures of verbal learning and memory obtained with the visual CVLT-II, but deficits were identified in ECI participants related to recency recall, the buildup of proactive interference, and retrieval-induced forgetting. Within the ECI group, nonverbal fluid IQ, reading fluency, and resistance to the buildup of proactive interference on the CVLT-II consistently predicted better speech recognition outcomes. CONCLUSIONS: Results from this study suggest that several foundational neurocognitive abilities are related to core speech perception outcomes after implantation in older adults. Implications of these findings for explaining individual differences and variability, and for predicting speech recognition outcomes after implantation, are discussed.
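    The predictive analysis described in the conclusions can be illustrated with an ordinary least-squares sketch; the three predictors follow the abstract, but the data and coefficients below are simulated placeholders:

        # Simulated OLS sketch: speech recognition regressed on the three
        # predictors named above (fluid IQ, reading fluency, PI resistance).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 25
        X = rng.normal(size=(n, 3))                        # standardized predictors
        speech = 60 + X @ np.array([5.0, 4.0, 3.0]) + rng.normal(0, 5, n)

        X1 = np.column_stack([np.ones(n), X])              # add an intercept column
        coef, *_ = np.linalg.lstsq(X1, speech, rcond=None)
        resid = speech - X1 @ coef
        r2 = 1 - resid.var() / speech.var()
        print(np.round(coef, 2), round(float(r2), 2))      # fitted weights and variance explained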

    Auditory neuroscience: Development, transduction and integration

    Hearing underlies our ability to locate sound sources in the environment, our appreciation of music, and our ability to communicate. Participants in the National Academy of Sciences colloquium on Auditory Neuroscience: Development, Transduction, and Integration presented research results bearing on four key issues in auditory research. How does the complex inner ear develop? How does the cochlea transduce sounds into electrical signals? How does the brain's ability to compute the location of a sound source develop? How does the forebrain analyze complex sounds, particularly species-specific communications? This article provides an introduction to the papers stemming from the meeting.

    Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants

    Objective: The present study investigated the development of audiovisual comprehension skills in prelingually deaf children who received cochlear implants. Design: We analyzed results obtained with the Common Phrases (Robbins et al., 1995) test of sentence comprehension from 80 prelingually deaf children with cochlear implants who were enrolled in a longitudinal study, from pre-implantation to 5 years after implantation. Results: The results revealed that prelingually deaf children with cochlear implants performed better under audiovisual (AV) presentation compared with auditory-alone (A-alone) or visual-alone (V-alone) conditions. AV sentence comprehension skills were found to be strongly correlated with several clinical outcome measures of speech perception, speech intelligibility, and language. Finally, pre-implantation V-alone performance on the Common Phrases test was strongly correlated with 3-year post-implantation performance on clinical outcome measures of speech perception, speech intelligibility, and language skills. Conclusions: The results suggest that lipreading skills and AV speech perception reflect a common source of variance associated with the development of phonological processing skills that is shared among a wide range of speech and language outcome measures.
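    A minimal sketch of the condition comparison at the heart of this design, using simulated accuracy scores (the superadditive AV gain is built into the fake data purely for illustration):

        # Simulated % correct for 80 children under each presentation condition.
        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(3)
        n = 80
        a_alone = rng.uniform(20, 60, n)
        v_alone = rng.uniform(10, 50, n)
        av = np.minimum(np.maximum(a_alone, v_alone) + rng.uniform(5, 20, n), 100)

        print(f"AV vs A-alone: t = {ttest_rel(av, a_alone).statistic:.1f}")
        print(f"AV vs V-alone: t = {ttest_rel(av, v_alone).statistic:.1f}")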

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can hereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive learning help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances.
    Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)
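    The habituative transmitter gate said to end a resonance has a standard form in this modeling tradition (assumed here rather than quoted from the paper): available transmitter z recovers toward 1 while being depleted by the resonant signal s it gates, dz/dt = eps*(1 - z) - lam*s*z, so the gated signal s*z first transmits strongly and then collapses. A numerical sketch:

        # Euler simulation of a habituating transmitter gate; parameter
        # values are illustrative, not fitted.
        eps, lam, s, dt = 1.0, 5.0, 1.0, 0.01   # recovery rate, depletion rate, signal, step
        z = 1.0                                  # fraction of transmitter available
        for t in range(300):
            z += dt * (eps * (1 - z) - lam * s * z)
            if t % 100 == 0:
                print(t, round(s * z, 3))        # gated signal habituates toward eps/(eps + lam*s)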

    Aging, Cognitive Decline and Hearing Loss: Effects of Auditory Rehabilitation and Training with Hearing Aids and Cochlear Implants on Cognitive Function and Depression among Older Adults

    A growing interest in the cognitive effects associated with speech and hearing processes is spreading throughout the scientific community, guided largely by evidence that central and peripheral hearing loss is associated with cognitive decline. For the present research, 125 participants older than 65 years of age (105 with hearing impairment and 20 with normal hearing) were enrolled, divided into 6 groups according to their degree of hearing loss, and assessed to determine the effects of the treatment applied. Patients in our research program routinely undergo an extensive audiological and cognitive evaluation protocol, providing results from the Digit Span test, Stroop color-word test, Montreal Cognitive Assessment and Geriatric Depression Scale, before and after rehabilitation. Data analysis was performed for a cross-sectional and longitudinal study of the outcomes for the different treatment groups. Each group demonstrated improvement after auditory rehabilitation or training on short- and long-term memory tasks, level of depression and cognitive status scores. Auditory rehabilitation with cochlear implants or hearing aids is effective even among older adults (median age of 74 years) with different degrees of hearing loss, and yields improvements in social isolation, depression and cognitive performance.
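    As a hedged sketch of the pre/post comparison described (not the study's actual analysis), a paired test on simulated Montreal Cognitive Assessment scores for one treatment group:

        # Paired pre/post comparison on simulated MoCA scores for one group.
        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(4)
        pre = rng.normal(23, 3, 21)            # simulated pre-rehabilitation MoCA
        post = pre + rng.normal(1.5, 1.0, 21)  # simulated post-rehabilitation improvement
        t, p = ttest_rel(post, pre)
        print(f"t = {t:.2f}, p = {p:.4f}")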