
    Imaging speech production using fMRI

    Human speech is a well-learned, sensorimotor, and ecological behavior ideal for the study of neural processes and brain-behavior relations. With the advent of modern neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), the potential for investigating neural mechanisms of speech motor control, speech motor disorders, and speech motor development has increased. However, a practical issue has limited the application of fMRI to spoken language production and other related behaviors (singing, swallowing): producing these behaviors during volume acquisition introduces motion-induced signal changes that confound the activation signals of interest. A number of approaches, ranging from signal processing to the use of silent or covert speech, have attempted to remove or prevent the effects of motion-induced artefact. However, these approaches are flawed for a variety of reasons. An alternative approach, which has only recently been applied to study single-word production, uses pauses in volume acquisition during the production of natural speech. Here we present some representative data illustrating the problems associated with motion artefacts and some qualitative results acquired from subjects producing short sentences and orofacial nonspeech movements in the scanner. Using pauses or silent intervals in volume acquisition and block designs, individual subjects show robust activation without motion-induced signal artefact. This approach is an efficient method for studying the neural basis of spoken language production and the effects of speech and language disorders using fMRI.
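
    A minimal Python sketch of the sparse-sampling idea described above (speech produced in silent gaps between volume acquisitions); the acquisition time, gap duration, and volume count are illustrative assumptions, not values reported in the study.

    # Sketch of a sparse-sampling (clustered acquisition) fMRI timeline.
    # All timing values are assumptions made for illustration.
    TA = 2.0          # seconds spent acquiring one volume (assumed)
    SILENT_GAP = 4.0  # silent interval in which the subject speaks (assumed)

    def build_timeline(n_volumes=20, ta=TA, gap=SILENT_GAP):
        """Return (acquisition_onset, speech_onset) pairs for each repetition.

        The volume is collected first; the speech prompt falls in the silent
        gap, so articulatory motion does not overlap with image readout.
        """
        events = []
        for i in range(n_volumes):
            tr_start = i * (ta + gap)
            events.append((tr_start, tr_start + ta))
        return events

    if __name__ == "__main__":
        for acq_t, speech_t in build_timeline(5):
            print(f"volume at {acq_t:5.1f} s, speech prompt at {speech_t:5.1f} s")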

    Reliability of single-subject neural activation patterns in speech production tasks

    Traditional group fMRI (functional magnetic resonance imaging) analyses are not designed to detect individual differences that may be crucial to better understanding speech disorders. Single-subject research could therefore provide a richer characterization of the neural substrates of speech production in development and disease. Before this line of research can be tackled, however, it is necessary to evaluate whether healthy individuals exhibit reproducible brain activation across multiple sessions during speech production tasks. In the present study, we evaluated the reliability and discriminability of cortical fMRI data from twenty neurotypical subjects who participated in two experiments involving reading aloud mono- or bisyllabic speech stimuli. Using traditional measures such as the Dice and intraclass correlation coefficients, we found that most individuals displayed moderate to high reliability, with exceptions likely due to increased head motion in the scanner. Further, this level of reliability for speech production was not directly correlated with reliable patterns in the underlying average blood oxygenation level dependent (BOLD) signal across the brain. Finally, we found that a novel machine-learning subject classifier could identify these individuals by their speech activation patterns with 97% accuracy from among a dataset of seventy-five subjects. These results suggest that single-subject speech research would yield valid results and that investigations into the reliability of speech activation in people with speech disorders are warranted.
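
    For readers unfamiliar with the Dice coefficient used above as a reliability measure, this Python sketch shows one common way to compute it for two thresholded activation maps; the arrays and threshold are hypothetical, not the study's data.

    import numpy as np

    def dice_coefficient(map_a, map_b, threshold=0.0):
        """Dice overlap between two activation maps binarized at `threshold`.

        Dice = 2 * |A and B| / (|A| + |B|); 0 means no overlap, 1 identical maps.
        """
        a = np.asarray(map_a) > threshold
        b = np.asarray(map_b) > threshold
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

    # Hypothetical example: two sessions' statistic maps for one subject.
    rng = np.random.default_rng(0)
    session1 = rng.normal(size=(10, 10, 10))
    session2 = session1 + rng.normal(scale=0.5, size=session1.shape)  # noisy repeat
    print(f"Dice overlap at an illustrative threshold: "
          f"{dice_coefficient(session1, session2, threshold=1.0):.2f}")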

    The neural correlates of speech motor sequence learning

    Speech is perhaps the most sophisticated example of a species-wide movement capability in the animal kingdom, requiring split-second sequencing of approximately 100 muscles in the respiratory, laryngeal, and oral movement systems. Despite the unique role speech plays in human interaction and the debilitating impact of its disruption, little is known about the neural mechanisms underlying speech motor learning. Here, we studied the behavioral and neural correlates of learning new speech motor sequences. Participants repeatedly produced novel, meaningless syllables comprising illegal consonant clusters (e.g., GVAZF) over 2 days of practice. Following practice, participants produced the sequences with fewer errors and shorter durations, indicative of motor learning. Using fMRI, we compared brain activity during production of the learned illegal sequences and novel illegal sequences. Greater activity was noted during production of novel sequences in brain regions linked to non-speech motor sequence learning, including the basal ganglia and pre-supplementary motor area (pre-SMA). Activity during novel sequence production was also greater in brain regions associated with learning and maintaining speech motor programs, including lateral premotor cortex, frontal operculum, and posterior superior temporal cortex. Measures of learning success correlated positively with activity in the left frontal operculum and with white matter integrity under the left posterior superior temporal sulcus. These findings indicate that speech motor sequence learning relies not only on brain areas involved in general motor sequence learning but also on those associated with feedback-based speech motor learning. Furthermore, learning success is modulated by the integrity of structural connectivity between these motor and sensory brain regions.

    Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

    This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations.
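
    The compensation simulations mentioned above rest on an auditory feedback control loop; the toy Python loop below is a heavily simplified sketch of that general idea, not the published model, and every gain and value in it is an assumption chosen only for illustration.

    # Toy auditory-feedback compensation loop. Not the published model;
    # all parameters are illustrative assumptions.
    TARGET_F1 = 700.0     # desired first-formant value in Hz (assumed)
    FEEDBACK_GAIN = 0.3   # fraction of the auditory error corrected per step (assumed)
    PERTURBATION = 50.0   # sustained shift imposed on the produced output in Hz (assumed)

    motor_command = TARGET_F1          # feedforward command starts at the target
    for step in range(10):
        produced = motor_command + PERTURBATION          # articulation plus perturbation
        auditory_error = TARGET_F1 - produced            # mismatch heard by the speaker
        motor_command += FEEDBACK_GAIN * auditory_error  # corrective update
        print(f"step {step:2d}: produced {produced:6.1f} Hz, error {auditory_error:6.1f} Hz")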

    Structural correlates of semantic and phonemic fluency ability in first and second languages

    Category and letter fluency tasks are commonly used clinically to investigate the semantic and phonological processes central to speech production, but the neural correlates of these processes are difficult to establish with functional neuroimaging because of the relatively unconstrained nature of the tasks. This study investigated whether differential performance on semantic (category) and phonemic (letter) fluency in neurologically normal participants was reflected in regional gray matter density. The participants were 59 highly proficient speakers of two languages. Our findings corroborate the importance of the left inferior temporal cortex in semantic relative to phonemic fluency and show this effect to be the same in a first language (L1) and a second language (L2). Additionally, we show that the pre-supplementary motor area (pre-SMA) and the head of the caudate bilaterally are associated with phonemic more than semantic fluency, and that this effect is stronger for L2 than L1 in the caudate nuclei. To further validate these structural results, we reanalyzed previously reported functional data and found that pre-SMA and left caudate activation was higher for phonemic than for semantic fluency. On the basis of our findings, we also predict that lesions to the pre-SMA and caudate nuclei may have a greater impact on phonemic than on semantic fluency, particularly in L2 speakers.

    Crossed Aphasia in a Patient with Anaplastic Astrocytoma of the Non-Dominant Hemisphere

    Aphasia describes a spectrum of speech impairments due to damage in the language centers of the brain. Insult to the inferior frontal gyrus of the dominant cerebral hemisphere results in Broca's aphasia, the inability to produce fluent speech. The left cerebral hemisphere has historically been considered the dominant side, a characteristic long presumed to be related to a person's handedness. However, recent studies utilizing fMRI have shown that right hemispheric dominance occurs more frequently than previously proposed, irrespective of a person's handedness. Here we present a case of a right-handed patient with Broca's aphasia caused by a right-sided brain tumor. This is significant not only because the occurrence of aphasia in right-handed individuals with right hemispheric brain damage (so-called crossed aphasia) is unusual, but also because such findings support a dissociation between hemispheric linguistic dominance and handedness.

    Auditory feedback control mechanisms do not contribute to cortical hyperactivity within the voice production network in adductor spasmodic dysphonia

    Adductor spasmodic dysphonia (ADSD), the most common form of spasmodic dysphonia, is a debilitating voice disorder characterized by hyperactivity and muscle spasms in the vocal folds during speech. Prior neuroimaging studies have noted excessive brain activity during speech in ADSD participants compared to controls. Speech involves an auditory feedback control mechanism that generates motor commands aimed at eliminating disparities between desired and actual auditory signals. Thus, excessive neural activity in ADSD during speech may reflect, at least in part, increased engagement of the auditory feedback control mechanism as it attempts to correct vocal production errors detected through audition. To test this possibility, functional magnetic resonance imaging was used to identify differences between ADSD participants and age-matched controls in (i) brain activity when producing speech under different auditory feedback conditions, and (ii) resting state functional connectivity within the cortical network responsible for vocalization. The ADSD group had significantly higher activity than the control group during speech (compared to a silent baseline task) in three left-hemisphere cortical regions: ventral Rolandic (sensorimotor) cortex, anterior planum temporale, and posterior superior temporal gyrus/planum temporale. This was true for speech with auditory feedback masked by noise as well as for speech with normal auditory feedback, indicating that the excess activity was not the result of auditory feedback control mechanisms attempting to correct for perceived voicing errors in ADSD. Furthermore, the ADSD group had significantly higher resting state functional connectivity between sensorimotor and auditory cortical regions within the left hemisphere as well as between the left and right hemispheres, consistent with the view that excessive motor activity frequently co-occurs with increased auditory cortical activity in individuals with ADSD.
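
    Resting state functional connectivity between two regions, as reported above, is commonly summarized as the Pearson correlation of their average BOLD time series; the Python sketch below illustrates that computation on simulated series (the region names are placeholders, not the study's ROI definitions).

    import numpy as np

    def roi_connectivity(ts_a, ts_b):
        """Pearson correlation between two ROI-averaged BOLD time series."""
        return float(np.corrcoef(ts_a, ts_b)[0, 1])

    # Simulated time series: 200 volumes from a "sensorimotor" ROI and an
    # "auditory" ROI that share a common fluctuation (purely illustrative).
    rng = np.random.default_rng(1)
    shared = rng.normal(size=200)
    sensorimotor = shared + rng.normal(scale=0.8, size=200)
    auditory = shared + rng.normal(scale=0.8, size=200)
    print(f"sensorimotor-auditory connectivity: {roi_connectivity(sensorimotor, auditory):.2f}")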

    Altered resting-state network connectivity in stroke patients with and without apraxia of speech

    Motor speech disorders, including apraxia of speech (AOS), account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS), inferior frontal gyrus (IFG), and ventral premotor cortex (PM)) in a group of 32 left hemisphere stroke patients and 18 healthy, age-matched controls. Two expert clinicians rated the severity of AOS, dysarthria, and nonverbal oral apraxia in the patients. Fifteen individuals were categorized as having AOS and 17 as AOS-absent. Comparison of connectivity in patients with and without AOS demonstrated that AOS patients had reduced connectivity between bilateral PM, and this reduction correlated with the severity of AOS impairment. In addition, AOS patients had negative connectivity between the left PM and right aINS, and this effect decreased with increasing severity of nonverbal oral apraxia. These results highlight left PM involvement in AOS, begin to differentiate its neural mechanisms from those of other motor impairments following stroke, and help inform us of the neural mechanisms driving differences in speech motor planning and programming impairment following stroke.
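
    In their simplest form, the group contrast and severity relationship described above reduce to a two-sample test on a connectivity value plus a correlation with a clinical rating; the Python sketch below shows that skeleton on simulated numbers (group sizes follow the abstract, but the values themselves are invented).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Simulated bilateral-PM connectivity values (e.g., Fisher-z); not real data.
    aos_conn = rng.normal(loc=0.2, scale=0.15, size=15)      # 15 patients with AOS
    no_aos_conn = rng.normal(loc=0.4, scale=0.15, size=17)   # 17 patients without AOS
    t_stat, p_val = stats.ttest_ind(aos_conn, no_aos_conn)
    print(f"AOS vs. AOS-absent connectivity: t = {t_stat:.2f}, p = {p_val:.4f}")

    # Simulated AOS severity ratings for the AOS group, again purely illustrative.
    severity = rng.uniform(1, 7, size=15)
    r, p = stats.pearsonr(aos_conn, severity)
    print(f"connectivity vs. AOS severity: r = {r:.2f}, p = {p:.3f}")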