
    Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

    A cortical network processes auditory error signals during human speech production to maintain fluency

    Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not previously been implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.
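
The efference-copy comparison described in this abstract can be caricatured in a few lines of Python. This is a toy illustration only, not the authors' model; the function name, the one-step delay, and the numeric trajectory are all invented for the sketch:

```python
def feedback_error(predicted, actual, gain=1.0):
    """Compare predicted (efference-copy) feedback with actual feedback.

    Returns a corrective signal proportional to the mismatch; a zero
    error means production matched the internal estimate.
    """
    return gain * (actual - predicted)

# Under delayed auditory feedback, the produced signal is heard late,
# so each comparison is made against a stale sample and the error
# signal never settles to zero while the utterance continues.
produced = [1.0, 1.2, 1.4, 1.6]      # predicted (intended) trajectory
delayed = [0.0] + produced[:-1]      # feedback shifted by one step
errors = [feedback_error(p, a) for p, a in zip(produced, delayed)]
```

On this toy trajectory every error term is nonzero, mirroring the continuous mismatch the study links to the sustained response enhancement during long utterances.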

    Articulating: the neural mechanisms of speech production

    Speech production is a highly complex sensorimotor task involving tightly coordinated processing across large expanses of the cerebral cortex. Historically, the study of the neural underpinnings of speech suffered from the lack of an animal model. The development of non-invasive structural and functional neuroimaging techniques in the late 20th century has dramatically improved our understanding of the speech network. Techniques for measuring regional cerebral blood flow have illuminated the neural regions involved in various aspects of speech, including feedforward and feedback control mechanisms. In parallel, we have designed, experimentally tested, and refined a neural network model detailing the neural computations performed by specific neuroanatomical regions during speech. Computer simulations of the model account for a wide range of experimental findings, including data on articulatory kinematics and brain activity during normal and perturbed speech. Furthermore, the model is being used to investigate a wide range of communication disorders. Funded by NIDCD NIH HHS grants R01 DC002852, R01 DC007683, and R01 DC016270.

    Involvement of the cortico-basal ganglia-thalamocortical loop in developmental stuttering

    Stuttering is a complex neurodevelopmental disorder that has to date eluded a clear explication of its pathophysiological bases. In this review, we utilize the Directions Into Velocities of Articulators (DIVA) neurocomputational modeling framework to mechanistically interpret relevant findings from the behavioral and neurological literatures on stuttering. Within this theoretical framework, we propose that the primary impairment underlying stuttering behavior is malfunction in the cortico-basal ganglia-thalamocortical (hereafter, cortico-BG) loop that is responsible for initiating speech motor programs. This theoretical perspective predicts three possible loci of impaired neural processing within the cortico-BG loop that could lead to stuttering behaviors: impairment within the basal ganglia proper; impairment of axonal projections between cerebral cortex, basal ganglia, and thalamus; and impairment in cortical processing. These theoretical perspectives are presented in detail, followed by a review of empirical data that make reference to these three possibilities. We also highlight any differences that are present in the literature based on examining adults versus children, which give important insights into potential core deficits associated with stuttering versus compensatory changes that occur in the brain as a result of having stuttered for many years in the case of adults who stutter. We conclude with outstanding questions in the field and promising areas for future studies that have the potential to further advance mechanistic understanding of neural deficits underlying persistent developmental stuttering. Funded by NIDCD NIH HHS grants R01 DC007683 and R01 DC011277.

    Dynamics of Vocalization-Induced Modulation of Auditory Cortical Activity at Mid-utterance

    Background: Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback that occurs at the onset of speech utterances. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance are different from those involved in sensorimotor control at utterance onset. The present study attempted to examine the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance. Methodology/Principal findings: Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas suppression effects of P2 responses were observed when voice auditory feedback was distorted by pure tones or white noise. Conclusion/Significance: The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.
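
The 100-cent perturbation used in this paradigm corresponds to one equal-tempered semitone, and the resulting frequency follows the standard cents formula. A minimal sketch (the 220 Hz fundamental is an illustrative value, not one taken from the study):

```python
def shift_by_cents(f0_hz, cents):
    """Return the frequency after shifting f0 by the given number of cents.

    100 cents is one equal-tempered semitone; 1200 cents is one octave.
    """
    return f0_hz * 2 ** (cents / 1200)

# A 100-cent upward shift applied to a 220 Hz voice fundamental
# raises it by about 13 Hz (to roughly 233.1 Hz).
shifted = shift_by_cents(220.0, 100)
```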

    Conflict monitoring in speech processing: an fMRI study of error detection in speech production and perception

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. Using fMRI during a tongue twister task, we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production, as hypothesized by these accounts. On the contrary, the results are highly compatible with a domain-general approach to speech monitoring, by which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain-general executive center (e.g., the ACC).

    Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

    This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. Funded by the National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925).
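
The compensation-to-perturbation behavior such models exhibit rests on combining a learned feedforward command with a feedback correction. The sketch below is a deliberately reduced caricature of that division of labor, not the published model's implementation; all names and gains are invented:

```python
def motor_command(feedforward, target, sensed, feedback_gain=0.5):
    """Combine a learned feedforward command with a feedback correction.

    The feedback term pushes the command toward the target whenever the
    sensed output deviates from it, as when the jaw is perturbed.
    """
    return feedforward + feedback_gain * (target - sensed)

# Unperturbed: sensed output matches the target, so the feedforward
# command passes through unchanged.
unperturbed = motor_command(1.0, 5.0, 5.0)
# Perturbed: a perturbation lowers the sensed output, and the feedback
# term adds a compensatory correction to the command.
perturbed = motor_command(1.0, 5.0, 4.0)
```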

    Behavioral, computational, and neuroimaging studies of acquired apraxia of speech

    A critical examination of speech motor control depends on an in-depth understanding of network connectivity associated with Brodmann areas 44 and 45 and surrounding cortices. Damage to these areas has been associated with two conditions: the speech motor programming disorder apraxia of speech (AOS) and the linguistic/grammatical disorder of Broca's aphasia. Here we focus on AOS, which is most commonly associated with damage to posterior Broca's area (BA) and adjacent cortex. We provide an overview of our own studies into the nature of AOS, including behavioral and neuroimaging methods, to explore components of the speech motor network that are associated with normal and disordered speech motor programming in AOS. Behavioral, neuroimaging, and computational modeling studies indicate that AOS is associated with impairment in learning feedforward models and/or implementing feedback mechanisms, and with the functional contribution of BA6. While functional connectivity methods are not yet routinely applied to the study of AOS, we highlight the need for focusing on the functional impact of localized lesions throughout the speech network, as well as larger-scale comparative studies to distinguish the unique behavioral and neurological signature of AOS. By coupling these methods with neural network models, we have a powerful set of tools to improve our understanding of the neural mechanisms that underlie AOS, and speech production generally.

    Training of Working Memory Impacts Neural Processing of Vocal Pitch Regulation

    Working memory training can improve the performance of tasks that were not trained. Whether auditory-motor integration for voice control can benefit from working memory training, however, remains unclear. The present event-related potential (ERP) study examined the impact of working memory training on the auditory-motor processing of vocal pitch. Trained participants underwent adaptive working memory training using a digit span backwards paradigm, while control participants did not receive any training. Before and after training, both trained and control participants were exposed to frequency-altered auditory feedback while producing vocalizations. After training, trained participants exhibited significantly decreased N1 amplitudes and increased P2 amplitudes in response to pitch errors in voice auditory feedback. In addition, there was a significant positive correlation between the degree of improvement in working memory capacity and the post-pre difference in P2 amplitudes. Training-related changes in the vocal compensation, however, were not observed. There was no systematic change in either vocal or cortical responses for control participants. These findings provide evidence that working memory training impacts the cortical processing of feedback errors in vocal pitch regulation. This enhanced cortical processing may be the result of increased neural efficiency in the detection of pitch errors between the intended and actual feedback.

    Auditory-motor adaptation is reduced in adults who stutter but not in children who stutter

    Previous studies have shown that adults who stutter produce smaller corrective motor responses to compensate for unexpected auditory perturbations in comparison to adults who do not stutter, suggesting that stuttering may be associated with deficits in integration of auditory feedback for online speech monitoring. In this study, we examined whether stuttering is also associated with deficiencies in integrating and using discrepancies between expected and received auditory feedback to adaptively update motor programs for accurate speech production. Using a sensorimotor adaptation paradigm, we measured adaptive speech responses to auditory formant frequency perturbations in adults and children who stutter and their matched nonstuttering controls. We found that the magnitude of the speech adaptive response for children who stutter did not differ from that of fluent children. However, the adaptation magnitude of adults who stutter in response to formant perturbation was significantly smaller than the adaptation magnitude of adults who do not stutter. Together these results indicate that stuttering is associated with deficits in integrating discrepancies between predicted and received auditory feedback to calibrate the speech production system in adults but not children. This auditory-motor integration deficit thus appears to be a compensatory effect that develops over years of stuttering.
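
Sensorimotor adaptation of the kind measured here is often modeled as a simple error-driven state update across trials; a reduced learning rate then produces the smaller adaptation magnitude reported for adults who stutter. The sketch below is schematic, with invented parameter values rather than ones fitted to the study's data:

```python
def adapt(state, perturbation, learning_rate, trials):
    """Iteratively update a production state to oppose a constant
    formant perturbation; returns the state after each trial."""
    states = []
    for _ in range(trials):
        error = perturbation + state    # heard formant error on this trial
        state -= learning_rate * error  # adjust production against the error
        states.append(state)
    return states

# Higher learning rate -> larger adaptation magnitude after 30 trials.
typical = adapt(0.0, 1.0, 0.2, 30)
# Lower learning rate -> smaller adaptation, as reported for adults who stutter.
reduced = adapt(0.0, 1.0, 0.05, 30)
```

Both trajectories drift opposite in sign to the perturbation, but the low-rate one ends with a smaller final magnitude, matching the group difference in a purely qualitative way.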