
    Automatic imitation of human and computer-generated vocal stimuli

    Observing someone perform an action automatically activates neural substrates associated with executing that action. This covert response, or automatic imitation, is measured behaviourally using the stimulus–response compatibility (SRC) task. In an SRC task, participants are presented with compatible and incompatible response–distractor pairings (e.g., an instruction to say “ba” paired with an audio recording of “da” is an incompatible trial). Automatic imitation is measured as the difference in response times (RT) or accuracy between incompatible and compatible trials. Larger automatic imitation effects have been interpreted as reflecting a stronger covert imitation response. Past results suggest that an action’s biological status affects automatic imitation: human-produced manual actions show enhanced automatic imitation effects compared with computer-generated actions. According to the integrated theory of language comprehension and production, action observation triggers a simulation process, involving covert imitation, to recognize and interpret observed speech actions. Human-generated actions are predicted to result in increased automatic imitation because the simulation process is expected to engage more strongly for actions produced by a speaker who is more similar to the listener. We conducted an online SRC task that presented participants with human and computer-generated speech stimuli to test this prediction. Participants responded faster on compatible than on incompatible trials, showing an overall automatic imitation effect. Yet the human-generated and computer-generated vocal stimuli evoked similar automatic imitation effects. These results suggest that computer-generated speech stimuli evoke the same covert imitative response as human stimuli, thus contradicting this prediction of the integrated theory of language comprehension and production.
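
    To make the measure concrete, here is a minimal sketch (in Python, with hypothetical trial data and column names; not the authors' analysis code) of how the automatic imitation effect is computed as the RT difference between incompatible and compatible trials:

        import pandas as pd

        # Hypothetical trial-level SRC data: one row per trial, with the
        # response time in milliseconds and the trial's compatibility label.
        trials = pd.DataFrame({
            "rt_ms": [412, 455, 430, 498, 405, 470],
            "compatibility": ["compatible", "incompatible", "compatible",
                              "incompatible", "compatible", "incompatible"],
        })

        mean_rt = trials.groupby("compatibility")["rt_ms"].mean()

        # Automatic imitation effect: the RT cost of incompatible trials.
        ai_effect = mean_rt["incompatible"] - mean_rt["compatible"]
        print(f"Automatic imitation effect: {ai_effect:.1f} ms")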

    Decline of auditory-motor speech processing in older adults with hearing loss

    Older adults often experience difficulties in understanding speech, partly because of age-related hearing loss. In young adults, activity in the left articulatory motor cortex is enhanced during speech processing, and it interacts with the auditory cortex via the left-hemispheric dorsal stream. Little is known about the effect of ageing and age-related hearing loss on this auditory-motor interaction and on speech processing in the articulatory motor cortex. It has been proposed that up-regulation of the motor system during speech processing could compensate for hearing loss and auditory processing deficits in older adults. Alternatively, age-related auditory deficits could reduce and distort the input from the auditory cortex to the articulatory motor cortex, suppressing recruitment of the motor system during listening to speech. The aim of the present study was to investigate the effects of ageing and age-related hearing loss on the excitability of the tongue motor cortex during listening to spoken sentences, using transcranial magnetic stimulation and electromyography. Our results show that the excitability of the tongue motor cortex was facilitated during listening to speech in young and older adults with normal hearing. This facilitation was significantly reduced in older adults with hearing loss. These findings suggest a decline in auditory-motor processing of speech in adults with age-related hearing loss.
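
    As a rough illustration of how such facilitation is typically quantified (hypothetical values and a simplified normalization; not the study's analysis pipeline), MEP amplitudes recorded during listening can be compared against a baseline condition:

        import numpy as np

        # Hypothetical MEP peak-to-peak amplitudes (mV) from tongue EMG,
        # recorded while listening to sentences vs. a resting baseline.
        mep_listening = np.array([1.42, 1.65, 1.38, 1.71, 1.55])
        mep_baseline = np.array([1.10, 1.22, 1.05, 1.30, 1.18])

        # A facilitation index above 1 indicates enhanced tongue motor
        # cortex excitability during speech listening relative to baseline.
        facilitation = mep_listening.mean() / mep_baseline.mean()
        print(f"MEP facilitation index: {facilitation:.2f}")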

    Transcranial magnetic stimulation and motor evoked potentials in speech perception research

    Transcranial magnetic stimulation (TMS) has been employed in speech processing research both to manipulate brain activity and to assess cortical excitability by eliciting motor evoked potentials (MEPs). We discuss the history, methodological underpinnings, key contributions, and future directions of studying speech processing using TMS and MEPs, as well as the specific challenges encountered when examining speech processing with these techniques. We suggest that future research may benefit from using TMS in conjunction with neuroimaging methods such as functional magnetic resonance imaging or electroencephalography, and from the development of new stimulation protocols addressing cortico-cortical inhibition/facilitation and interhemispheric connectivity during speech processing.

    Cerebral Hemodynamics in Speech-Related Cortical Areas: Articulation Learning Involves the Inferior Frontal Gyrus, Ventral Sensory-Motor Cortex, and Parietal-Temporal Sylvian Area

    Although motor training programs have been applied to childhood apraxia of speech (AOS), the neural mechanisms of articulation learning are not well understood. To investigate this, we recorded cerebral hemodynamic activity in the left hemisphere of healthy subjects (n = 15) during articulation learning, using near-infrared spectroscopy (NIRS); the articulated utterances were simultaneously recorded and analyzed using spectrograms. The study consisted of two experimental sessions (a modified session and a control session) in which participants were asked to repeatedly articulate the syllables “i-chi-ni” with and without an occlusal splint. The splint was used to increase the vertical dimension of occlusion, mimicking conditions of articulation disorder. There were more articulation errors in the modified session, but the number of errors decreased in the final half of that session, suggesting that articulation learning took place. The hemodynamic NIRS data revealed significant activation during articulation in the frontal, parietal, and temporal cortices. These areas are involved in phonological processing and in articulation planning and execution, and included: (i) the ventral sensory-motor cortex (vSMC), including the Rolandic operculum, precentral gyrus, and postcentral gyrus; (ii) the dorsal sensory-motor cortex, including the precentral and postcentral gyri; (iii) the opercular part of the inferior frontal gyrus (IFGoperc); (iv) the temporal cortex, including the superior temporal gyrus; and (v) the inferior parietal lobe (IPL), including the supramarginal and angular gyri. The posterior Sylvian fissure at the parietal–temporal boundary (area Spt) was selectively activated in the modified session. Furthermore, hemodynamic activity in the IFGoperc and vSMC was increased in the final half of the modified session compared with its initial half, and was negatively correlated with articulation errors during articulation learning in the modified session. These results suggest an essential role of the frontal regions, including the IFGoperc and vSMC, in articulation learning, with sensory feedback mediated through area Spt and the IPL. The study provides clues to the underlying pathology and treatment of childhood apraxia of speech.
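
    The learning-related negative correlation reported above can be illustrated with a minimal sketch (hypothetical per-block values and variable names; not the study's NIRS pipeline):

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-block values from the modified session: mean
        # oxy-Hb response in IFGoperc and articulation errors per block.
        oxy_hb_ifgoperc = np.array([0.21, 0.25, 0.28, 0.31, 0.34, 0.38])
        articulation_errors = np.array([9, 8, 6, 5, 3, 2])

        # A negative r, as reported, means greater frontal activity goes
        # with fewer articulation errors as learning proceeds.
        r, p = pearsonr(oxy_hb_ifgoperc, articulation_errors)
        print(f"r = {r:.2f}, p = {p:.3f}")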

    Effects of stimulus response compatibility on covert imitation of vowels

    When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns; when listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that this covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus–response compatibility (SRC) paradigm, which measures covert imitation by comparing the speed of articulating a prompt following exposure to congruent versus incongruent distracters. Responses tend to be faster for congruent than for incongruent distracters, showing evidence of covert imitation. However, covert imitation has thus far only been demonstrated for a select class of speech sounds, namely consonants, and it is unclear whether it extends to vowels. In two experiments, we tested whether covert imitation effects, as measured with the SRC paradigm, extend to vowels in a consonant–vowel–consonant context, presented in visual, audio, and audiovisual modalities. We presented the prompt at four time points to examine how covert imitation varied over the distracter’s duration. The results of both experiments clearly demonstrated covert imitation effects for vowels, thus supporting simulation theories of speech perception. Covert imitation was not affected by stimulus modality and was maximal at later time points.
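
    A minimal sketch of the time-point analysis implied above (hypothetical RTs and column names; not the authors' code): the compatibility effect is computed separately for each prompt onset within the distracter:

        import pandas as pd

        # Hypothetical trials: RT by distracter congruency and the time
        # point (1-4) at which the prompt appeared during the distracter.
        trials = pd.DataFrame({
            "rt_ms": [520, 560, 510, 575, 500, 585, 495, 600],
            "congruency": ["congruent", "incongruent"] * 4,
            "time_point": [1, 1, 2, 2, 3, 3, 4, 4],
        })

        # Compatibility effect (incongruent minus congruent RT) per time
        # point; an effect growing over time matches the pattern described.
        per_tp = trials.pivot_table(index="time_point", columns="congruency",
                                    values="rt_ms")
        print(per_tp["incongruent"] - per_tp["congruent"])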

    Mapping the speech code: Cortical responses linking the perception and production of vowels

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet the degree to which listeners draw on experience derived from speech production during speech perception has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between the groups receiving altered and unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization, and EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can best be characterized as a mapping of acoustics onto articulation.
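
    As a rough sketch of how the production side of such a design is quantified (hypothetical formant values; the abstract does not specify which acoustic dimension was shifted), adaptation to shifted feedback can be indexed as the change in a produced formant relative to baseline:

        import numpy as np

        # Hypothetical first-formant (F1, Hz) values for one talker:
        # baseline productions vs. productions late in shifted-feedback
        # training.
        f1_baseline = np.array([702, 698, 710, 705, 699])
        f1_trained = np.array([668, 662, 671, 665, 660])

        # Talkers typically adapt by shifting production opposite to the
        # feedback perturbation; index adaptation as the mean F1 change.
        adaptation_hz = f1_trained.mean() - f1_baseline.mean()
        print(f"Mean F1 change: {adaptation_hz:.1f} Hz")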

    The Role of Primary Motor Cortex in Second Language Word Recognition

    The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although recognizing and discriminating non-native speech sounds appears to be an instance of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second language speech perception. The purpose of this study was to investigate the role of speech-related M1 in processing acoustic input in the native (L1) and second language (L2), using repetitive transcranial magnetic stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS to the M1 lip (orbicularis oris) representation. Vowel Space Area (VSA), obtained from recordings of participants reading a passage in the L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability in the aftereffect of the rTMS protocol among the participants: approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% showed a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and that relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support for word recognition in the L2 was found: participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect produced more accurate responses in the L2. In contrast, no effect of rTMS was found on the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS to M1 associated with speech production, supporting its utility as an rTMS aftereffect measure. This result reveals a novel relation between motor cortex activation and speech measures. (Doctoral dissertation, Speech and Hearing Science.)
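
    VSA is conventionally computed as the area of the polygon spanned by the corner vowels in F1-F2 space; here is a minimal sketch using the shoelace formula (hypothetical formant values; the dissertation's exact procedure may differ):

        import numpy as np

        # Hypothetical mean formant values (Hz) for four corner vowels,
        # ordered around the vowel quadrilateral as (F1, F2) pairs.
        corners = np.array([
            (300, 2300),  # /i/
            (650, 1900),  # /ae/
            (750, 1200),  # /a/
            (320, 800),   # /u/
        ], dtype=float)

        # Shoelace formula: area of the polygon in F1-F2 space (Hz^2).
        f1, f2 = corners[:, 0], corners[:, 1]
        vsa = 0.5 * abs(np.dot(f1, np.roll(f2, -1))
                        - np.dot(f2, np.roll(f1, -1)))
        print(f"Vowel space area: {vsa:.0f} Hz^2")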

    Predicting while comprehending language: A theory and review

    Researchers agree that comprehenders regularly predict upcoming language, but they do not always agree on what prediction is (and how to differentiate it from integration) or what constitutes evidence for it. After defining prediction, we show that it occurs at all linguistic levels from semantics to form, and we then propose a theory of the mechanisms comprehenders use to predict. We argue that they predict most effectively using their production system (i.e., prediction-by-production): they covertly imitate the linguistic form of the speaker’s utterance and construct a representation of the underlying communicative intention. Comprehenders can then run this intention through their own production system to prepare the predicted utterance. But doing so takes time and resources, and comprehenders vary in the extent of preparation, with many groups of comprehenders (non-native speakers, illiterates, children, and older adults) using it less than typical young native adults. We thus argue that prediction-by-production is an optional mechanism, which is augmented by mechanisms based on association. Support for our proposal comes from many areas of research: electrophysiological, eye-tracking, and behavioral studies of reading, spoken language processing in the context of visual environments, speech processing, and dialogue.

    The cognitive and neural mechanisms involved in motor imagery of speech

    Inner speech is a common phenomenon that influences motivation, problem-solving, and self-awareness. Motor imagery of speech refers to the simulation of speech that gives rise to the experience of inner speech. Substantial evidence exists that several cortical areas are recruited in general motor imagery processes, including visual and speech motor imagery, but the evidence for primary motor cortex involvement is less clear. One influential model proposes that motor cortex is recruited during speech motor imagery, while another prominent model suggests that motor cortex is bypassed. This thesis presents six experiments that explore the role of motor cortex in speech motor imagery. Experiments 1-3 build on established visual motor imagery tasks and extend them to the speech motor imagery domain for the first time, using behavioural (experiments 1 and 2) and neuroimaging (experiment 3) methods. Experiment 4 uses transcranial magnetic stimulation to explore motor cortex recruitment during a speech imagery condition, relative to motor execution and baseline conditions, in hand and lip muscles. Experiments 5 and 6 use transcranial magnetic stimulation to explore speech motor imagery in tongue muscles, relative to a hearing condition and a baseline condition. The results show that recruitment of motor cortex during speech motor imagery is modulated by task demands: simple speech stimuli do not recruit motor cortex, while complex speech stimuli are more likely to do so. The results have consequences specifically for models that always or never implicate motor cortex: it appears that complex stimuli require more active simulation than simple stimuli. In turn, the results suggest that complex inner speech experiences are linked to motor cortex recruitment. These findings have important ramifications for atypical populations whose inner speech experience may be impaired, such as those who experience auditory verbal hallucinations or those with autism spectrum disorder.

    Sensorimotor processing in speech examined in automatic imitation tasks

    The origin of humans’ imitative capacity to quickly map observed actions onto their motor repertoire has been the source of much debate in cognitive psychology. Past research has provided a comprehensive account of how sensorimotor associative experience forges and modulates the imitative capacity underlying familiar, visually transparent manual gestures. Yet little is known about whether the same associative mechanism is also involved in imitation of visually opaque orofacial movements, or of novel actions that are not part of the observer’s motor repertoire. This thesis aims to establish the role of sensorimotor experience in modulating the imitative capacity underlying communicative orofacial movements, namely speech actions, that are either familiar or novel to perceivers. Chapter 3 first establishes that automatic imitation of speech occurs due to perception-induced motor activation and thus can be used as a behavioural measure to index the imitative capacity underlying speech. Chapter 4 demonstrates that the flexibility observed for the imitative capacity underlying manual gestures extends to the imitative capacity underlying visually perceived speech actions, suggesting that the associative mechanism is also involved in imitation of visually opaque orofacial movements. Chapter 5 further shows that sensorimotor experience with novel speech actions modulates the imitative capacity underlying both novel and familiar speech actions produced using the same articulators. Findings from Chapter 5 thus suggest that the associative mechanism is also involved in imitation of novel actions, and that experience-induced modification probably occurs at the feature level of the perception-production link presumably underlying the imitative capacity. Results are discussed with respect to previous imitation research and more general action-perception research in cognitive and experimental psychology, sensorimotor interaction studies in speech science, and native versus non-native processing in second language research. Overall, it is concluded that the development of speech imitation follows the same basic associative learning rules as the development of imitation in other effector systems.