    Functional brain outcomes of L2 speech learning emerge during sensorimotor transformation

    Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual-differences analyses showed that learning, versus lack of improvement, on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items at the insular cortex, pre-SMA/SMA, and cerebellum. Our results underscore the importance of ST as a process underlying successful imitation of non-native speech.
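
    The individual-differences analysis this abstract describes can be sketched as a group-level fMRI model with each participant's learning gain as a covariate. The sketch below is an illustrative reconstruction using nilearn, not the authors' pipeline; the contrast-map file names and the covariate values are hypothetical placeholders.

```python
# Illustrative sketch only: relate a behavioural covariate (pre-scan
# imitation learning gain) to subject-level BOLD contrast maps with nilearn.
# File names and scores below are hypothetical placeholders.
import numpy as np
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

n_subjects = 20
# One first-level contrast image per participant, e.g. non-native > native
# during sensorimotor transformation (pre-imitation listening).
contrast_maps = [f"sub-{i:02d}_nonnative_gt_native.nii.gz"
                 for i in range(1, n_subjects + 1)]

design = pd.DataFrame({
    "learning_gain": np.random.default_rng(0).normal(size=n_subjects),
    "intercept": np.ones(n_subjects),
})

model = SecondLevelModel(smoothing_fwhm=6.0)
model = model.fit(contrast_maps, design_matrix=design)
# Voxels where ST activation for non-native speech scales with learning gain.
z_map = model.compute_contrast("learning_gain", output_type="z_score")
```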

    Magnetic resonance imaging of the brain and vocal tract: Applications to the study of speech production and language learning

    The human vocal system is highly plastic, allowing for the flexible expression of language, mood and intentions. However, this plasticity is not stable throughout the life span, and it is well documented that adult learners encounter greater difficulty than children in acquiring the sounds of foreign languages. Researchers have used magnetic resonance imaging (MRI) to interrogate the neural substrates of vocal imitation and learning, and the correlates of individual differences in phonetic “talent”. In parallel, a growing body of work using MR technology to directly image the vocal tract in real time during speech has offered primarily descriptive accounts of phonetic variation within and across languages. In this paper, we review the contribution of neural MRI to our understanding of vocal learning, and give an overview of vocal tract imaging and its potential to inform the field. We propose methods by which our understanding of speech production and learning could be advanced through the combined measurement of articulation and brain activity using MRI – specifically, we describe a novel paradigm, developed in our laboratory, that uses both MRI techniques to map, for the first time, directly between neural, articulatory and acoustic data in the investigation of vocalisation. This non-invasive, multimodal imaging method could be used to track central and peripheral correlates of spoken language learning, and speech recovery in clinical settings, as well as provide insights into potential sites for targeted neural interventions.

    Singers show enhanced performance and neural representation of vocal imitation

    Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of the right somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
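
    One plausible way to score the pitch side of this imitation task is to compare the fundamental frequency of a produced utterance with its auditory target. The sketch below is a minimal assumption-laden stand-in (not the study's acoustic analysis, and VTL estimation is omitted), using librosa's pYIN tracker; the WAV file names are hypothetical.

```python
# Minimal sketch (not the study's analysis): score pitch imitation accuracy
# by comparing median fundamental frequency of a produced utterance with its
# auditory target. File names are hypothetical.
import librosa
import numpy as np

def median_f0(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=75.0, fmax=600.0, sr=sr)
    return np.nanmedian(f0[voiced_flag])   # median over voiced frames only

target_f0 = median_f0("target.wav")
produced_f0 = median_f0("imitation.wav")

# Pitch error in semitones; smaller magnitude = more accurate imitation.
error_semitones = 12 * np.log2(produced_f0 / target_f0)
print(f"pitch imitation error: {error_semitones:+.2f} semitones")
```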

    Sensorimotor processing in speech examined in automatic imitation tasks

    The origin of humans’ imitative capacity to quickly map observed actions onto their motor repertoire has been the source of much debate in cognitive psychology. Past research has provided a comprehensive account of how sensorimotor associative experience forges and modulates the imitative capacity underlying familiar, visually transparent manual gestures. Yet, little is known about whether the same associative mechanism is also involved in imitation of visually opaque orofacial movements or novel actions that were not part of the observers’ motor repertoire. This thesis aims to establish the role of sensorimotor experience in modulating the imitative capacity underlying communicative orofacial movements, namely speech actions, that are either familiar or novel to perceivers. Chapter 3 first establishes that automatic imitation of speech occurs due to perception-induced motor activation and thus can be used as a behavioural measure to index the imitative capacity underlying speech. Chapter 4 demonstrates that the flexibility observed for the imitative capacity underlying manual gestures extends to the imitative capacity underlying visually perceived speech actions, suggesting that the associative mechanism is also involved in imitation of visually opaque orofacial movements. Chapter 5 further shows that sensorimotor experience with novel speech actions modulates the imitative capacity underlying both novel and familiar speech actions produced using the same articulators. Thus, findings from Chapter 5 suggest that the associative mechanism is also involved in imitation of novel actions and that experience-induced modification probably occurs at the feature level in the perception-production link presumably underlying the imitative capacity. Results are discussed with respect to previous imitation research and more general action-perception research in cognitive and experimental psychology, sensorimotor interaction studies in speech science, and native versus non-native processing in second language research. Overall, it is concluded that the development of speech imitation follows the same basic associative learning rules as the development of imitation in other effector systems.
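
    For context on how automatic imitation is typically indexed behaviourally (the thesis's exact measure may differ), the standard stimulus-response compatibility effect is simply a difference of mean reaction times between incompatible and compatible trials. A toy sketch with placeholder data:

```python
# Toy sketch of the standard automatic-imitation (compatibility) effect:
# slower responses when an observed action conflicts with the required one.
# Data and column names are placeholders, not the thesis's data.
import pandas as pd

trials = pd.DataFrame({
    "condition": ["compatible", "incompatible"] * 3,
    "rt_ms": [412, 455, 398, 471, 405, 462],
    "correct": [True] * 6,
})

means = trials[trials["correct"]].groupby("condition")["rt_ms"].mean()
effect = means["incompatible"] - means["compatible"]
# Larger positive effects index stronger perception-induced motor activation.
print(f"automatic imitation effect: {effect:.1f} ms")
```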

    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    An open-source toolbox for measuring vocal tract shape from real-time magnetic resonance images

    Real-time magnetic resonance imaging (rtMRI) is a technique that provides high-contrast videographic data of human anatomy in motion. Applied to the vocal tract, it is a powerful method for capturing the dynamics of speech and other vocal behaviours by imaging structures internal to the mouth and throat. These images provide a means of studying the physiological basis for speech, singing, expressions of emotion, and swallowing that are otherwise not accessible for external observation. However, taking quantitative measurements from these images is notoriously difficult. We introduce a signal processing pipeline that produces outlines of the vocal tract from the lips to the larynx as a quantification of the dynamic morphology of the vocal tract. Our approach performs simple tissue classification, but constrained to a researcher-specified region of interest. This combination facilitates feature extraction while retaining the domain-specific expertise of a human analyst. We demonstrate that this pipeline generalises well across datasets covering behaviours such as speech, vocal size exaggeration, laughter, and whistling, as well as producing reliable outcomes across analysts, particularly among users with domain-specific expertise. With this article, we make this pipeline available for immediate use by the research community, and further suggest that it may contribute to the continued development of fully automated methods based on deep learning algorithms.
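
    The pipeline's core idea, tissue classification constrained to a researcher-specified region of interest, can be sketched as below. This is a minimal illustration of the approach as described, not the toolbox's actual code; the Otsu threshold is an assumed stand-in for whatever classifier the toolbox uses.

```python
# Minimal sketch of ROI-constrained tissue classification for one rtMRI
# frame. The Otsu threshold is an assumption, not the toolbox's method.
import numpy as np
from skimage.filters import threshold_otsu

def classify_tissue(frame, roi_mask):
    """Label bright (tissue) vs. dark (airway) pixels inside the ROI only."""
    t = threshold_otsu(frame[roi_mask])   # threshold fit to ROI intensities
    tissue = np.zeros(frame.shape, dtype=bool)
    tissue[roi_mask] = frame[roi_mask] > t
    return tissue

# Synthetic demonstration frame: a bright region standing in for tissue.
frame = np.random.default_rng(0).random((64, 64)) * 0.2
frame[20:40, 20:40] += 0.7
roi = np.zeros((64, 64), dtype=bool)
roi[10:50, 10:50] = True
print(classify_tissue(frame, roi).sum(), "tissue pixels inside ROI")
```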

    Electrocorticography is superior to subthalamic local field potentials for movement decoding in Parkinson’s disease

    Brain signal decoding promises significant advances in the development of clinical brain-computer interfaces (BCIs). In Parkinson's disease (PD), the first bidirectional BCI implants for adaptive deep brain stimulation (DBS) are now available. Brain signal decoding can extend the clinical utility of adaptive DBS, but the impact of neural source, computational methods and PD pathophysiology on decoding performance is unknown. This represents an unmet need for the development of future neurotechnology. To address this, we developed an invasive brain-signal decoding approach based on intraoperative sensorimotor electrocorticography (ECoG) and subthalamic local field potentials (LFP) to predict grip force, a representative movement decoding application, in 11 PD patients undergoing DBS. We demonstrate that ECoG is superior to subthalamic LFP for accurate grip-force decoding. Gradient boosted decision trees (XGBoost) outperformed other model architectures. ECoG-based decoding performance correlated negatively with motor impairment, which could be attributed to subthalamic beta bursts in the motor preparation and movement period. This highlights the impact of PD pathophysiology on the neural capacity to encode movement vigor. Finally, we developed a connectomic analysis that could predict the grip-force decoding performance of individual ECoG channels across patients by using their connectomic fingerprints. Our study provides a neurophysiological and computational framework for invasive brain signal decoding to aid the development of an individualized precision-medicine approach to intelligent adaptive DBS.
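
    The decoding setup named in this abstract, gradient boosted decision trees regressing grip force on neural features, can be sketched with synthetic data as follows; the real feature extraction (e.g., band power from ECoG channels) and the paper's exact cross-validation scheme are not reproduced here.

```python
# Sketch of grip-force decoding with gradient boosted trees (XGBoost) on
# synthetic stand-in features; not the study's features or validation scheme.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 48))                 # e.g. band power x channels
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.standard_normal(600)

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
# Plain k-fold shown for brevity; time-resolved neural data would need a
# temporally blocked split to avoid leakage.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"decoding R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```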

    Visuomotor integration and visuomotor skill learning depend on local plasticity in visual cortex during development

    Visuomotor experience shapes responses in visual cortex during development. Coupling between movement and visual feedback establishes a comparator circuit between top-down and bottom-up inputs in layer 2/3 of mouse primary visual cortex (V1). Such a circuit is capable of computing prediction error responses in layer 2/3 excitatory neurons in V1. Given that visual cortex receives both the bottom-up visual input and signals consistent with a top-down prediction of visual flow given movement, it has been speculated that visual cortex is a site of integration of these two signals. If correct, we would predict that perturbing plasticity in V1 during development should prevent the establishment of a normal balance between bottom-up and top-down input and, consequently, impair visuomotor prediction error responses in layer 2/3 neurons of primary visual cortex. In Chapter I, we tested whether local plasticity in visual cortex is necessary for the establishment of this balance by locally perturbing neural plasticity. Our results show that perturbing NMDA receptor-dependent plasticity during development of the visual system leads to a reduction in visuomotor prediction error responses, and that plasticity in V1 is crucial for the development of normal visuomotor integration. In Chapter II, we further investigated the balance of top-down and bottom-up inputs in V1 and asked, given that pro-psychotic agents (e.g., hallucinogens) can influence visual cortex activity, whether antipsychotic drugs also induce common circuit changes. We investigated three antipsychotic drugs: Haloperidol, Clozapine and Aripiprazole, with the aim of identifying a common functional signature possibly underpinning their clinical efficacy. The most common change was a decrease in visuomotor prediction errors in layer 2/3 neurons. Clozapine, one of the most effective of these drugs, decreased the activity of inhibitory neurons thought to mediate visual feedforward signals and increased mean activity in layer 5. Overall, however, we did not find changes common to all three antipsychotic drugs.
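
    The comparator computation at the heart of this account, a prediction error taken as the difference between a movement-derived top-down prediction and bottom-up visual flow, can be illustrated with a toy simulation; this is a conceptual sketch, not the thesis's model or analysis code.

```python
# Toy illustration of a layer 2/3 comparator: a rectified mismatch response
# when bottom-up visual flow deviates from the locomotion-based prediction.
import numpy as np

rng = np.random.default_rng(1)
running_speed = np.abs(rng.standard_normal(1000))   # locomotion signal
gain = 1.0                                          # learned visuomotor coupling
visual_flow = gain * running_speed.copy()
visual_flow[400:450] = 0.0                          # mismatch: flow halts mid-run

prediction = gain * running_speed                   # top-down expectation
mismatch = np.maximum(prediction - visual_flow, 0)  # rectified prediction error
print("peak mismatch at frame", int(mismatch.argmax()))
```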

    Conversational Movement Dynamics and Nonverbal Indicators of Second Language Development: A Microgenetic Approach

    This dissertation study extends current understandings of gesture and embodied interaction with the eco-social environment in second language development (SLD) while introducing new aspects of movement analysis through dynamical modeling. To understand the role of embodiment during learning activities, a second language learning task has been selected. Dyads consisting of a non-native English-speaking student and a native English-speaking tutor were video recorded during writing consultations centered on class assignments provided by the student. Cross-recurrence quantification analysis was used to measure interactional movement synchrony between the members of each dyad. Results indicate that students with varied English proficiency levels synchronize movements with their tutors over brief, frequent periods of time. Synchronous movement pattern complexity is highly variable across and within the dyads. Additionally, co-speech gesture and gesture independent of speech were analyzed qualitatively to identify the role of gesture as related to SLD events. A range of movement types were used during developmental events by the students and tutors to interact with their partner. The results indicated that language development occurs within a movement-rich context through negotiated interaction which depends on a combination of synchronized and synergistic movements. Synchronized movements exhibited complex, dynamical behaviors including variability, self-organization, and emergent properties. Synergistic movement emergence revealed how the dualistic presence of the self/other in each dyad creates a functioning intersubjective space. Overall, the dyads demonstrated that movement is a salient factor in the writing consultation activity.
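
    For readers unfamiliar with the method, the core of cross-recurrence quantification analysis is a pairwise recurrence matrix between the two movement time series; the recurrence rate computed below is one standard summary measure. The signals and radius are illustrative assumptions, not the dissertation's data or parameter settings.

```python
# Minimal cross-recurrence sketch: the fraction of time-point pairs at which
# two movement series fall within a radius of each other (recurrence rate).
# Signals and the radius are illustrative, not the dissertation's data.
import numpy as np

def cross_recurrence_rate(x, y, radius=0.5):
    d = np.abs(x[:, None] - y[None, :])   # pairwise distance matrix
    return float((d < radius).mean())

rng = np.random.default_rng(2)
t = np.linspace(0, 20, 500)
student = np.sin(t) + 0.3 * rng.standard_normal(500)       # student movement
tutor = np.sin(t - 0.4) + 0.3 * rng.standard_normal(500)   # lagged tutor movement
print(f"cross-recurrence rate: {cross_recurrence_rate(student, tutor):.3f}")
```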