
    Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

    This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. This work was supported by the National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925).
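    The compensation result above rests on a feedback-control principle: a sensory error between the intended and actual articulator state drives a corrective motor command. The sketch below is a minimal toy illustration of that principle only, not the published model; the gains, the first-order plant, and all names are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the published model): a one-dimensional feedback
# controller showing how an articulator can compensate for a sustained,
# jaw-like perturbation. All gains and constants are assumptions.

def simulate(perturbation=0.5, feedback_gain=0.8, steps=100, dt=0.01):
    target = 1.0      # desired articulator position (arbitrary units)
    position = 0.0    # current articulator position
    trajectory = []
    for t in range(steps):
        load = perturbation if t >= steps // 2 else 0.0  # perturbation onset mid-trial
        error = target - position                        # sensory error signal
        command = feedback_gain * error                  # corrective motor command
        position += dt * 10.0 * command - dt * load      # simple first-order plant
        trajectory.append(position)
    return np.array(trajectory)

traj = simulate()
# The controller holds position near the target despite the load.
print(f"final position with perturbation: {traj[-1]:.3f} (target 1.0)")
```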

    The Nature of Consciousness in the Visually Deprived Brain

    Vision plays a central role in how we represent and interact with the world around us. The primacy of vision is structurally embedded in cortical organization, as about one-third of the cortical surface in primates is involved in visual processes. Consequently, the loss of vision, either at birth or later in life, affects brain organization and the way the world is perceived and acted upon. In this paper, we address a number of issues on the nature of consciousness in people deprived of vision. Do brains from sighted and blind individuals differ, and how? How does the brain of someone who has never had any visual perception form an image of the external world? What is the subjective correlate of activity in the visual cortex of a subject who has never seen in life? More generally, what can we learn about the functional development of the human brain in physiological conditions by studying blindness? We discuss findings from animal research as well as from recent psychophysical and functional brain imaging studies in sighted and blind individuals that shed some new light on the answers to these questions.

    Conflict monitoring in speech processing: an fMRI study of error detection in speech production and perception

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level of speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. Using fMRI during a tongue-twister task, we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes the pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production, as hypothesized by these accounts. On the contrary, the results are highly compatible with a domain-general approach to speech monitoring, in which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain-general executive center (e.g., the ACC).
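    The conflict-based account invoked above is often made concrete by quantifying conflict as the energy of coactive, mutually inhibitory response units (in the spirit of conflict-monitoring models such as Botvinick et al., 2001). Below is a minimal sketch under that assumption; the activations and inhibition weight are illustrative, not values from the study.

```python
import numpy as np

# Hedged sketch of an energy-based conflict measure: conflict is the
# sum of pairwise products of coactive, mutually inhibitory response
# units, so it peaks when competing responses are active together.

def conflict(activations, inhibition=1.0):
    """Energy-based conflict over mutually inhibitory response units."""
    a = np.asarray(activations, dtype=float)
    total = 0.0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            total += inhibition * a[i] * a[j]
    return total

print(conflict([0.9, 0.1]))  # clear winner -> low conflict (0.09)
print(conflict([0.6, 0.6]))  # two coactive candidates -> high conflict (0.36)
```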

    Force Amplitude Modulation of Tongue and Hand Movements

    Rapid, precise movements of the hand and tongue are necessary to complete a wide range of tasks in everyday life. However, the understanding of normal neural control of force production is limited, particularly for the tongue. Functional neuroimaging studies of incremental hand pressure production in healthy adults revealed scaled activations in the basal ganglia, but no imaging studies of tongue force regulation have been reported. The purposes of this study were (1) to identify the neural substrates controlling tongue force for speech and nonspeech tasks, (2) to determine which activations scaled with the magnitude of force produced, and (3) to assess whether positional modifications influenced maximum pressures and the accuracy of pressure target matching for hand and tongue movements. Healthy older adults compressed small plastic bulbs in the oral cavity (for speech and nonspeech tasks) and in the hand at specified fractions of maximum voluntary contraction while magnetic resonance images were acquired. Volume-of-interest analysis at the individual and group levels outlined a network of neural substrates controlling tongue speech and nonspeech movements. Repeated-measures analysis revealed differences in percentage signal change and activation volume across task and effort level in some brain regions. Actual pressures and the accuracy of pressure matching were influenced by effort level in all tasks and by body position in the hand squeeze task. The current results can serve as a basis for comparison for tongue movement control in individuals with neurological disease. Group differences in motor control mechanisms may help explain the differential responses of limb and tongue movements to medical interventions (as occurs in Parkinson disease) and may ultimately lead to more focused interventions for dysarthria in conditions such as PD.
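    As a hypothetical illustration of the pressure-matching paradigm described above, the sketch below derives target pressures as fractions of maximum voluntary contraction (MVC) and scores matching accuracy as percentage deviation from target. The fractions, pressures, and error metric are placeholder assumptions, not the study's protocol values.

```python
# Illustrative sketch of fraction-of-MVC pressure targets and a simple
# accuracy score. All numbers are assumptions for illustration.

def target_pressures(mvc_kpa, fractions=(0.1, 0.25, 0.5)):
    """Target pressures (kPa) at specified fractions of MVC."""
    return {f: f * mvc_kpa for f in fractions}

def matching_error(produced_kpa, target_kpa):
    """Signed matching error as a percentage of the target pressure."""
    return 100.0 * (produced_kpa - target_kpa) / target_kpa

targets = target_pressures(mvc_kpa=60.0)   # e.g., a tongue MVC of 60 kPa
print(targets)                             # {0.1: 6.0, 0.25: 15.0, 0.5: 30.0}
print(matching_error(produced_kpa=7.2, target_kpa=targets[0.1]))  # +20% overshoot
```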

    Mapping the Spatial and Temporal Dynamics of Sensorimotor Integration During the Perception and Performance of Swallowing

    Similar to other complex sequences of muscle activity, swallowing relies heavily upon ‘sensorimotor integration.’ It is well known that the premotor cortex and primary sensorimotor cortices provide critical sensorimotor contributions that help control the strength and timing of swallowing muscle effectors. However, the temporal dynamics of sensorimotor integration remain unclear, even when swallowing is performed normally without neurological compromise. Recent advances in EEG blind source separation via independent component analysis offer a novel and exciting opportunity to measure cortical sensorimotor activity in real time during swallowing, concurrently with muscle activity during swallow initiation. In the current study, mu components were identified, with characteristic alpha (~10 Hz) and beta (~20 Hz) frequency bands; spectral power within these bands is known to index somatosensory and motor activity, respectively. Twenty-five adult participants performed swallowing and tongue-tapping (motor control) tasks. Additionally, they were asked to watch a video depicting swallowing and a scrambled kaleidoscope (perceptual control) version of the same video. Independent component analysis of raw EEG signals identified bilateral clusters of mu components, maximally localized to the premotor cortex (BA6), in 19 participants during the production and perception tasks. Event-related spectral perturbation (ERSP) analysis was used to identify spectral power within the alpha and beta peaks of the mu cluster across time. Alpha and beta event-related desynchronization (ERD), indicative of somatosensory and motor activity, was revealed for both tongue tapping and swallowing beginning at ~500 ms following a visual cue to “go.” However, the patterns of ERD are stronger (pFD
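    The ERSP measure used above can be sketched generically: per-trial spectral power of a component's activity is averaged across trials and expressed in dB relative to an early baseline, so negative values in the alpha and beta bands index ERD. This is a plain numpy/scipy sketch, not the study's pipeline; the sampling rate, window length, baseline interval, and band edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

# Hedged sketch of an ERSP/ERD computation for one (e.g., ICA-derived
# mu) component. Epochs are assumed to be (n_trials, n_samples); the
# first 0.5 s of each epoch is treated as a pre-cue baseline.

def ersp_db(epochs, sfreq, baseline_s=0.5):
    powers = []
    for trial in epochs:
        freqs, times, Sxx = spectrogram(trial, fs=sfreq, nperseg=int(sfreq // 4))
        powers.append(Sxx)
    mean_power = np.mean(powers, axis=0)                  # (n_freqs, n_times)
    base = mean_power[:, times < baseline_s].mean(axis=1, keepdims=True)
    return freqs, times, 10.0 * np.log10(mean_power / base)  # dB vs. baseline

rng = np.random.default_rng(0)
epochs = rng.standard_normal((25, 1000))                  # 25 toy trials, 4 s at 250 Hz
freqs, times, ersp = ersp_db(epochs, sfreq=250.0)
alpha = ersp[(freqs >= 8) & (freqs <= 13)].mean(axis=0)   # ~10 Hz mu-alpha band
beta = ersp[(freqs >= 15) & (freqs <= 25)].mean(axis=0)   # ~20 Hz mu-beta band
print(alpha.shape, beta.shape)                            # ERD where values < 0 dB
```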

    Inferior frontal oscillations reveal visuo-motor matching for actions and speech: evidence from human intracranial recordings.

    The neural correspondence between the systems responsible for the execution and recognition of actions has been suggested in both humans and non-human primates. Apart from being a key region of this visuo-motor observation-execution matching (OEM) system, the human inferior frontal gyrus (IFG) is also important for speech production. The functional overlap of visuo-motor OEM and speech, together with the phylogenetic history of the IFG as a motor area, has led to the idea that speech function evolved from pre-existing motor systems, and to the hypothesis that an OEM system may also exist for speech. However, visuo-motor OEM and speech OEM have never been compared directly. We used electrocorticography to analyze oscillations recorded from intracranial electrodes in human fronto-parieto-temporal cortex during visuo-motor OEM tasks (executing or visually observing an action) and speech OEM tasks (verbally describing an action using the first or third person pronoun). The results show that neural activity related to visuo-motor OEM is widespread across frontal, parietal, and temporal regions. Speech OEM also elicited widespread responses, partly overlapping with visuo-motor OEM sites (bilaterally), including frontal, parietal, and temporal regions. Interestingly, a more focal region, the inferior frontal gyrus (bilaterally), showed both visuo-motor OEM and speech OEM properties, independent of orolingual speech-unrelated movements. Building on the methodological advantages of human invasive electrocorticography, the present findings provide highly precise spatial and temporal information supporting the existence of a modality-independent action representation system in the human brain that is shared between systems for performing, interpreting, and describing actions.
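    The overlap logic above (sites responsive to both visuo-motor and speech OEM tasks) can be illustrated schematically: flag electrodes whose task responses pass a per-electrode test, then intersect the two sets. The arrays and threshold below are placeholder assumptions, not the study's statistics.

```python
import numpy as np

# Schematic sketch: identify electrodes responsive to each OEM task
# (stand-in p-values replace a real per-electrode oscillation test)
# and intersect the sets to find candidate shared-representation sites.

rng = np.random.default_rng(1)
n_electrodes = 64
p_visuomotor = rng.uniform(size=n_electrodes)   # placeholder per-electrode p-values
p_speech = rng.uniform(size=n_electrodes)

alpha = 0.05
visuomotor_sites = p_visuomotor < alpha
speech_sites = p_speech < alpha
shared_sites = visuomotor_sites & speech_sites  # sites with both OEM properties

print(f"visuo-motor: {visuomotor_sites.sum()}, speech: {speech_sites.sum()}, "
      f"shared: {shared_sites.sum()}")
```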

    Magnetic resonance imaging of the brain and vocal tract: Applications to the study of speech production and language learning

    The human vocal system is highly plastic, allowing for the flexible expression of language, mood, and intentions. However, this plasticity is not stable throughout the life span, and it is well documented that adult learners encounter greater difficulty than children in acquiring the sounds of foreign languages. Researchers have used magnetic resonance imaging (MRI) to interrogate the neural substrates of vocal imitation and learning, and the correlates of individual differences in phonetic “talent”. In parallel, a growing body of work using MR technology to directly image the vocal tract in real time during speech has offered primarily descriptive accounts of phonetic variation within and across languages. In this paper, we review the contribution of neural MRI to our understanding of vocal learning, and give an overview of vocal tract imaging and its potential to inform the field. We propose methods by which our understanding of speech production and learning could be advanced through the combined measurement of articulation and brain activity using MRI; specifically, we describe a novel paradigm, developed in our laboratory, that uses both MRI techniques to map directly, for the first time, between neural, articulatory, and acoustic data in the investigation of vocalisation. This non-invasive, multimodal imaging method could be used to track central and peripheral correlates of spoken language learning and speech recovery in clinical settings, as well as provide insights into potential sites for targeted neural interventions.
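    One ingredient of the proposed combined measurement is aligning articulatory and acoustic feature tracks, sampled at different rates, onto a shared time base before relating them to neural data. The sketch below shows that resampling step only; the frame rates, feature names, and toy signals are assumptions, not the paradigm's actual parameters.

```python
import numpy as np

# Hedged sketch: linearly resample an articulatory track onto the
# acoustic feature time base so the two can be related sample-by-sample.

def resample_to(t_target, t_source, values):
    """Linear interpolation of a feature track onto a target time base."""
    return np.interp(t_target, t_source, values)

t_articulatory = np.arange(0, 2.0, 1 / 33.0)            # e.g., real-time MRI at ~33 fps
t_acoustic = np.arange(0, 2.0, 1 / 100.0)               # e.g., acoustic features at 100 Hz
lip_aperture = np.sin(2 * np.pi * 2 * t_articulatory)   # toy articulatory track
f1 = 500 + 50 * np.sin(2 * np.pi * 2 * t_acoustic)      # toy first-formant track

lip_on_acoustic_grid = resample_to(t_acoustic, t_articulatory, lip_aperture)
corr = np.corrcoef(lip_on_acoustic_grid, f1)[0, 1]      # articulatory-acoustic link
print(f"toy articulatory-acoustic correlation: {corr:.2f}")
```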

    Sequence of Brain Activity Related to Face Naming and the Tip-of-the-Tongue Phenomenon

    Active brain areas and their temporal sequence of activation during the successful retrieval and naming of famous faces (KNOW) and during the tip-of-the-tongue (TOT) state were studied by means of low-resolution electromagnetic tomographic analysis (LORETA) applied to event-related potentials. The results provide evidence that adequate activation of a neural network during the first 500 ms following presentation of the photograph (mainly involving the posterior temporal region, the insula, lateral and medial prefrontal areas, and the medial temporal lobe) is associated with successful retrieval of lexical-phonological information about the person's name. Significant differences between conditions were observed in the 538-698 ms interval; specifically, the anterior cingulate gyrus (ACC) and the supplementary motor area (SMA) showed greater activation in the KNOW than in the TOT condition, possibly in relation to the motor response and as a consequence of the successful retrieval of lexical-phonological information about the person. This work was financially supported by the Spanish Ministerio de Educación y Ciencia (SEF2007-67964-C02-02) and the Galician Consellería de Innovación e Industria (PGIDIT07PXIB211018PR).