Attention-dependent modulation of neural activity in primary sensorimotor cortex
Although motor tasks usually require little attention, there are findings that attention can alter neuronal activity not only in higher motor areas but also within the primary sensorimotor cortex. However, these findings are equivocal: attention effects were investigated in either the dominant or the nondominant hand only; attention was operationalized either as concentration (i.e., attention directed to the motor task) or as distraction (i.e., attention directed away from the motor task); the complexity of the motor tasks varied; and almost no left-handers were studied. Therefore, in this study, both right- and left-handers were investigated with an externally paced button-press task in which subjects typed with the index finger of the dominant, nondominant, or both hands. We introduced four attention levels: attention-modulation-free, distraction (counting backward), concentration on the moving finger, and divided concentration during bimanual movement. We found that distraction reduced neuronal activity in both the contra- and ipsilateral primary sensorimotor cortex when the nondominant hand was tapping, in both handedness groups. At the same time, distraction activated the dorsal frontoparietal attention network and deactivated the ventral default network. We conclude that the difficulty and training status of both the motor and the cognitive task, as well as use of the dominant versus the nondominant hand, are crucial for the presence and magnitude of attention effects on sensorimotor cortex activity. In the case of a very simple button-press task, attention modulation is seen for the nondominant hand under distraction, in both handedness groups.
Articulating: the neural mechanisms of speech production
Speech production is a highly complex sensorimotor task involving tightly coordinated processing across large expanses of the cerebral cortex. Historically, the study of the neural underpinnings of speech suffered from the lack of an animal model. The development of non-invasive structural and functional neuroimaging techniques in the late 20th century has dramatically improved our understanding of the speech network. Techniques for measuring regional cerebral blood flow have illuminated the neural regions involved in various aspects of speech, including feedforward and feedback control mechanisms. In parallel, we have designed, experimentally tested, and refined a neural network model detailing the neural computations performed by specific neuroanatomical regions during speech. Computer simulations of the model account for a wide range of experimental findings, including data on articulatory kinematics and brain activity during normal and perturbed speech. Furthermore, the model is being used to investigate a wide range of communication disorders. R01 DC002852 - NIDCD NIH HHS; R01 DC007683 - NIDCD NIH HHS; R01 DC016270 - NIDCD NIH HHS. Accepted manuscript.
Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production
This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925).
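The compensation to perturbations that the simulations verify can be illustrated with a toy example (a hypothetical one-dimensional articulator with made-up gains, not the model's actual equations): a command corrected by sensory feedback largely cancels a sustained external load, leaving only a small residual error set by the feedback gain.

```python
# Toy 1-D articulator tracking a target position (illustrative sketch only;
# the gains and dynamics are made up, not taken from the model).
target = 1.0     # desired articulator position (arbitrary units)
k_fb = 0.5       # feedback gain (hypothetical value)
perturb = -0.03  # constant external perturbation (e.g. a jaw load) per step

pos = 0.0
trace = []
for _ in range(50):
    error = target - pos            # sensed somatosensory/auditory error
    pos += k_fb * error + perturb   # plant integrates command plus load
    trace.append(pos)

residual = target - trace[-1]       # steady-state error = -perturb / k_fb
print(round(residual, 3))
```

The residual error shrinks as the feedback gain grows, which is the qualitative behavior such feedback-control accounts rely on to explain compensation.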
Similarities between explicit and implicit motor imagery in mental rotation of hands: an EEG study
Chronometric and imaging studies have shown that motor imagery is used implicitly during mental rotation tasks in which subjects, for example, judge the laterality of pictured human hands at various orientations. Since explicit motor imagery is known to activate the sensorimotor areas of the cortex, mental rotation is expected to do the same if it involves a form of motor imagery. So far, functional magnetic resonance imaging and positron emission tomography have been used to study mental rotation, and less attention has been paid to the electroencephalogram (EEG), which offers high temporal resolution. Time-frequency analysis is an established method for studying explicit motor imagery. Although mental rotation of hands is claimed to involve motor imagery, the time-frequency characteristics of mental rotation have never been compared with those of explicit motor imagery. In this study, time-frequency responses of EEG recorded during explicit motor imagery and during a mental rotation task inducing implicit motor imagery were compared. Fifteen right-handed healthy volunteers performed motor imagery of the hands in one condition and hand laterality judgement tasks in another while whole-head EEG was recorded. The hand laterality judgement was the mental rotation task used to induce implicit motor imagery. Time-frequency analysis and sLORETA localisation of the EEG showed that activity in the sensorimotor areas had similar spatial and time-frequency characteristics in the explicit and implicit motor imagery conditions. Furthermore, this sensorimotor activity differed between the left and the right hand in both explicit and implicit motor imagery. This result supports the view that motor imagery is used during mental rotation and that it can be detected and studied with EEG technology. It should encourage the use of mental rotation of body parts in rehabilitation programmes in a similar manner to motor imagery.
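The kind of time-frequency analysis referred to above can be sketched with a synthetic example (a minimal SciPy spectrogram with made-up signal parameters, not the study's actual pipeline): a 10 Hz "mu-like" oscillation that stops halfway through an epoch shows up as a drop in band power over time, the signature of event-related desynchronization.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250                      # hypothetical EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # 4 s epoch
rng = np.random.default_rng(0)

# Synthetic sensorimotor "mu" rhythm at 10 Hz that desynchronizes at t = 2 s.
sig = np.sin(2 * np.pi * 10 * t) * (t < 2) + 0.1 * rng.normal(size=t.size)

f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=fs)
mu_power = Sxx[np.argmin(np.abs(f - 10))]   # power in the 10 Hz bin over time

# Band power drops after the oscillation stops (event-related desynchronization).
print(mu_power[tt < 2].mean() > mu_power[tt > 2].mean())
```

Real analyses average such power estimates over trials and frequency bands, but the time-resolved band-power comparison is the core operation.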
Acoustic Space Learning for Sound Source Separation and Localization on Binaural Manifolds
In this paper we address the problems of modeling the acoustic space generated by a full-spectrum sound source and of using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A non-linear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener, or equivalently, the sound source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence for 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and we propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL), yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
Comment: 19 pages, 9 figures, 3 tables
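The Bayes-inversion step can be sketched for a single affine component (a toy NumPy example with made-up dimensions, noise levels, and prior; PPAM itself mixes many such local affine maps, and the VESSL algorithm adds variational EM on top): given a linear-Gaussian map from a low-dimensional direction to a high-dimensional spectrum, the posterior over the direction is Gaussian in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dy = 2, 50   # 2-D source direction, high-dimensional interaural spectrum

# One local affine map y = A x + b + noise (PPAM uses a mixture of these;
# all values here are synthetic).
A = rng.normal(size=(dy, dx))
b = rng.normal(size=dy)
noise_var = 0.01   # isotropic observation-noise variance (made up)
prior_var = 1.0    # isotropic zero-mean Gaussian prior on x (made up)

x_true = np.array([0.3, -0.7])
y = A @ x_true + b + rng.normal(scale=np.sqrt(noise_var), size=dy)

# Gaussian posterior p(x | y) in closed form:
#   precision = I / prior_var + A^T A / noise_var
post_prec = np.eye(dx) / prior_var + A.T @ A / noise_var
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (A.T @ (y - b) / noise_var)

print(post_mean)  # close to x_true when the map is well conditioned
```

With many observed frequency bins the posterior concentrates tightly around the true direction; the mixture version replaces this single Gaussian with a weighted combination over the local affine components.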
The role of HG in the analysis of temporal iteration and interaural correlation
Self-directedness, integration and higher cognition
In this paper I discuss connections between self-directedness, integration and higher cognition. I present a model of self-directedness as a basis for approaching higher cognition from a situated cognition perspective. According to this model, increases in sensorimotor complexity create pressure for integrative higher-order control and learning processes for acquiring information about the context in which action occurs. This generates complex, articulated, abstractive information processing, which forms the major basis for higher cognition. I present evidence indicating that the same integrative characteristics found in lower cognitive processes such as motor adaptation are present in a range of higher cognitive processes, including conceptual learning. This account helps explain situated cognition phenomena in humans because the integrative processes by which the brain adapts to control interaction are relatively agnostic concerning the source of the structure participating in the process. Thus, from the perspective of the motor control system, using a tool is not fundamentally different from simply controlling an arm.
Functional organization of human sensorimotor cortex for speech articulation.
Speaking is one of the most complex actions that we perform, but nearly all of us learn to do it effortlessly. Production of fluent speech requires the precise, coordinated movement of multiple articulators (for example, the lips, jaw, tongue and larynx) over rapid time scales. Here we used high-resolution, multi-electrode cortical recordings during the production of consonant-vowel syllables to determine the organization of speech sensorimotor cortex in humans. We found speech-articulator representations that are arranged somatotopically on ventral pre- and post-central gyri, and that partially overlap at individual electrodes. These representations were coordinated temporally as sequences during syllable production. Spatial patterns of cortical activity showed an emergent, population-level representation, which was organized by phonetic features. Over tens of milliseconds, the spatial patterns transitioned between distinct representations for different consonants and vowels. These results reveal the dynamic organization of speech sensorimotor cortex during the generation of multi-articulator movements that underlies our ability to speak.
The Structure of Sensorimotor Explanation
The sensorimotor theory of vision and visual consciousness is often described as a radical alternative to the computational and connectionist orthodoxy in the study of visual perception. However, it is far from clear whether the theory represents a significant departure from orthodox approaches or whether it is an enrichment of them. In this study, I tackle this issue by focusing on the explanatory structure of the sensorimotor theory. I argue that the standard formulation of the theory subscribes to the same theses as the dynamical hypothesis and that it affords covering-law explanations. This, however, exposes the theory to the mere-description worry and generates a puzzle about the role of representations. I then argue that the sensorimotor theory is compatible with a mechanistic framework, and show how this can overcome the mere-description worry and solve the problem of the explanatory role of representations. By doing so, it will be shown that the theory should be understood as an enrichment of the orthodoxy, rather than an alternative to it.