    Target Selection for Reaching and Saccades Share a Similar Behavioral Reference Frame in the Macaque

    The selection of one of two visual stimuli as a target for a motor action may depend on external as well as internal variables. We examined whether the preference to select a leftward or rightward target depends on the action that is performed (eye or arm movement) and to what extent the choice is influenced by the target location. Two targets were presented at the same distance to the left and right of a fixation position, and the stimulus onset asynchrony (SOA) was adjusted until both targets were selected equally often. This balanced SOA is then a quantitative measure of selection preference. In the two macaque monkeys tested, we found that the balanced SOA shifted to the left side for left-arm movements and to the right side for right-arm movements. Target selection strongly depended on the horizontal target location. By varying eye, head, and trunk position, we found this dependency to be embedded in a head-centered behavioral reference frame for saccade targets and, somewhat counter-intuitively, for reach targets as well. Target selection for reach movements was influenced by the eye position, while saccade target selection was unaffected by the arm position. These findings suggest that the neural processes underlying target selection for a reaching movement are to a large extent independent of the coordinate frame ultimately used to make the limb movement, but are instead closely linked to the coordinate frame used to plan a saccade to that target. This similarity may be indicative of a common spatial framework for hand-eye coordination.
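
    A minimal sketch of the balanced-SOA procedure described above: a simple 1-up/1-down staircase that nudges the stimulus onset asynchrony until left and right targets are chosen equally often. The function names, step size, and toy observer are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch of a balanced-SOA staircase: shift the stimulus
# onset asynchrony until left and right targets are chosen equally often.
# Names and parameters are illustrative, not from the paper.
import random

def staircase_balanced_soa(choose_target, n_trials=200, step_ms=8.0):
    """choose_target(soa_ms) -> 'left' or 'right' (observer's choice);
    positive soa_ms means the left target appears earlier."""
    soa = 0.0
    history = []
    for _ in range(n_trials):
        choice = choose_target(soa)
        history.append(soa)
        # If the left target was chosen, make it less attractive by
        # delaying it (decrease SOA); otherwise increase SOA.
        soa += -step_ms if choice == 'left' else step_ms
    # Estimate the balanced SOA from the second half of the staircase,
    # after it has (roughly) converged.
    tail = history[n_trials // 2:]
    return sum(tail) / len(tail)

# Toy observer with a built-in 20 ms leftward bias, for illustration only.
balanced = staircase_balanced_soa(
    lambda soa: 'left' if random.gauss(soa + 20.0, 30.0) > 0 else 'right')
print(f"estimated balanced SOA: {balanced:.1f} ms")
```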

    Shared functional connectivity between the dorso-medial and dorso-ventral streams in macaques

    Manipulation of an object requires us to transport our hand towards the object (reach) and close our digits around that object (grasp). In current models, reach-related information is propagated in the dorso-medial stream from posterior parietal area V6A to the medial intraparietal area, dorsal premotor cortex, and primary motor cortex. Grasp-related information is processed in the dorso-ventral stream from the anterior intraparietal area to ventral premotor cortex and the hand area of primary motor cortex. However, recent studies have cast doubt on the validity of this separation into distinct processing streams. We investigated the whole-brain functional connectivity of these areas in 10 male rhesus macaques using resting-state fMRI at 7 T. Although we found a clear separation between dorso-medial and dorso-ventral network connectivity in support of the two-stream hypothesis, we also found evidence of shared connectivity between these networks. The dorso-ventral network was distinctly correlated with higher-order somatosensory areas and feeding-related areas, whereas the dorso-medial network was correlated with visual areas and trunk/hindlimb motor areas. Shared connectivity was found in the superior frontal and precentral gyrus, central sulcus, intraparietal sulcus, precuneus, and insular cortex. These results suggest that while sensorimotor processing streams are functionally separated, they can access information through shared areas.
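
    For illustration, a minimal seed-based functional-connectivity sketch: correlate the mean resting-state time course of a seed area (e.g., AIP or V6A) with every other voxel and Fisher-transform the result. Array names, shapes, and preprocessing are assumed; this is not the authors' pipeline.

```python
# Minimal sketch of seed-based resting-state functional connectivity:
# correlate the mean time course of a seed region with every voxel.
# Shapes and variable names are assumptions, not the paper's pipeline.
import numpy as np

def seed_connectivity(data, seed_mask):
    """data: (n_timepoints, n_voxels) preprocessed BOLD time series,
    seed_mask: boolean (n_voxels,) selecting the seed region.
    Returns Fisher z-transformed correlation of the seed mean signal
    with every voxel."""
    seed_ts = data[:, seed_mask].mean(axis=1)
    seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()
    voxels = (data - data.mean(axis=0)) / data.std(axis=0)
    r = voxels.T @ seed_ts / data.shape[0]               # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z

# Random data standing in for one animal's preprocessed scan.
rng = np.random.default_rng(0)
bold = rng.standard_normal((300, 5000))
mask = np.zeros(5000, dtype=bool)
mask[:50] = True
z_map = seed_connectivity(bold, mask)
```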

    Deep Learning for real-time neural decoding of grasp

    Neural decoding involves correlating signals acquired from the brain to variables in the physical world, such as limb movement or robot control in brain-machine interfaces. Starting from a specific pre-existing dataset of neural recordings from monkey motor cortex, this work presents a deep-learning-based approach to decoding neural signals for grasp-type classification. Specifically, we propose an approach that exploits LSTM networks to classify time series containing neural data (i.e., spike trains) into classes representing the object being grasped. The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge, leveraging only the capability of deep learning models to extract correlations from data. The paper presents the results achieved for the considered dataset and compares them with previous work on the same dataset, showing a significant improvement in classification accuracy, even when considering simulated real-time decoding.
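
    A minimal PyTorch sketch of the idea described above: an LSTM reads binned spike counts over time and outputs a grasp-type class per trial. Layer sizes, bin counts, and the number of classes are illustrative assumptions rather than the paper's architecture.

```python
# Sketch of an LSTM grasp-type classifier over binned spike counts.
# Hyperparameters and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class GraspLSTM(nn.Module):
    def __init__(self, n_units=96, n_classes=5, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_units, hidden_size=hidden,
                            batch_first=True)
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time_bins, n_units)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
        return self.readout(h_n[-1])   # class logits per trial

# One training step on random data standing in for real spike counts.
model = GraspLSTM()
spikes = torch.randn(32, 40, 96)       # 32 trials, 40 time bins, 96 channels
labels = torch.randint(0, 5, (32,))
loss = nn.CrossEntropyLoss()(model(spikes), labels)
loss.backward()
```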

    NFDI-Neuro: building a community for neuroscience research data management in Germany

    The increasing complexity and volume of research data pose growing challenges for scientists to manage their data efficiently. At the same time, availability and reuse of research data are becoming more and more important in modern science. The German government has established an initiative to develop research data management (RDM) and to increase the accessibility and reusability of research data at the national level, the Nationale Forschungsdateninfrastruktur (NFDI). The NFDI Neuroscience (NFDI-Neuro) consortium aims to represent the neuroscience community in this initiative. Here, we review the needs and challenges in RDM faced by researchers, existing and emerging solutions and their benefits, and how the NFDI in general and NFDI-Neuro specifically can support a process for making these solutions better available to researchers. To ensure the development of sustainable research data management practices, both technical solutions and engagement of the scientific community are essential. NFDI-Neuro therefore focuses on community building just as much as on improving the accessibility of technical solutions.

    Representation of continuous hand and arm movements in macaque areas M1, F5, and AIP: a comparative decoding study.

    OBJECTIVE: In the last decade, multiple brain areas have been investigated with respect to their capability for decoding continuous arm or hand movements. So far, these studies have mainly focused on motor or premotor areas such as M1 and F5. However, there is accumulating evidence that the anterior intraparietal area (AIP) in the parietal cortex also contains information about continuous movement. APPROACH: In this study, we decoded 27 degrees of freedom representing complete hand and arm kinematics during a delayed grasping task from simultaneously recorded activity in areas M1, F5, and AIP of two macaque monkeys (Macaca mulatta). MAIN RESULTS: We found that all three areas provided decoding performance significantly above chance. In particular, M1 yielded the highest decoding accuracy, followed by F5 and AIP. Furthermore, we provide support for the notion that AIP not only codes categorical visual features of objects to be grasped but also contains a substantial amount of temporal kinematic information. SIGNIFICANCE: This could be utilized in future developments of neural interfaces restoring hand and arm movements.
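
    As a rough illustration of continuous kinematic decoding, the sketch below uses cross-validated ridge regression to map binned firing rates onto 27 kinematic degrees of freedom and reports a per-DOF correlation. It is a generic stand-in, not necessarily the decoder used in the study; shapes are illustrative.

```python
# Sketch: decode continuous kinematics from binned firing rates with
# cross-validated ridge regression (a generic stand-in decoder).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def decode_kinematics(rates, kinematics, n_folds=5, alpha=1.0):
    """rates: (n_samples, n_neurons) binned firing rates,
    kinematics: (n_samples, 27) joint angles / positions.
    Returns the correlation between decoded and true traces per DOF."""
    preds = np.zeros_like(kinematics)
    for train, test in KFold(n_folds, shuffle=False).split(rates):
        model = Ridge(alpha=alpha).fit(rates[train], kinematics[train])
        preds[test] = model.predict(rates[test])
    return np.array([np.corrcoef(preds[:, d], kinematics[:, d])[0, 1]
                     for d in range(kinematics.shape[1])])

# Random data standing in for one session (e.g. from M1, F5, or AIP).
rng = np.random.default_rng(1)
per_dof_r = decode_kinematics(rng.standard_normal((2000, 150)),
                              rng.standard_normal((2000, 27)))
```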

    Linear model.

    A. Percentage of all LFP sites (both animals, N = 246) that have a significant coefficient for grip type in the various frequency bands (slow, beta, low gamma, and high gamma) and task epochs. B. Percentage of sites with a significant spatial position coefficient. C. White bars indicate the percentage of sites with significant spatial coefficients averaged across bands and animals for the epochs fixation, cue, plan, and movement. Colored lines indicate the fraction of sites with spatial coefficients for target (green), gaze (blue), and both (red). D. Directional tuning of target-modulated sites (all bands) for the task epochs fixation, cue, plan, and movement, as revealed by the linear fit. Tuning directions are derived from the target coefficient vectors (green: target modulated; red: target and gaze modulated). E. Directional tuning of gaze-modulated sites (blue: gaze modulated; red: target and gaze modulated). F. Scatter plots of spatially tuned sites illustrating the angular orientation difference (y-axis) between target (green) and gaze position vectors (blue) against the length contrast (LC) of these vectors (x-axis). Sites with significant target and gaze modulation (red) were considered retinotopic if the coefficient vectors were of comparable length (|LC| < 0.33) and oriented in nearly opposite directions (angular difference > 135°), as indicated by the black rectangles.
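
    The retinotopy criterion in panel F can be sketched as follows; the length-contrast formula (|t| - |g|)/(|t| + |g|) is the conventional definition and is assumed here rather than quoted from the paper.

```python
# Sketch of the retinotopy criterion: compare target and gaze coefficient
# vectors by length contrast and the angle between them. The LC formula
# is the usual normalized length difference, assumed here.
import numpy as np

def is_retinotopic(target_vec, gaze_vec, lc_max=0.33, angle_min_deg=135.0):
    t = np.asarray(target_vec, float)
    g = np.asarray(gaze_vec, float)
    lc = (np.linalg.norm(t) - np.linalg.norm(g)) / (np.linalg.norm(t) + np.linalg.norm(g))
    cos_angle = t @ g / (np.linalg.norm(t) * np.linalg.norm(g))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Comparable length and (nearly) opposite direction -> retinotopic.
    return abs(lc) < lc_max and angle > angle_min_deg

print(is_retinotopic([1.0, 0.2], [-0.9, -0.3]))   # True: opposite, similar length
print(is_retinotopic([1.0, 0.2], [0.9, 0.1]))     # False: same direction
```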

    Decoding simulation.

    Simulated decoding performance of grip type and spatial factors for different frequency bands (slow: green, beta: red, low gamma: blue, high gamma: cyan, and all bands combined: black curves) and both animals (animal P: A-E; animal S: F-J). Individual panels show the decoding performance for grip type (A, F), the 13 different spatial conditions (B, G), as well as for target (C, H), gaze (D, I), and retinotopic target position (E, J), separately for all task epochs. Dashed horizontal lines indicate chance level, and error bars the standard deviation over 100 simulated decoding repetitions.
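
    As a generic illustration of how such error bars and chance levels are typically obtained, the sketch below repeats a decode over resampled train/test splits (here 100 times) and estimates chance by shuffling the labels; this recipe is assumed, not taken from the paper.

```python
# Repeat a decode over resampled splits to get mean +/- std accuracy,
# and shuffle labels to estimate an empirical chance level.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

def repeated_accuracy(X, y, n_reps=100, shuffle_labels=False, seed=0):
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_reps):
        labels = rng.permutation(y) if shuffle_labels else y
        Xtr, Xte, ytr, yte = train_test_split(
            X, labels, test_size=0.25, random_state=int(rng.integers(1 << 31)))
        accs.append(LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte))
    return np.mean(accs), np.std(accs)

# Toy two-class data standing in for band-limited LFP features.
X = np.random.default_rng(1).standard_normal((120, 20))
y = np.repeat(np.arange(2), 60)
print(repeated_accuracy(X, y))                        # decoding accuracy
print(repeated_accuracy(X, y, shuffle_labels=True))   # empirical chance level
```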

    Cortical Local Field Potential Encodes Movement Intentions in the Posterior Parietal Cortex

    The cortical local field potential (LFP) is a summation signal of excitatory and inhibitory dendritic potentials that has recently become of increasing interest. We report that LFP signals in the parietal reach region (PRR) of the posterior parietal cortex of macaque monkeys have a temporal structure that varies with the type of planned or executed motor behavior. LFP signals from PRR provide better decoding performance for reaches than for saccades and show stronger coherency with simultaneously recorded spiking activity during the planning of reach movements than during saccade planning. LFP signals predict the animal’s behavioral state (e.g., planning a reach or saccade) and the direction of the currently planned movement from single-trial information. This evidence provides further support for a role of the parietal cortex in movement planning and for the potential application of LFP signals in a brain-machine interface.
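
    A minimal sketch of spike-field coherency between an LFP trace and a simultaneously recorded spike train, using Welch-based coherence from SciPy on toy data; the sampling rate, window length, and beta-band limits are illustrative assumptions, not parameters from the study.

```python
# Sketch: spike-field coherence between a toy LFP and a toy spike train
# whose firing probability follows the same 20 Hz rhythm.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                   # Hz, assumed sampling rate
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(2)

# Toy LFP with a 20 Hz (beta-range) component plus noise.
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
# Toy spike train modulated by the same rhythm (binary, 1 ms bins).
rate = 20 * (1 + 0.8 * np.sin(2 * np.pi * 20 * t))        # spikes/s
spikes = (rng.random(t.size) < rate / fs).astype(float)

f, Cxy = coherence(lfp, spikes, fs=fs, nperseg=512)
beta = (f >= 13) & (f <= 30)
print(f"mean beta-band spike-field coherence: {Cxy[beta].mean():.2f}")
```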