
    Haptic foundations for visually guided action

    Prehension is proposed to consist of two movements mediated by separate neural pathways – a Reach transports the hand to the target while a Grasp shapes the hand for target purchase – but under vision the two movements appear as a seamless act. The purpose of the present thesis was to examine prehension under conditions of limited visual feedback. Removing vision in adults caused prehension to decompose into an open-handed Reach followed by a haptically mediated Grasp, suggesting that haptics also access the Reach and Grasp pathways. The finding that Grasp, but not Reach, formation is equally accurate under haptic versus visual control indicates that the sensory control of the two movements can be differentiated. Finally, young infants perform haptic Reach and Grasp movements before integrating them under vision. These results suggest that the Reach and the Grasp, with their requisite neural pathways, originate under haptic control with secondary access by vision.

    Different evolutionary origins for the reach and the grasp: an explanation for dual visuomotor channels in primate parietofrontal cortex

    The Dual Visuomotor Channel Theory proposes that manual prehension consists of two temporally integrated movements, each subserved by distinct visuomotor pathways in occipitoparietofrontal cortex. The Reach is mediated by a dorsomedial pathway and transports the hand in relation to the target’s extrinsic properties (i.e., location and orientation). The Grasp is mediated by a dorsolateral pathway and opens, preshapes, and closes the hand in relation to the target’s intrinsic properties (i.e., size and shape). Here, neuropsychological, developmental, and comparative evidence is reviewed to show that the Reach and the Grasp have different evolutionary origins. First, the removal or degradation of vision causes prehension to decompose into its constituent Reach and Grasp components, which are then executed in sequence or isolation. Similar decomposition occurs in optic ataxic patients following cortical injury to the Reach and the Grasp pathways and after corticospinal tract lesions in non-human primates. Second, early non-visual PreReach and PreGrasp movements develop into mature Reach and Grasp movements but are only integrated under visual control after a prolonged developmental period. Third, comparative studies reveal many similarities between stepping movements and the Reach and between food handling movements and the Grasp, suggesting that the Reach and the Grasp are derived from different evolutionary antecedents. The evidence is discussed in relation to the ideas that dual visuomotor channels in primate parietofrontal cortex emerged as a result of distinct evolutionary origins for the Reach and the Grasp; that foveated vision in primates serves to integrate the Reach and the Grasp into a single prehensile act; and, that flexible recombination of discrete Reach and Grasp movements under various forms of sensory and cognitive control can produce adaptive behavior.

    Neural correlates of grasping

    Prehension, the capacity to reach and grasp objects, comprises two main components: reaching, i.e., moving the hand towards an object, and grasping, i.e., shaping the hand with respect to the object’s properties. Knowledge of this topic has advanced greatly in recent years, dramatically changing our view on how prehension is represented within the dorsal stream. While our understanding of the various nodes coding the grasp component is rapidly progressing, little is known of the integration between grasping and reaching. With this Mini Review we aim to provide an up-to-date overview of the recent developments on the coding of prehension. We will start with a description of the regions coding various aspects of grasping in humans and monkeys, delineating where grasping might be integrated with reaching. To gain insights into the causal role of these nodes in the coding of prehension, we will link this functional description to lesion studies. Finally, we will discuss future directions that might be promising to unveil new insights on the coding of prehension movements.

    Activity in ventral premotor cortex is modulated by vision of own hand in action

    Parietal and premotor cortices of the macaque monkey contain distinct populations of neurons which, in addition to their motor discharge, are also activated by visual stimulation. Among these visuomotor neurons, a population of grasping neurons located in the anterior intraparietal area (AIP) shows discharge modulation when the monkey’s own hand is visible during object grasping. Given the dense connections between AIP and inferior frontal regions, we aimed to investigate whether two hand-related frontal areas, ventral premotor area F5 and primary motor cortex (area F1), contain neurons with similar properties. Two macaques were involved in a grasping task executed in various light/dark conditions in which the to-be-grasped object was kept visible by a dim retro-illumination. Approximately 62% of F5 and 55% of F1 motor neurons showed light/dark modulations. To better isolate the effect of hand-related visual input, we introduced two further conditions characterized by kinematic features similar to the dark condition. The scene was briefly illuminated (i) during hand preshaping (pre-touch flash, PT-flash) and (ii) at hand-object contact (touch flash, T-flash). Approximately 48% of F5 and 44% of F1 motor neurons showed a flash-related modulation. Considering flash-modulated neurons in the two flash conditions, ∼40% from F5 and ∼52% from F1 showed stronger activity in PT- than T-flash (PT-flash-dominant), whereas ∼60% from F5 and ∼48% from F1 showed stronger activity in T- than PT-flash (T-flash-dominant). Furthermore, F5, but not F1, flash-dominant neurons were characterized by a higher peak and mean discharge in the preferred flash condition as compared to light and dark conditions. Still considering F5, the distribution of the time of peak discharge was similar in light and preferred flash conditions. 
This study shows that the frontal cortex contains neurons, previously classified as motor neurons, which are sensitive to the observation of meaningful phases of the animal’s own grasping action. We conclude by discussing the possible functional role of these populations.

    Decoding motor intentions from human brain activity

    “You read my mind.” Although this simple everyday expression implies ‘knowledge or understanding’ of another’s thinking, true ‘mind-reading’ capabilities implicitly seem constrained to the domains of Hollywood and science-fiction. In the field of sensorimotor neuroscience, however, significant progress in this area has come from mapping characteristic changes in brain activity that occur prior to an action being initiated. For instance, invasive neural recordings in non-human primates have significantly increased our understanding of how highly cognitive and abstract processes like intentions and decisions are represented in the brain by showing that it is possible to decode or ‘predict’ upcoming sensorimotor behaviors (e.g., movements of the arm/eyes) based on preceding changes in the neuronal output of parieto-frontal cortex, a network of areas critical for motor planning. In the human brain, however, a successful counterpart for this predictive ability and a similar detailed understanding of intention-related signals in parieto-frontal cortex have remained largely unattainable due to the limitations of non-invasive brain mapping techniques like functional magnetic resonance imaging (fMRI). Knowing how and where in the human brain intentions or plans for action are coded is not only important for understanding the neuroanatomical organization and cortical mechanisms that govern goal-directed behaviours like reaching, grasping and looking – movements critical to our interactions with the world – but also for understanding homologies between human and non-human primate brain areas, allowing the transfer of neural findings between species. In the current thesis, I employed multi-voxel pattern analysis (MVPA), a new fMRI technique that has made it possible to examine the coding of neural information at a more fine-grained level than that previously available. 
I used fMRI MVPA to examine how and where movement intentions are coded in human parieto-frontal cortex and specifically asked the question: What types of predictive information about a subject's upcoming movement can be decoded from preceding changes in neural activity? Project 1 first used fMRI MVPA to determine, largely as a proof-of-concept, whether or not specific object-directed hand actions (grasps and reaches) could be predicted from intention-related brain activity patterns. Next, Project 2 examined whether effector-specific (arm vs. eye) movement plans along with their intended directions (left vs. right) could also be decoded prior to movement. Lastly, Project 3 examined exactly where in the human brain higher-level movement goals were represented independently from how those goals were to be implemented. To this aim, Project 3 had subjects either grasp or reach toward an object (two different motor goals) using either their hand or a novel tool (with kinematics opposite to those of the hand). In this way, the goal of the action (grasping vs. reaching) could be maintained across actions, but the way in which those actions were kinematically achieved changed in accordance with the effector (hand or tool). All three projects employed a similar event-related delayed-movement fMRI paradigm that separated in time planning and execution neural responses, allowing us to isolate the preparatory patterns of brain activity that form prior to movement. Project 1 found that the plan-related activity patterns in several parieto-frontal brain regions were predictive of different upcoming hand movements (grasps vs. reaches). Moreover, we found that several parieto-frontal brain regions, similar to that only previously demonstrated in non-human primates, could actually be characterized according to the types of movements they can decode. 
Project 2 found a variety of functional subdivisions: some parieto-frontal areas discriminated movement plans for the different reach directions, some for the different eye movement directions, and a few areas accurately predicted upcoming directional movements for both the hand and eye. This latter finding demonstrates – similar to that shown previously in non-human primates – that some brain areas code for the end motor goal (i.e., target location) independent of effector used. Project 3 identified regions that decoded upcoming hand actions only, upcoming tool actions only, and rather interestingly, areas that predicted actions with both effectors (hand and tool). Notably, some of these latter areas were found to represent the higher-level goals of the movement (grasping vs. reaching) instead of the specific lower-level kinematics (hand vs. tool) necessary to implement those goals. Taken together, these findings offer substantial new insights into the types of intention-related signals contained in human brain activity patterns and specify a hierarchical neural architecture spanning parieto-frontal cortex that guides the construction of complex object-directed behaviors.

    The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics

    The topic of vision-based grasping is being widely studied using various techniques and with different goals in humans and in other primates. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved in them is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

    Action Intention Modulates the Activity Pattern in Early Visual Areas

    The activity pattern in the early visual cortex (EVC) can be used to predict upcoming actions as it is functionally connected to higher-order motor areas. However, the mechanism by which the EVC enhances action-relevant features is unclear. We explored this using fMRI. Participants performed Align or Open Hand movements to two oriented objects. We localized the calcarine sulcus, corresponding to the periphery, and the occipital pole, corresponding to the fovea. During planning, univariate analysis did not reveal significant results, so we used multi-voxel pattern analysis (MVPA) to decode action type and object orientation. Though objects were located in the periphery, we found significant decoding accuracy for orientation in an action-dependent manner in the occipital pole and action network areas. We established the functional connectivity between the EVC and somatomotor areas during planning using psychophysiological interaction (PPI) analysis. Taken together, our results show object orientation is modulated by action preparation.
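    The PPI analysis mentioned above tests whether the coupling between a seed region (here, the EVC) and a target region changes with task context. At its core it is a regression with an interaction term; the sketch below simulates that model with hypothetical signals, not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # timepoints (hypothetical)

# Psychological regressor: task on/off. Physiological regressor: seed (EVC) signal.
task = np.repeat([0.0, 1.0], n // 2)
seed = rng.standard_normal(n)
ppi = seed * (task - task.mean())  # interaction term with a mean-centred task

# Simulate a target region whose coupling with the seed strengthens during the task.
target = 1.0 * seed + 1.0 * ppi + 0.1 * rng.standard_normal(n)

# GLM: target ~ intercept + task + seed + ppi
X = np.column_stack([np.ones(n), task, seed, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
# A reliably nonzero weight on the interaction term (beta[3]) indicates
# task-dependent functional connectivity between seed and target regions.
```

    Including the main effects of task and seed in the model ensures that the interaction weight reflects a change in connectivity, not mere co-activation.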

    Design of a cybernetic hand for perception and action

    Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided into two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback, and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.
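    The two-layer control scheme described in this abstract can be sketched in a few lines: a high-level layer interprets user intent (grasp type plus a force level) and a low-level layer maps it onto the fingers involved in that grasp. Every name, finger assignment, and force number below is an illustrative assumption, not the CyberHand's actual firmware.

```python
from dataclasses import dataclass

# The four grasps named in the paper as fundamental for daily living.
GRASPS = {"cylindrical", "spherical", "tridigital", "lateral"}

@dataclass
class Intent:
    grasp: str          # grasp selection, decoded from a few efferent channels
    force_level: float  # desired total grip force, normalized to 0..1

def high_level(grasp: str, force_level: float) -> Intent:
    """High-level layer: validate and interpret the user's intention."""
    if grasp not in GRASPS:
        raise ValueError(f"unknown grasp: {grasp}")
    return Intent(grasp, max(0.0, min(1.0, force_level)))

def low_level(intent: Intent, max_force_n: float = 20.0) -> dict:
    """Low-level layer: actuate the selected grasp.

    Hypothetical finger sets per grasp; force is split evenly here, standing
    in for the distribution the underactuated mechanics performs passively.
    """
    fingers = {
        "cylindrical": ["thumb", "index", "middle", "ring", "little"],
        "spherical":   ["thumb", "index", "middle", "ring", "little"],
        "tridigital":  ["thumb", "index", "middle"],
        "lateral":     ["thumb", "index"],
    }[intent.grasp]
    per_finger = intent.force_level * max_force_n / len(fingers)
    return {f: per_finger for f in fingers}

cmd = low_level(high_level("tridigital", 0.5))
```

    Keeping the layers separate mirrors the paper's design rationale: the user interface only has to convey a grasp choice and a force level, while the mechanics absorb the remaining degrees of freedom.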

    Independent development of the reach and the grasp in spontaneous self-touching by human infants in the first 6 months

    The Dual Visuomotor Channel Theory proposes that visually guided reaching is a composite of movements, a Reach that advances the hand to contact the target and a Grasp that shapes the digits for target purchase. The theory is supported by biometric analyses of adult reaching, evolutionary contrasts, and differential developmental patterns for the Reach and the Grasp in visually guided reaching in human infants. The present ethological study asked whether there is evidence for a dissociated development for the Reach and the Grasp in nonvisual hand use in very early infancy. The study documents a rich array of spontaneous self-touching behavior in infants during the first 6 months of life and subjected the Reach movements to an analysis in relation to body target, contact type, and Grasp. Video recordings were made of resting alert infants biweekly from birth to 6 months. In younger infants, self-touching targets included the head and trunk. As infants aged, targets became more caudal and included the hips, then legs, and eventually the feet. In younger infants hand contact was mainly made with the dorsum of the hand, but as infants aged, contacts included palmar contacts and eventually grasp and manipulation contacts with the body and clothes. The relative incidence of caudal contacts and palmar contacts increased concurrently and were significantly correlated throughout the period of study. Developmental increases in self-grasping contacts occurred a few weeks after the increase in caudal and palmar contacts. The behavioral and temporal pattern of these spontaneous self-touching movements suggest that the Reach, in which the hand extends to make a palmar self-contact, and the Grasp, in which the digits close and make manipulatory movements, have partially independent developmental profiles. 
The results additionally suggest that self-touching behavior is an important developmental phase that allows the coordination of the Reach and the Grasp prior to and concurrent with their use under visual guidance.