
    Intentional maps in posterior parietal cortex

    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.
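    As a rough illustration of the coordinate transformations discussed above, the sketch below remaps a target from eye-centered to body-centered coordinates by combining it with eye and head position signals. The purely additive vector model and the function names are simplifying assumptions for illustration only; PPC circuits are thought to implement such remappings through distributed gain-field representations rather than explicit vector sums.

import numpy as np

def eye_to_body_centered(target_retinal, eye_in_head, head_in_body):
    # Deliberately simplified additive remapping (illustrative assumption):
    # body-centered target = retinal target + eye-in-head + head-in-body offsets.
    return (np.asarray(target_retinal)
            + np.asarray(eye_in_head)
            + np.asarray(head_in_body))

# Example: a target 10 cm right of fixation, gaze 5 cm left of straight ahead, head centred.
target_body = eye_to_body_centered([0.10, 0.0, 0.50], [-0.05, 0.0, 0.0], [0.0, 0.0, 0.0])
print(target_body)  # -> [0.05 0.   0.5 ]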

    Plastic Representation of the Reachable Space for a Humanoid Robot

    Reaching a target object requires accurate estimation of the object's spatial position and its further transformation into a suitable arm-motor command. In this paper, we propose a framework that provides a robot with the capacity to represent its reachable space in an adaptive way. The location of the target is represented implicitly by both the gaze direction and the angles of the arm joints. Two paired neural networks are used to compute the direct and inverse transformations between the arm position and the head position. These networks allow reaching the target either through a ballistic movement or through visually-guided actions. Thanks to the latter skill, the robot can adapt its sensorimotor transformations so as to reflect changes in its body configuration. The proposed framework was implemented on the NAO humanoid robot, and our experimental results provide evidence of its adaptive capabilities.
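    A minimal sketch of the paired direct/inverse mappings described above is given below, using generic multilayer-perceptron regressors trained on synthetic motor-babbling data. The two-joint planar arm, the toy gaze code, and the network sizes are illustrative assumptions, not the implementation used on the NAO robot.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def forward_kinematics(joints, l1=0.15, l2=0.12):
    # Hand position of a planar 2-joint arm (stand-in for the real robot model).
    q1, q2 = joints[..., 0], joints[..., 1]
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=-1)

def gaze_towards(point):
    # Toy "gaze" code for a fixated point: azimuth angle plus viewing distance.
    return np.stack([np.arctan2(point[..., 1], point[..., 0]),
                     np.linalg.norm(point, axis=-1)], axis=-1)

# Motor babbling: random arm postures paired with the gaze that fixates the hand.
arm = rng.uniform([-1.0, 0.2], [1.0, 2.5], size=(2000, 2))
gaze = gaze_towards(forward_kinematics(arm))

direct = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(arm, gaze)   # arm -> gaze
inverse = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(gaze, arm)  # gaze -> arm

# Ballistic reach: look at the target, then query the inverse model once.
target_gaze = gaze_towards(np.array([[0.18, 0.10]]))
print("arm command:", inverse.predict(target_gaze))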

    Neuropsychological and behavioral studies on object grasping in humans with and without vision

    Sensorimotor transformations are used to translate sensory information about intrinsic properties of objects (i.e., size, shape, orientation) into motor commands for appropriate hand-object interaction. Hence, the direct result of the sensorimotor transformation for a reach-to-grasp action is hand kinematics (hand shaping) that fit the object size. We assembled and evaluated a sensor-based glove to measure finger flexion during reaching of differently sized cylinders. Once the tool's proper functioning was ensured, we adopted the glove in two studies dealing with grasping with and without vision. The first study aimed to causally draw a functional map of PMC for visually-based grasping. Specifically, online TMS was applied over a grid covering the whole precentral gyrus while subjects grasped three differently sized cylinders. Output from our sensor glove was analyzed with a hypothesis-independent approach using classification algorithms. Results from the classifiers convincingly suggested a multifocal representation of visually-based grasping in human PMC, involving the ventral PMC and, for the first time in humans, the supplementary motor area. The second study aimed to establish whether gaze direction modulates hand shaping during haptically-based reaching as it does during visually-based reaching. Participants haptically explored and then grasped an object of three possible sizes aligned with the body midline while looking in the direction of the object or laterally to it. Results showed that gaze direction asymmetrically affected finger flexion during haptically-based reaching. Despite this asymmetrical effect, the investigation provided evidence for retinotopic coding of haptically-explored objects.
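    The hypothesis-independent decoding approach mentioned above can be sketched as follows: finger-flexion traces from the sensor glove are reduced to simple per-trial features and fed to a generic classifier with cross-validation. The synthetic data layout, the feature choice, and the linear SVM are assumptions for illustration, not the exact pipeline used in the studies.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_sensors, n_samples = 90, 5, 100      # trials x glove sensors x time points
sizes = np.repeat([0, 1, 2], n_trials // 3)      # small / medium / large cylinder

# Synthetic flexion traces: larger objects -> less finger flexion at contact.
traces = rng.normal(0.0, 0.1, (n_trials, n_sensors, n_samples))
traces += (2 - sizes)[:, None, None] * np.linspace(0.0, 0.3, n_samples)

# Simple per-trial features: mean and peak flexion of each sensor.
features = np.hstack([traces.mean(axis=2), traces.max(axis=2)])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, features, sizes, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))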

    Influence of Gaze Position on Grasp Parameters For Reaches to Visible and Remembered Stimuli

    In order to pick up or manipulate a seen object, one must use visual signals to aim and transport the hand to the object’s location (reach) and to configure the digits to the shape of the object (grasp). It has been shown that reach and grasp are controlled by separate neural pathways. In real-world conditions, however, all of these signals (gaze, reach, grasp) must interact to provide accurate eye-hand coordination. The interactions between gaze, reach, and grasp parameters have not been comprehensively studied in humans. The purpose of this study was to investigate 1) the effect of gaze and target positions on grasp location, amplitude, and orientation, and 2) the influence of visual feedback of the hand and target on the final grasp components and on the spatial deviations associated with gaze direction and target position. Seven subjects reached to grasp a rectangular “virtual” target presented at three orientations, at three locations, and with three gaze fixation positions during open- and closed-loop conditions. Participants showed gaze- and target-dependent deviations in grasp parameters that could not be predicted from previous studies. Our results showed that both reach- and grasp-related deviations were affected by stimulus position. The interaction effects of gaze and reach position revealed complex mechanisms, and their impacts differed for each grasp parameter. The impact of gaze direction on grasp deviation depended on target position in space, especially for grasp location and amplitude. Gaze direction had little impact on grasp orientation. Visual feedback about the hand and target modulated the reach- and gaze-related impacts. The results suggest that the brain uses both control-signal interactions and sensorimotor strategies to plan and control reach-and-grasp movements.
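    For reference, the grasp parameters analysed above (grasp location, amplitude, and orientation) are commonly derived from the thumb and index fingertip positions at contact, as in the sketch below. The midpoint, aperture, and grasp-axis definitions are standard conventions assumed here, not taken from this study.

import numpy as np

def grasp_parameters(thumb_xyz, index_xyz):
    # Grasp location, amplitude (aperture) and orientation in the x-y plane.
    thumb = np.asarray(thumb_xyz, dtype=float)
    index = np.asarray(index_xyz, dtype=float)
    location = (thumb + index) / 2.0                 # midpoint between the digits
    amplitude = np.linalg.norm(index - thumb)        # grip aperture
    dx, dy = (index - thumb)[:2]
    orientation = np.degrees(np.arctan2(dy, dx))     # angle of the grasp axis
    return location, amplitude, orientation

loc, amp, ori = grasp_parameters([0.30, 0.10, 0.02], [0.33, 0.16, 0.02])
print(loc, round(amp, 3), round(ori, 1))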

    The role of the posterior parietal cortex in cognitive-motor integration

    "When interacting with an object within the environment, one must combine visual information with the felt limb position (i.e. proprioception) in order compute an appropriate coordinated muscle plan for accurate motor control. Amongst the vast reciprocally connected parieto-frontal connections responsible for guiding a limb throughout space, the posterior parietal cortex (PPC) remains a front-runner as a crucial node within this network. Our brain is primed to reach directly towards a viewed object, a situation that has been termed ""standard"". Such direct eye-hand coordination is common across species and is crucial for basic survival. Humans, however, have developed the capacity for tool-use and thus have learned to interact indirectly with an object. In such ""non-standard"" situations, the directions of gaze and arm movement are spatially decoupled and rely on both the implementation of a cognitive rule and online feedback of the decoupled limb. The studies included within this dissertation were designed to further characterize the role of the PPC in different types of visually-guided reaching which require one to think and to act simultaneously (i.e. cognitive-motor integration). To address the relative contribution of different cortical networks responsible for cognitive-motor integration, we tested three patients with optic ataxia (OA; two unilateral - first study, and one bilateral -second study) as well as healthy participants during a cognitively-demanding dual task (third study) on a series of visually-guided reaching tasks each requiring a relative weighting between explicit cognitive control and implicit online control of the spatially decoupled limb. We found that the eye and hand movement performance during decoupled reaching was the most compromised in OA during situations relying on sensorimotor recalibration, and the most compromised in healthy participants during a dual task relying on strategic control. Taken together, these data presented in this dissertation provide further evidence for the existence of alternate task-dependent neural pathways for cognitive-motor integration.

    Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa

    Understanding the principles involved in visually-based coordinated motor control is one of the most fundamental and most intriguing research problems across a number of areas, including psychology, neuroscience, computer vision and robotics. Not much is known about the computational functions that the central nervous system performs to support visually-driven reaching and grasping. Additionally, in spite of several decades of advances in the field, the abilities of humanoids to perform similar tasks remain modest, particularly when they must operate in unstructured and dynamically changing environments. More specifically, our first focus is understanding the principles involved in human visuomotor coordination. Few behavioral studies have considered visuomotor coordination in natural, unrestricted, head-free movements in complex scenarios such as obstacle avoidance. To fill this gap, we provide an assessment of visuomotor coordination when humans perform prehensile tasks with obstacle avoidance, an issue that has received far less attention. Namely, we quantify the relationships between the gaze and arm-hand systems, so as to inform robotic models, and we investigate how the presence of an obstacle modulates this pattern of correlations. Second, to complement these observations, we provide a robotic model of visuomotor coordination, with and without the presence of obstacles in the workspace. The parameters of the controller are estimated solely from the human motion capture data of our human study. This controller has a number of interesting properties. It provides an efficient way to control the gaze, arm and hand movements in a stable and coordinated manner. When facing perturbations while reaching and grasping, our controller adapts its behavior almost instantly, while preserving coordination between the gaze, arm, and hand. In the third part of the thesis, we study the neuroscientific literature on primates. Here we stress the view that the cerebellum uses the cortical reference frame representation. By taking this representation into account, the cerebellum performs closed-loop programming of multi-joint movements and movement synchronization between the eye-head system, arm and hand. Based on this investigation, we propose a functional architecture of the cerebellar-cortical involvement. We derive a number of improvements to our visuomotor controller for obstacle-free reaching and grasping. Because this model is devised by carefully taking into account the neuroscientific evidence, we are able to provide a number of testable predictions about the functions of the central nervous system in visuomotor coordination. Finally, we tackle the flow of visuomotor coordination in the direction from the arm-hand system to the visual system. We develop two models of motor-primed attention for humanoid robots. Motor-priming of attention is a mechanism that prioritizes visual processing with respect to motor-relevant parts of the visual field. Recent studies in humans and monkeys have shown that visual attention supporting natural behavior is not defined exclusively in terms of visual saliency in color or texture cues; rather, the reachable space and motor plans are the predominant source of this attentional modulation. Here, we show that motor-priming of visual attention can be used to efficiently distribute a robot's computational resources devoted to visual processing.
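    Motor-priming of visual attention, as described above, can be sketched as a re-weighting of a bottom-up saliency map by how reachable each image location is, so that processing is prioritised in the motor-relevant part of the visual field. The Gaussian reachability model and the blending weight below are illustrative assumptions, not the thesis's actual models.

import numpy as np

def motor_primed_saliency(saliency, reach_centre, reach_sigma, weight=0.7):
    # Blend bottom-up saliency with a reachability prior over image coordinates.
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - reach_centre[0]) ** 2 + (ys - reach_centre[1]) ** 2
    reachability = np.exp(-d2 / (2.0 * reach_sigma ** 2))   # ~1 near the arm's workspace
    primed = (1 - weight) * saliency + weight * saliency * reachability
    return primed / (primed.max() + 1e-9)

saliency = np.random.default_rng(2).random((120, 160))
primed = motor_primed_saliency(saliency, reach_centre=(80, 90), reach_sigma=30)
print("next fixation (x, y):", np.unravel_index(primed.argmax(), primed.shape)[::-1])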

    The Role of the Caudal Superior Parietal Lobule in Updating Hand Location in Peripheral Vision: Further Evidence from Optic Ataxia

    Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we test two unilateral OA patients in conditions of 1) a change in the physical location of the visual stimulus relative to the plane of the limb movement, 2) a cue that signals a required limb movement 180° opposite to the cued visual target location, or 3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit is not observed as the well-documented field-dependent misreach. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled when having to guide centrifugal movements in peripheral vision, even when they were instructed by visual stimuli that could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e., from peripheral visual, proprioceptive, or predictive information.

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities in isolation, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching for target objects, which can work separately or cooperate to support more structured and effective behaviors.
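    The sketch below illustrates, in a highly simplified form, how the separately developed abilities mentioned above (recognizing, gazing at, and reaching for a target) might be chained within one framework. The module interfaces and the dummy implementations are illustrative assumptions, not the proposed cognitive architecture.

import numpy as np

class RecognitionModule:
    def locate(self, image):
        # Return the target's pixel coordinates (here: simply the brightest pixel).
        return np.unravel_index(np.argmax(image), image.shape)

class GazeModule:
    def fixate(self, pixel, px_per_rad=300.0):
        # Convert image coordinates into head pan/tilt angles (toy camera model).
        y, x = pixel
        return np.array([x / px_per_rad, y / px_per_rad])

class ReachModule:
    def command(self, gaze_angles):
        # Map the gaze that fixates the target to arm joint angles (toy linear map).
        return np.array([0.8, -0.4]) * gaze_angles + np.array([0.1, 1.2])

image = np.random.default_rng(3).random((48, 64))
pixel = RecognitionModule().locate(image)
gaze = GazeModule().fixate(pixel)
arm = ReachModule().command(gaze)
print(pixel, gaze.round(3), arm.round(3))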