
    The causal role of three frontal cortical areas in grasping

    Efficient object grasping requires the continuous control of arm and hand movements based on visual information. Previous studies have identified a network of parietal and frontal areas that is crucial for the visual control of prehension movements. Electrical microstimulation of 3D shape-selective clusters in AIP during functional magnetic resonance imaging (fMRI) activates areas F5a and 45B, suggesting that these frontal areas may represent important downstream areas for object processing during grasping, but the role of areas F5a and 45B in grasping is unknown. To assess their causal role in the frontal grasping network, we reversibly inactivated 45B, F5a, and F5p during visually guided grasping in macaque monkeys. First, we recorded single-neuron activity in 45B, F5a, and F5p to identify sites with object responses during grasping. Then, we injected muscimol or saline to measure the grasping deficit induced by the temporary disruption of each of these three nodes in the grasping network. The inactivation of each of the three areas resulted in a significant increase in grasping time in both animals, with the strongest effect observed in area F5p. These results not only confirm a clear involvement of F5p, but also indicate causal contributions of areas F5a and 45B in visually guided object grasping.
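    A minimal sketch of how such a grasping-time effect could be quantified, assuming hypothetical per-trial data and a one-sided nonparametric test (neither of which is taken from the paper):

    # Minimal sketch (not the authors' analysis): compare per-trial grasping
    # times between saline (control) and muscimol (inactivation) sessions.
    # The arrays below are hypothetical placeholder values.
    import numpy as np
    from scipy import stats

    saline_ms   = np.array([420, 435, 410, 450, 428, 441])   # grasping time per trial (ms)
    muscimol_ms = np.array([515, 540, 498, 560, 530, 552])

    u, p = stats.mannwhitneyu(muscimol_ms, saline_ms, alternative="greater")
    print(f"median increase: {np.median(muscimol_ms) - np.median(saline_ms):.0f} ms, p = {p:.3f}")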

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate palmar reflex) and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
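    A minimal sketch of a PPS-graph-style data structure as described above (nodes are arm configurations, edges are safe movements, paths are safe trajectories); the class name, node fields, and breadth-first search are illustrative assumptions, not the authors' implementation:

    # Illustrative sketch of a peripersonal-space (PPS) graph: nodes are sampled
    # arm configurations, edges connect configurations reachable by a safe
    # motion, and a path between two nodes is a safe trajectory between poses.
    from collections import deque

    class PPSGraph:
        def __init__(self):
            self.nodes = {}   # node id -> (joint angles, observed hand position)
            self.edges = {}   # node id -> set of neighbouring node ids

        def add_node(self, node_id, joint_angles, hand_position):
            self.nodes[node_id] = (joint_angles, hand_position)
            self.edges.setdefault(node_id, set())

        def add_safe_edge(self, a, b):
            self.edges[a].add(b)
            self.edges[b].add(a)

        def safe_trajectory(self, start, goal):
            """Breadth-first search for a safe node-to-node trajectory."""
            frontier, parents = deque([start]), {start: None}
            while frontier:
                current = frontier.popleft()
                if current == goal:
                    path = []
                    while current is not None:
                        path.append(current)
                        current = parents[current]
                    return path[::-1]
                for nxt in self.edges[current]:
                    if nxt not in parents:
                        parents[nxt] = current
                        frontier.append(nxt)
            return None   # no known safe trajectory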

    Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands

    Grasp affordances in robotics represent different ways to grasp an object, involving a variety of factors from vision to hand control. A model of grasp affordances that can scale across different objects, features, and domains is needed to provide robots with advanced manipulation skills. Existing frameworks, however, can be difficult to extend towards a more general and domain-independent approach. This work is a first step towards a modular implementation of grasp affordances that can be separated into two stages: approach to grasp and grasp execution. In this study, human experiments of approaching to grasp are analysed, and object-independent patterns of motion are defined and modelled analytically from the data. Human subjects performed a specific action (hammering) using objects of different geometry, size, and weight. Motion-capture data describing the hand-object approach distance were used for the analysis. The results showed that the approach to grasp can be structured in four distinct phases that are best represented by non-linear models, independent of the objects being handled. This suggests that approach-to-grasp patterns follow an intentionally planned control strategy rather than a reactive execution.
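    As an illustration of modelling one approach phase with a non-linear function, the sketch below fits an exponential decay to hand-object distance; the functional form, the single-phase segmentation, and the data are hypothetical placeholders, not the paper's fitted models:

    # Illustrative sketch (not the paper's model): fit a non-linear model to
    # hand-object distance within one approach phase.
    import numpy as np
    from scipy.optimize import curve_fit

    def approach_model(t, d0, k, d_inf):
        """Exponential decay of hand-object distance towards a residual offset."""
        return d_inf + (d0 - d_inf) * np.exp(-k * t)

    t = np.linspace(0.0, 1.0, 50)   # normalised time within the phase
    d = approach_model(t, 0.30, 4.0, 0.02) + np.random.normal(0, 0.002, t.size)

    params, _ = curve_fit(approach_model, t, d, p0=[0.3, 1.0, 0.0])
    print("fitted d0, k, d_inf:", params)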

    Grasping and Assembling with Modular Robots

    A wide variety of problems, from manufacturing to disaster response and space exploration, can benefit from robotic systems that can firmly grasp objects or assemble various structures, particularly in difficult, dangerous environments. In this thesis, we study two problems, robotic grasping and assembly, with a modular robotic approach that addresses both with versatility and robustness. First, this thesis develops a theoretical framework for grasping objects with customized effectors that have curved contact surfaces, with applications to modular robots. We present a collection of grasps and cages that can effectively restrain the mobility of a wide range of objects, including polyhedra. Each of the grasps or cages is formed by at most three effectors. A stable grasp is obtained by simple motion planning and control. Based on this theory, we create a robotic system comprised of a modular manipulator equipped with customized end-effectors and a software suite for planning and control of the manipulator. Second, this thesis presents efficient assembly planning algorithms for constructing planar target structures collectively with a collection of homogeneous mobile modular robots. The algorithms are provably correct and address arbitrary target structures that may include internal holes. The resultant assembly plan supports parallel assembly and guarantees easy accessibility, in the sense that a robot does not have to pass through a narrow gap while approaching its target position. Finally, we extend the algorithms to address various symmetric patterns formed by a collection of congruent rectangles on the plane. The basic ideas in this thesis have broad applications to manufacturing (restraint), humanitarian missions (forming airfields on the high seas), and service robotics (grasping and manipulation).
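    The sketch below conveys only the flavour of planar assembly ordering on a grid of target cells, growing the structure outward from a seed so that every module is placed adjacent to the already-built part; this simplification ignores the accessibility and parallelism guarantees and is not the thesis's provably correct algorithm:

    # Illustrative simplification: order the cells of a planar target structure
    # so each module is placed adjacent to the already-built part.
    from collections import deque

    def assembly_order(target_cells, seed):
        """target_cells: set of (x, y) grid cells; seed: starting cell in the set."""
        order, visited, frontier = [], {seed}, deque([seed])
        while frontier:
            x, y = frontier.popleft()
            order.append((x, y))
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in target_cells and nxt not in visited:
                    visited.add(nxt)
                    frontier.append(nxt)
        return order

    # Example: an L-shaped structure assembled from its corner cell.
    cells = {(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)}
    print(assembly_order(cells, (0, 0)))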

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty of decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation of novel objects in densely cluttered environments.
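    A minimal sketch of the arbitration idea, blending a noisy user command with an autonomous command toward an inferred goal; the linear blending rule, go-to-goal policy, and confidence weighting are illustrative assumptions rather than the paper's exact scheme:

    # Illustrative shared-control arbitration: blend the user's command with an
    # autonomous command towards the inferred goal, weighted by how confident
    # the intent inference is.
    import numpy as np

    def arbitrate(user_cmd, goal, end_effector_pos, confidence, assist_gain=1.0):
        """Linear arbitration: u = (1 - a) * user + a * autonomous, with a in [0, 1]."""
        autonomous_cmd = goal - end_effector_pos            # simple go-to-goal policy
        alpha = np.clip(assist_gain * confidence, 0.0, 1.0)
        return (1.0 - alpha) * np.asarray(user_cmd) + alpha * autonomous_cmd

    # Example: noisy 3-D velocity command, 80% confidence in the inferred goal.
    blended = arbitrate(user_cmd=[0.10, -0.02, 0.05],
                        goal=np.array([0.4, 0.1, 0.3]),
                        end_effector_pos=np.array([0.2, 0.0, 0.25]),
                        confidence=0.8)
    print(blended)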

    A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots

    Models and methods are presented that enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles in human grasp behavior are studied. The findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed that enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.

    Left and right prefrontal routes to action comprehension

    Successful action comprehension requires the integration of motor information and semantic cues about objects in context. Previous evidence suggests that while motor features are dorsally encoded in the fronto-parietal action observation network (AON), semantic features are ventrally processed in temporal structures. Importantly, these dorsal and ventral routes seem to be preferentially tuned to low (LSF) and high (HSF) spatial frequencies, respectively. Recently, we proposed a model of action comprehension in which we hypothesized an additional route to action understanding whereby coarse LSF information about objects in context is projected to the dorsal AON via the prefrontal cortex (PFC), providing a prediction signal of the most likely intention afforded by them. Yet this model awaits experimental testing. To this end, we used a perturb-and-measure continuous theta burst stimulation (cTBS) approach, selectively disrupting neural activity in the left and right PFC and then evaluating participants' ability to recognize filtered action stimuli containing only HSF or LSF. We found that stimulation over the PFC triggered different spatial-frequency modulations depending on lateralization: left-cTBS and right-cTBS led to poorer performance on HSF and LSF action stimuli, respectively. Our findings suggest that the left and right PFC exploit distinct spatial frequencies to support action comprehension, providing evidence for multiple routes to social perception in humans. This work was supported by grants from the European Commission (MCSA-H2020-NBUCA; Grant 656881) to L.A., from the Italian Ministry of University and Research (PRIN 2017; Protocol 2017N7WCLP) to C.U., and from the Italian Ministry of Health (Ricerca Corrente 2022, Scientific Institute, IRCCS E. Medea) to A.F.
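    For context, spatial-frequency-filtered stimuli of the kind described above can be produced by low-pass and high-pass filtering; the sketch below uses a Gaussian filter with an arbitrary cutoff and is not the study's stimulus pipeline:

    # Illustrative sketch: split an image into low (LSF) and high (HSF)
    # spatial-frequency versions with a Gaussian filter.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(image, sigma=8.0):
        """Return (LSF, HSF) versions of a greyscale image as float arrays."""
        image = image.astype(float)
        lsf = gaussian_filter(image, sigma=sigma)   # low-pass: keeps coarse structure
        hsf = image - lsf                           # residual: fine detail and edges
        return lsf, hsf

    # Example on a random placeholder "frame".
    frame = np.random.rand(128, 128)
    lsf, hsf = split_spatial_frequencies(frame)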
