
    A neural model of hand grip formation during reach to grasp

    In this paper, we investigate the spatio-temporal dynamics of hand pre-shaping during prehension through a biologically plausible neural network model. It is proposed that hand grip formation in prehension can be understood in terms of basic motor programs that can be rescaled both spatially and temporally to accommodate different task demands. The model assigns a timing-coordinative role to proprioceptive reafferent information generated by the reaching component of the movement, avoiding the need for a pre-organized functional temporal structure for the timing of prehension, as some previous models have proposed. Predictions of the model in both normal and altered initial hand aperture conditions match key kinematic features present in human data. The differences between the predictions of the proposed model and those of previous models are used to try to identify the major principles underlying prehensile behavior.

    A new view on grasping

    Reaching out for an object is often described as consisting of two components that are based on different visual information. Information about the object’s position and orientation guides the hand to the object, while information about the object’s shape and size determines how the fingers move relative to the thumb to grasp it. We propose an alternative description, which consists of determining suitable positions on the object — on the basis of its shape, surface roughness, and so on — and then moving one’s thumb and fingers more or less independently to these positions. We modelled this description using a minimum jerk approach, whereby the finger and thumb approach their respective target positions approximately orthogonally to the surface. Our model predicts how experimental variables such as object size, movement speed, fragility, and required accuracy will influence the timing and size of the maximum aperture of the hand. An extensive review of experimental studies on grasping showed that the predicted influences correspond to human behaviour.
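    The minimum jerk profile underlying this kind of model can be sketched with the standard fifth-order polynomial. This is an illustrative implementation, not the authors' code; the function name and parameters are our own:

```python
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a movement from x0 to xf
    lasting T seconds (standard fifth-order polynomial profile)."""
    s = t / T  # normalised time in [0, 1]
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
```

    The profile starts and ends at rest (zero velocity and acceleration at both endpoints) and crosses the midpoint of the movement at t = T/2.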

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas.

    The control of the reach-to-grasp movement


    Grasping Kinematics from the Perspective of the Individual Digits: A Modelling Study

    Grasping is a prototype of human motor coordination. Nevertheless, it is not known what determines the typical movement patterns of grasping. One way to approach this issue is by building models. We developed a model based on the movements of the individual digits. In our model the following objectives were taken into account for each digit: move smoothly to the preselected goal position on the object without hitting other surfaces, arrive at about the same time as the other digit and never move too far from the other digit. These objectives were implemented by regarding the tips of the digits as point masses with a spring between them, each attracted to its goal position and repelled from objects' surfaces. Their movements were damped. Using a single set of parameters, our model can reproduce a wider variety of experimental findings than any previous model of grasping. Apart from reproducing known effects (even the angles under which digits approach trapezoidal objects' surfaces, which no other model can explain), our model predicted that the increase in maximum grip aperture with object size should be greater for blocks than for cylinders. A survey of the literature shows that this is indeed how humans behave. The model can also adequately predict how single digit pointing movements are made. This supports the idea that grasping kinematics follow from the movements of the individual digits.
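    The point-mass-and-spring idea can be illustrated with a minimal one-dimensional sketch. This is our own toy implementation, not the authors' model: two unit point masses, each pulled toward its goal by a spring, coupled to each other by a second spring with rest length `rest`, and damped; all names and constants are assumptions:

```python
def simulate_digits(goals, start, k_goal=100.0, k_pair=10.0,
                    damping=25.0, rest=0.08, dt=0.001, steps=5000):
    """Integrate two 1-D point-mass digit tips (unit mass) with
    semi-implicit Euler; returns their final positions."""
    x = list(start)
    v = [0.0, 0.0]
    for _ in range(steps):
        stretch = (x[1] - x[0]) - rest  # elongation of the coupling spring
        for i in (0, 1):
            f = k_goal * (goals[i] - x[i]) - damping * v[i]
            f += k_pair * stretch * (1.0 if i == 0 else -1.0)
            v[i] += f * dt
            x[i] += v[i] * dt
    return x
```

    When the goal separation equals the coupling spring's rest length, both digits settle exactly on their goals; otherwise the coupling spring biases the final positions toward each other, mimicking the "never move too far from the other digit" objective.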

    Human and robot arm control using the minimum variance principle

    Many computational models of human upper limb movement successfully capture some features of human movement, but often lack a compelling biological basis. One that provides such a basis is Harris and Wolpert’s minimum variance model. In this model, the variance of the hand at the end of a movement is minimised, given that the controlling signal is subject to random noise with zero mean and standard deviation proportional to the signal’s amplitude. This criterion offers a consistent explanation for several movement characteristics. This work formulates the minimum variance model into a form suitable for controlling a robot arm. This implementation allows examination of the model properties, specifically its applicability to producing human-like movement. The model is subsequently tested in areas important to studies of human movement and robotics, including reaching, grasping, and action perception. For reaching, experiments show this formulation successfully captures the characteristics of movement, supporting previous results. Reaching is initially performed between two points, but complex trajectories are also investigated through the inclusion of via- points. The addition of a gripper extends the model, allowing production of trajectories for grasping an object. Using the minimum variance principle to derive digit trajectories, a quantitative explanation for the approach of digits to the object surface is provided. These trajectories also exhibit human-like spatial and temporal coordination between hand transport and grip aperture. The model’s predictive ability is further tested in the perception of human demonstrated actions. Through integration with a system that performs perception using its motor system offline, in line with the motor theory of perception, the model is shown to correlate well with data on human perception of movement. 
These experiments investigate and extend the explanatory and predictive use of the model for human movement, and demonstrate that it can be suitably formulated to produce human-like movement on robot arms.
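    The core quantity in the minimum variance principle can be sketched for a discretised unit point mass: noise on the control signal has standard deviation proportional to the signal's amplitude, and each noise impulse perturbs the final position in proportion to the time remaining in the movement. This is our own illustrative discretisation, not the thesis implementation; the function name and noise constant `k` are assumptions:

```python
def endpoint_variance(u, dt, k=0.1):
    """Variance of the final position of a unit point mass driven by the
    control sequence u, under zero-mean noise whose standard deviation is
    k * |u[t]| (signal-dependent noise). A noise impulse at step t changes
    velocity by eps * dt and hence the final position by eps * dt * (T - t)."""
    T = len(u) * dt
    var = 0.0
    for t, ut in enumerate(u):
        remaining = T - t * dt
        var += (k * abs(ut) * dt * remaining) ** 2
    return var
```

    Because the noise scales with the signal, doubling every control value quadruples the endpoint variance, which is why the principle favours smooth, low-amplitude commands.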

    The target as an obstacle: Grasping an object at different heights

    Humans use a stereotypical movement pattern to grasp a target object. What is the cause of this stereotypical pattern? One of the possible factors is that the target object is considered an obstacle at positions other than the envisioned goal positions for the digits: while each digit aims for a goal position on the target object, they avoid other positions on the target object even if these positions do not obstruct the movement. According to this hypothesis, the maximum grip aperture will be higher if the risk of colliding with the target object is larger. Based on this hypothesis, we made a set of two unique predictions for grasping a vertically oriented cuboid at its sides at different heights. For cuboids of the same height, the maximum grip aperture will be smaller when grasped higher. For cuboids whose height varies with grip height, the maximum grip aperture will be larger when grasped higher. Both predicted relations were experimentally confirmed. This result supports the idea that considering the target object as an obstacle at positions other than the envisioned goal positions for the digits underlies the stereotypical movement patterns in grasping. The goal positions of the digits thus influence the maximum grip aperture even if the distance between the goal positions on the target object does not change.

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for a better interaction with targets. However, it is yet unclear how the peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual’s reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near to the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually-guided grasping of targets located at the far distance compared to grasping of targets located near to the hand. In the absence of visual feedback, the inferior parietal lobule exhibited a greater activity during grasping of targets at the near compared to the far distance. Results suggest that in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

    Annotated Bibliography: Anticipation


    Hand posture prediction using neural networks within a biomechanical model

    This paper proposes the use of artificial neural networks (ANNs) in the framework of a biomechanical hand model for grasping. ANNs enhance the model capabilities as they substitute estimated data for the experimental inputs required by the grasping algorithm used. These inputs are the tentative grasping posture and the most open posture during grasping. As a consequence, more realistic grasping postures are predicted by the grasping algorithm, along with the contact information required by the dynamic biomechanical model (contact points and normals). Several neural network architectures are tested and compared in terms of prediction errors, leading to encouraging results. The performance of the overall proposal is also shown through simulation, where a grasping experiment is replicated and compared to the real grasping data collected by a data glove device. 
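    As an illustration of the general idea (not the paper's architecture, inputs, or data), a tiny one-hidden-layer network can be fitted to toy size-to-aperture pairs by gradient descent; all names, hyperparameters, and the toy data are our own assumptions:

```python
import math
import random

def train_posture_net(data, hidden=4, lr=0.1, epochs=500, seed=0):
    """Fit a 1-input/1-output MLP with a tanh hidden layer to
    (object_size, grasp_aperture) pairs by stochastic gradient descent
    on squared error. Returns (initial_loss, final_loss, predict)."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    def mean_loss():
        return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

    initial = mean_loss()
    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            err = 2.0 * (out - y) / len(data)  # d(mean sq. error)/d(out)
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                w1[j] -= lr * grad_h * x
            b2 -= lr * err
    return initial, mean_loss(), lambda x: forward(x)[1]
```

    The returned predictor plays the role the abstract assigns to the trained ANN: given an object measurement, it emits an estimated posture variable in place of an experimentally recorded one.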