On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the
level of mechanical task performance. However, their underlying fundamental
biological vs. engineering mechanisms are, by definition, dramatically
different and can even be antithetical. Even our approach to each is
diametrically opposite: inductive science for the study of biological systems
vs. engineering synthesis for the design and construction of robotic systems.
The past 20 years have seen several conceptual advances in both fields and the
quest to unify them. Chief among them is the reluctant recognition that their
underlying fundamental mechanisms may actually share limited common ground,
while exhibiting many fundamental differences. This recognition is particularly
liberating because it allows us to resolve and move beyond multiple paradoxes
and contradictions that arose from the initial reasonable assumption of a large
common ground. Here, we begin by introducing the perspective of neuromechanics,
which emphasizes that real-world behavior emerges from the intimate
interactions among the physical structure of the system, the mechanical
requirements of a task, the feasible neural control actions to produce it, and
the ability of the neuromuscular system to adapt through interactions with the
environment. This allows us to articulate a succinct overview of a few salient
conceptual paradoxes and contradictions regarding under-determined vs.
over-determined mechanics, under- vs. over-actuated control, prescribed vs.
emergent function, learning vs. implementation vs. adaptation, prescriptive vs.
descriptive synergies, and optimal vs. habitual performance. We conclude by
presenting open questions and suggesting directions for future research. We
hope this frank assessment of the state-of-the-art will encourage and guide
these communities to continue to interact and make progress in these important
areas.
Dexterous manipulation of unknown objects using virtual contact points
The manipulation of unknown objects is a problem of special interest in robotics, since exact models of the objects a robot interacts with are not always available. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information obtained during the manipulation process, by reasoning about the desired and actual positions of the fingertips. Because the desired fingertip positions lie in the interior of the manipulated object, they are not physically reachable: they are virtual positions with associated virtual contact points. The proposed approach was validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated, rotating them without needing to know their shape or any other physical property.
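The virtual-contact-point idea can be illustrated with a minimal sketch: the fingertip target is placed inside the object (and is thus unreachable), and the measured tactile contact point is driven toward it. The Jacobian-transpose update, the gain, and all numbers below are illustrative assumptions, not the controller from the paper.

```python
import numpy as np

def fingertip_error(p_virtual, p_contact):
    """Error between the virtual (in-object) fingertip target and the
    contact point measured by the tactile sensor."""
    return p_virtual - p_contact

def joint_update(jacobian, error, gain=0.1):
    """Jacobian-transpose step pushing the fingertip toward its virtual
    target (a common resolved-rate shortcut, used here as a stand-in)."""
    return gain * jacobian.T @ error

# Hypothetical 3-DOF finger: Jacobian, virtual target, measured contact.
J = np.array([[0.10, 0.05, 0.02],
              [0.00, 0.08, 0.04],
              [0.12, 0.00, 0.01]])
p_virtual = np.array([0.030, 0.010, 0.050])   # inside the object
p_contact = np.array([0.035, 0.012, 0.046])   # on the object surface

dq = joint_update(J, fingertip_error(p_virtual, p_contact))
```

Because the target is inside the object, the error never vanishes; the residual pressure it produces is what maintains the grasp while the fingers reconfigure.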
The Role of Learning and Kinematic Features in Dexterous Manipulation: a Comparative Study with Two Robotic Hands
Dexterous movements performed by the human hand are by far more sophisticated than those achieved by current humanoid robotic hands and the systems used to control them. This work aims to help close this gap by proposing a bio-inspired control architecture that captures two key elements underlying human dexterity. The first is the progressive development of skilful control, often starting from – or involving – cyclic movements, based on trial-and-error learning processes and central pattern generators. The second is the exploitation of a particular kinematic feature of the human hand, i.e. thumb opposition. The architecture is tested with two simulated robotic hands having different kinematic features and engaged in rotating spheres, cylinders, and cubes of different sizes. The results support the feasibility of the proposed approach and show the potential of the model to allow a better understanding of the control mechanisms and kinematic principles underlying human dexterity and to make them transferable to anthropomorphic robotic hands.
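The cyclic-movement element can be sketched as a simple central pattern generator: one sinusoidal oscillator per joint, coordinated through fixed phase lags. The parameter names and values are illustrative assumptions, not taken from the paper, which additionally learns such parameters by trial and error.

```python
import math

def cpg_trajectory(t, freq=1.0, amp=0.3, offset=0.5, phase=0.0):
    """Sinusoidal CPG output for one joint (illustrative parameters)."""
    return offset + amp * math.sin(2 * math.pi * freq * t + phase)

def hand_command(t, n_joints=3, lag=2 * math.pi / 3):
    """Drive several finger joints with fixed phase lags, as a CPG might
    coordinate a cyclic object-rotation gait."""
    return [cpg_trajectory(t, phase=i * lag) for i in range(n_joints)]
```

A learning process would then tune amplitudes, offsets, and phase lags per hand and per object, which is where the comparison between the two kinematic structures becomes meaningful.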
Adaptive Motion Planning for Multi-fingered Functional Grasp via Force Feedback
Enabling multi-fingered robots to grasp and manipulate objects with
human-like dexterity is especially challenging during the dynamic, continuous
hand-object interactions. Closed-loop feedback control is essential for
dexterous hands to dynamically finetune hand poses when performing precise
functional grasps. This work proposes an adaptive motion planning method based
on deep reinforcement learning to adjust grasping poses according to real-time
feedback from joint torques, from pre-grasp to goal grasp. We find that the
multi-joint torques of the dexterous hand can sense object positions through
contacts and collisions, enabling real-time adjustment of grasps to generate
varying grasping trajectories for objects in different positions. In our
experiments, the performance gap with and without force feedback reveals the
important role of force feedback in adaptive manipulation. Our approach,
which uses force feedback, preliminarily exhibits human-like flexibility,
adaptability, and precision.
Learning by Demonstration and Robust Control of Dexterous In-Hand Robotic Manipulation Skills
Dexterous robotic manipulation of unknown objects can open the way to novel tasks and applications of robots in semi-structured and unstructured settings, from advanced industrial manufacturing to the exploration of harsh environments. However, it is challenging for at least three reasons: the desired motion of the object may be too complex to describe analytically; precise models of the manipulated objects are not available; and the controller must simultaneously ensure both a robust grasp and an effective in-hand motion. To address these issues, we propose to learn in-hand robotic manipulation tasks from human demonstrations, using Dynamical Movement Primitives (DMPs), and to reproduce them with a robust compliant controller based on the Virtual Springs Framework (VSF), which employs real-time feedback of the contact forces measured at the robot's fingertips. With this solution, the generalization capabilities of DMPs transfer successfully to the dexterous in-hand manipulation problem: we demonstrate this by presenting real-world experiments of in-hand translation and rotation of unknown objects.
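A DMP, the learning-from-demonstration component named above, is a point attractor modulated by a learned forcing term. A minimal one-dimensional rollout in the standard Ijspeert-style formulation is sketched below; the gains, basis-function layout, and weights are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def dmp_rollout(x0, g, weights, centers, widths,
                alpha=25.0, beta=6.25, alpha_s=3.0, dt=0.01, T=1.0):
    """Minimal 1-D discrete DMP rollout: a critically damped spring
    toward goal g plus an RBF forcing term driven by the canonical
    phase s (illustrative gains)."""
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)        # RBF basis
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - x0)
        a = alpha * (beta * (g - x) - v) + f              # transformation system
        v += a * dt
        x += v * dt
        s += -alpha_s * s * dt                            # canonical system
        traj.append(x)
    return np.array(traj)

# Zero weights reduce the DMP to a plain point attractor toward the goal.
traj = dmp_rollout(0.0, 1.0, np.zeros(10),
                   np.linspace(0, 1, 10), np.full(10, 25.0))
```

Fitting the weights to a demonstrated trajectory is what encodes the human motion, while changing `x0` and `g` generalizes it to new start and goal poses; the compliant VSF controller then handles grasp robustness during reproduction.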