16 research outputs found

    Coupled dynamical system based arm-hand grasping model for learning fast adaptation strategies

    Performing manipulation tasks interactively in real environments requires a high degree of accuracy and stability. At the same time, when one cannot assume a fully deterministic and static environment, one must endow the robot with the ability to react rapidly to sudden changes in the environment. These considerations make the task of reach and grasp difficult to deal with. We follow a programming by demonstration (PbD) approach to the problem and take inspiration from the way humans adapt their reach and grasp motions when perturbed. This is in sharp contrast to previous work in PbD that uses unperturbed motions for training the system and then applies perturbation solely during the testing phase. In this work, we record the kinematics of arm and fingers of human subjects during unperturbed and perturbed reach and grasp motions. In the perturbed demonstrations, the target’s location is changed suddenly after the onset of the motion. Data show a strong coupling between the hand transport and finger motions. We hypothesize that this coupling enables the subject to seamlessly and rapidly adapt the finger motion in coordination with the hand posture. To endow our robot with this competence, we develop a Coupled Dynamical System based controller, whereby two dynamical systems driving the hand and finger motions are coupled. This offers a compact encoding for reach-to-grasp motions that ensures fast adaptation with zero latency for re-planning. We show in simulation and on the real iCub robot that this coupling ensures smooth and “human-like” motions. We demonstrate the performance of our model under spatial, temporal and grasp-type perturbations, showing that reaching the target with a coordinated hand-arm motion is necessary for the success of the task.
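
    A minimal sketch of the coupling idea described above, assuming simple first-order attractor dynamics and a hand-crafted coupling function (the published CDS model learns both subsystems and the coupling from the recorded demonstrations): the finger aperture is slaved to the remaining hand-target distance, so a sudden change of the target re-shapes both motions in the same integration step, with no re-planning.

        import numpy as np

        def cds_step(x_hand, aperture, x_target, dt=0.01,
                     alpha=4.0, beta=8.0, coupling_gain=2.0):
            # Hand transport: first-order attractor toward the (possibly moving) target.
            dx_hand = -alpha * (x_hand - x_target)

            # Coupling: the desired finger aperture shrinks as the hand nears the target
            # (far from the target -> open hand, at the target -> closed grasp).
            dist = np.linalg.norm(x_hand - x_target)
            aperture_des = np.clip(coupling_gain * dist, 0.0, 1.0)

            # Finger dynamics track the coupled desired aperture.
            d_aperture = -beta * (aperture - aperture_des)

            return x_hand + dt * dx_hand, aperture + dt * d_aperture

        # If x_target jumps mid-motion (a spatial perturbation), the next call to
        # cds_step already drives hand and fingers toward the new goal in coordination.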

    Learning Coupled Dynamical Systems from human demonstration for robotic eye-arm-hand coordination


    Learning Reach-to-Grasp Motions From Human Demonstrations

    Reaching over to grasp an item is arguably the most commonly used motor skill by humans. Even under sudden perturbations, humans seem to react rapidly and adapt their motion to guarantee success. Despite the apparent ease and frequency with which we use this ability, a complete understanding of the underlying mechanisms cannot be claimed. It is partly due to such incomplete knowledge that adaptive robot motion for reaching and grasping under perturbations is not perfectly achieved. In this thesis, we take a discriminative approach to modelling trajectories of reach-to-grasp motion from expert demonstrations. Throughout this thesis, we employ time-independent (autonomous) flow-based representations to learn reactive motion controllers that can then be ported onto robots. This thesis is divided into three main parts. The first part is dedicated to biologically inspired modelling of reach-to-grasp motions with respect to the hand-arm coupling. We build upon previous work in motion modelling using autonomous dynamical systems (DS) and present a coupled dynamical system (CDS) model of these two subsystems. The coupled model ensures satisfaction of the constraints between the hand and the arm subsystems, which are critical to the success of a reach-to-grasp task. Moreover, it reduces the complexity of the overall motion planning problem as compared to considering a combined problem for the hand and the arm motion. In the second part, we extend the CDS approach to incorporate multiple grasping points. Such a model is beneficial because many daily-life objects afford multiple grasping locations on their surface. We combine a DS-based approach with energy-function learning to learn a multiple-attractor dynamical system where the attractors are mapped to the desired grasping points. We present the Augmented-SVM (ASVM) model that combines the classical SVM formulation with gradient constraints arising from the energy function to learn the desired dynamical function for motion generation. In the last part of this thesis, we address the problem of inverse kinematics and obstacle avoidance by combining our flow-based motion generator with global configuration-space planners. We claim that the two techniques complement each other. On one hand, the fast reactive nature of our flow-based motion generator can be used to guide the search of a rapidly-exploring random tree (RRT) based global planner. On the other hand, global planners can efficiently handle arbitrary obstacles and avoid local minima present in the dynamical function learned from demonstrations. We show that combining the information from demonstrations with global planning in the form of an energy map considerably decreases the computational complexity of state-of-the-art sampling-based planners. We believe that this thesis makes the following contributions to Robotics and Machine Learning. First, we have developed algorithms for fast and adaptive motion generation for reach-to-grasp motions. Second, we formulated an extension of the classical SVM that takes into account gradient information from data. We showed that instead of being limited to use as a classifier or a regressor, the SVM framework can be used as a more general function approximation technique. Lastly, we have combined our local methods with global approaches for planning to achieve arbitrary obstacle avoidance and a considerable reduction in the computational complexity of the global planners.
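
    As an illustration of the last part, a toy extend step for a flow-guided RRT, assuming the learned dynamical system is available as a callable flow(q) returning a velocity dq; the function name, the bias scheme and all parameters are hypothetical simplifications, since the thesis integrates demonstrations with the global planner through a learned energy map rather than this simple biased extension.

        import numpy as np

        def flow_guided_extend(tree_nodes, flow, q_rand, step=0.05, flow_bias=0.7, rng=None):
            # With probability flow_bias, extend the nearest node along the
            # demonstration-derived flow; otherwise extend toward the random sample.
            # The search thus concentrates where the demonstrations point, while the
            # random extensions keep the planner able to avoid arbitrary obstacles
            # and escape local minima of the learned flow.
            rng = np.random.default_rng() if rng is None else rng
            q_near = min(tree_nodes, key=lambda q: np.linalg.norm(q - q_rand))
            direction = flow(q_near) if rng.random() < flow_bias else q_rand - q_near
            norm = np.linalg.norm(direction)
            if norm < 1e-9:
                return None
            return q_near + step * direction / norm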

    Estimating the non-linear dynamics of free-flying objects

    This paper develops a model-free method to estimate the dynamics of free-flying objects. We take a realistic perspective on the problem and investigate how to track, accurately and very rapidly, the trajectory and orientation of an object so as to catch it in flight. We consider the dynamics of complex objects where the grasping point is not located at the center of mass. To achieve this, a density estimate of the translational and rotational velocity is built based on the trajectories of various examples. We contrast the performance of six non-linear regression methods (Support Vector Regression (SVR) with Radial Basis Function (RBF) kernel, SVR with polynomial kernel, Gaussian Mixture Regression (GMR), Echo State Network (ESN), Genetic Programming (GP) and Locally Weighted Projection Regression (LWPR)) in terms of precision of recall, computational cost and sensitivity to the choice of hyper-parameters. We validate the approach for real-time motion tracking of five daily-life objects with complex dynamics (a ball, a fully-filled bottle, a half-filled bottle, a hammer and a ping-pong racket). To enable real-time tracking, the estimated model of the object's dynamics is coupled with an Extended Kalman Filter for robustness against noisy sensing.
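
    A minimal sketch of the model-free estimate, assuming an RBF-kernel SVR (one of the six regressors compared above) fitted to map the object state to its time derivative. The state layout, hyper-parameters and the open-loop forward-integration loop are illustrative only; in the paper the learned model serves as the process model of an Extended Kalman Filter rather than being integrated open-loop.

        import numpy as np
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.svm import SVR

        def fit_dynamics(states, derivatives):
            # states: (N, d) samples along recorded flight trajectories;
            # derivatives: (N, d) finite-difference estimates of their time derivative.
            model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
            model.fit(states, derivatives)
            return model

        def predict_flight(model, x0, dt=0.005, horizon=200):
            # Forward-integrate the learned dynamics to anticipate where the object
            # will be, e.g. to pre-position the catching hand.
            traj = [np.asarray(x0, dtype=float)]
            for _ in range(horizon):
                dx = model.predict(traj[-1][None, :])[0]
                traj.append(traj[-1] + dt * dx)
            return np.array(traj)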

    Task Parameterization Using Continuous Constraints Extracted From Human Demonstrations

    In this work, we propose an approach for learning task specifications automatically by observing human demonstrations. This allows a robot to combine representations of individual actions to achieve a high-level goal. We hypothesize that task specifications consist of variables that present a pattern of change that is invariant across demonstrations. We identify these specifications at different stages of task completion. Changes in task constraints allow us to identify transitions in the task description and to segment the task into sub-tasks. We extract the following task-space constraints: (1) the reference frame in which to express the task variables; (2) the variable of interest at each time step, either position or force at the end effector; and (3) a factor that modulates the contribution of force and position in a hybrid impedance controller. The approach was validated on a 7-DOF KUKA arm performing two different tasks: grating vegetables and extracting a battery from a charging stand.
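
    A simplified sketch of the invariance criterion described above, assuming each demonstration is already expressed in every candidate reference frame. The variance test and the segmentation-by-frame-switch rule are stand-ins for the full constraint extraction, which also selects between position and force and estimates the modulation factor.

        import numpy as np

        def select_reference_frames(demos_by_frame):
            # demos_by_frame: dict mapping frame name -> array of shape (n_demos, T, d)
            # holding the same demonstrations expressed in that candidate frame.
            frames = list(demos_by_frame)
            # Per-frame, per-timestep variability pooled over demonstrations and
            # dimensions: the frame in which the variable barely changes across
            # demonstrations is taken to carry the task constraint at that time step.
            variability = np.stack(
                [demos_by_frame[f].var(axis=0).sum(axis=-1) for f in frames]
            )                                  # shape (n_frames, T)
            best = variability.argmin(axis=0)  # most invariant frame at each step
            selected = [frames[i] for i in best]
            # A change in the selected frame marks a transition between sub-tasks.
            segment_boundaries = [t for t in range(1, len(best)) if best[t] != best[t - 1]]
            return selected, segment_boundaries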

    Learning Compliant Manipulation through Kinesthetic and Tactile Human-Robot Interaction

    Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily life. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the needs of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
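
    A toy version of the interface idea, assuming the teacher's corrective force on the arm is measurable and that compliance is indicated simply by pushing the robot around; the adaptation law, gains and the per-axis impedance command below are illustrative assumptions, not the interfaces used in the paper (which rely on dedicated kinesthetic and tactile cues).

        import numpy as np

        def update_stiffness(K, teacher_force, lr=0.5, nominal=800.0,
                             K_min=50.0, K_max=1500.0, recovery=0.05):
            # Large interaction force along an axis -> lower stiffness on that axis
            # ("be compliant here"); otherwise drift back toward the nominal value.
            K_new = K - lr * np.abs(teacher_force) + recovery * (nominal - K)
            return np.clip(K_new, K_min, K_max)

        def impedance_torque(q, q_des, dq, K):
            # Joint-space impedance law with per-axis critical damping:
            # tau = K (q_des - q) - D dq, with D = 2 sqrt(K).
            D = 2.0 * np.sqrt(K)
            return K * (q_des - q) - D * dq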