
    Modeling and Learning of Complex Motor Tasks: A Case Study with Robot Table Tennis

    Most tasks that humans need to accomplish in their everyday life require certain motor skills. Although most motor skills seem to rely on the same elementary movements, humans are able to accomplish many different tasks. Robots, on the other hand, are still limited to a small number of skills and depend on well-defined environments. Modeling new motor behaviors is therefore an important research area in robotics. Computational models of human motor control are an essential step toward constructing robotic systems that are able to solve complex tasks in a human-inhabited environment. These models can be the key to robust, efficient, and human-like movement plans. In turn, the reproduction of human-like behavior on a robotic system can also be beneficial for computational neuroscientists seeking to verify their hypotheses. Although biomimetic models can be of great help in closing the gap between human and robot motor abilities, these models are usually limited to the scenarios considered. However, one important property of human motor behavior is the ability to adapt skills to new situations and to learn new motor skills with relatively few trials. Domain-appropriate machine learning techniques, such as supervised and reinforcement learning, have great potential to enable robotic systems to autonomously learn motor skills.

    In this thesis, we attempt to model and subsequently learn a complex motor task. As a test case, we chose robot table tennis throughout this thesis. Table tennis requires a series of time-critical movements that have to be selected and adapted according to environmental stimuli as well as the desired targets. We first analyze how humans play table tennis and create a computational model that results in human-like hitting motions on a robot arm. Our focus lies on generating motor behavior capable of adapting to variations and uncertainties in the environmental conditions. We evaluate the resulting biomimetic model both in a physically realistic simulation and on a real anthropomorphic seven degree-of-freedom Barrett WAM robot arm. This biomimetic model, based purely on analytical methods, produces successful hitting motions but does not feature the flexibility found in human motor behavior.

    We therefore suggest a new framework that allows a robot to learn cooperative table tennis from and with a human. Here, the robot first learns a set of elementary hitting movements from a human teacher by kinesthetic teach-in, which is compiled into a set of motor primitives. To generalize these movements to a wider range of situations, we introduce the mixture of motor primitives algorithm. The resulting motor policy enables the robot to select appropriate motor primitives as well as to generalize between them. Furthermore, it also allows the selection process for the hitting movements to be adapted based on the outcome of previous trials. The framework is evaluated both in simulation and on a real Barrett WAM robot. In consecutive experiments, we show that our approach allows the robot to return balls from a ball launcher and, furthermore, to play table tennis with a human partner.

    Executing robot movements using a biomimetic or learned approach enables the robot to return balls successfully. However, in motor tasks with a competitive goal such as table tennis, the robot not only needs to return the balls successfully in order to accomplish the task, it also needs an adaptive strategy. Such a higher-level strategy cannot be programmed manually, as it depends on the opponent and the abilities of the robot. We therefore make a first step towards the goal of acquiring such a strategy and investigate the possibility of inferring strategic information from observing humans playing table tennis. We model table tennis as a Markov decision problem, where the reward function captures the goal of the task as well as knowledge about effective elements of a basic strategy. We show how this reward function, and therefore the strategic information, can be discovered with model-free inverse reinforcement learning from human table tennis matches. The approach is evaluated on data collected from players with different playing styles and skill levels. We show that the resulting reward functions are able to capture expert-specific strategic information that makes it possible to distinguish the expert among players with different playing skills and styles.

    To summarize, in this thesis we have derived a computational model for table tennis that was successfully implemented on a Barrett WAM robot arm and has proven to produce human-like hitting motions. We also introduced a framework for learning a complex motor task based on a library of demonstrated hitting primitives. To select and generalize these hitting movements, we developed the mixture of motor primitives algorithm, in which the selection process can be adapted online based on the success of the synthesized hitting movements. The setup was tested on a real robot, showing that the resulting robot table tennis player is able to play a cooperative game against a human opponent. Finally, we showed that it is possible to infer basic strategic information in table tennis from observing matches of human players using model-free inverse reinforcement learning.
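
    A minimal sketch of the gating idea behind a mixture of motor primitives, assuming Gaussian responsibilities over a ball-state context, time-aligned trajectories of equal length, and a multiplicative prior update from trial outcomes; the kernel shape, update rule, and all names are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

class MixtureOfMotorPrimitives:
    """Select and blend demonstrated hitting primitives by context.

    Each primitive k stores a prototype context c_k (e.g. the predicted
    ball position and velocity at the hitting plane) and a joint-space
    trajectory. A gating network assigns responsibilities gamma_k(s),
    and the executed movement is the responsibility-weighted blend.
    """

    def __init__(self, contexts, trajectories, bandwidth=1.0):
        self.contexts = np.asarray(contexts)          # (K, d) prototype contexts
        self.trajectories = np.asarray(trajectories)  # (K, T) sampled trajectories
        self.bandwidth = bandwidth
        self.priors = np.ones(len(self.contexts))     # adapted from trial outcomes

    def responsibilities(self, state):
        d2 = np.sum((self.contexts - state) ** 2, axis=1)
        g = self.priors * np.exp(-0.5 * d2 / self.bandwidth ** 2)
        return g / g.sum()

    def generate(self, state):
        # Generalize between primitives: blend them by responsibility.
        return self.responsibilities(state) @ self.trajectories

    def update(self, state, success, lr=0.2):
        # Adapt the selection online: reinforce (or penalize) the priors
        # of the primitives responsible for the last hitting movement.
        gamma = self.responsibilities(state)
        self.priors *= np.exp(lr * (1.0 if success else -1.0) * gamma)
        self.priors /= self.priors.sum()   # keep the priors bounded
```

    Blending whole sampled trajectories here stands in for blending primitive parameters, which is where the actual algorithm operates.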

    Deep Predictive Policy Training using Reinforcement Learning

    Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging, as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture that maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy, and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data, trained with synthetic and simulated training samples, respectively. The policy super-layer is a small sub-network with fewer parameters that maps data between the abstracted manifolds. It is trained for each task using policy search reinforcement learning methods. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.
    Comment: This work is submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems 2017 (IROS 2017).
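
    A rough sketch of the three-super-layer split, assuming pre-extracted image features, fully connected layers, and PyTorch for brevity; layer names and sizes are assumptions, and only the small policy super-layer keeps trainable parameters, as in the paper's description:

```python
import torch.nn as nn

class DPPTPolicy(nn.Module):
    """Perception -> policy -> behavior super-layers.

    The perception and behavior super-layers are pretrained (on synthetic
    images and simulated motions, respectively) and frozen; only the small
    policy super-layer is trained by policy search on the real robot.
    """

    def __init__(self, img_feat=64, latent_state=8, latent_action=5,
                 n_joints=7, horizon=50):
        super().__init__()
        self.perception = nn.Sequential(          # abstracts visual input
            nn.Linear(img_feat, 32), nn.ReLU(), nn.Linear(32, latent_state))
        self.policy = nn.Linear(latent_state, latent_action)  # trainable part
        self.behavior = nn.Sequential(            # expands to motor activations
            nn.Linear(latent_action, 64), nn.ReLU(),
            nn.Linear(64, n_joints * horizon))
        for frozen in (self.perception, self.behavior):
            for p in frozen.parameters():
                p.requires_grad = False

    def forward(self, img_features):
        z = self.perception(img_features)   # abstract state manifold
        u = self.policy(z)                  # low-dimensional action manifold
        return self.behavior(u)             # full motor activation sequence
```

    Freezing the outer super-layers is what makes roughly 180 real-robot trials enough: policy search only has to explore the few parameters of the middle mapping.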

    Multi-Task Policy Search for Robotics

    © 2014 IEEE. Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics. Training individual policies for every single potential task is often impractical, especially for continuous task variations, requiring more principled approaches to share and transfer knowledge among similar tasks. We present a novel approach for learning a nonlinear feedback policy that generalizes across multiple tasks. The key idea is to define a parametrized policy as a function of both the state and the task, which allows learning a single policy that generalizes across multiple known and unknown tasks. Applications of our novel approach to reinforcement and imitation learning in real-robot experiments are shown.
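
    A minimal sketch of the key idea, assuming a one-hidden-layer feedback policy and a low-dimensional task descriptor; the architecture, dimensions, and parameter names are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_task, d_hidden, d_action = 4, 2, 16, 2

# One shared parameter set serves all tasks, because the task descriptor
# is an input to the policy rather than a separate set of weights.
theta = {
    "W1": 0.1 * rng.standard_normal((d_hidden, d_state + d_task)),
    "b1": np.zeros(d_hidden),
    "W2": 0.1 * rng.standard_normal((d_action, d_hidden)),
    "b2": np.zeros(d_action),
}

def policy(theta, state, task):
    """Nonlinear feedback policy pi(s, tau) shared across tasks."""
    x = np.concatenate([state, task])
    h = np.tanh(theta["W1"] @ x + theta["b1"])   # shared nonlinear features
    return theta["W2"] @ h + theta["b2"]

# Generalizing amounts to evaluating the same theta on an unseen task.
action = policy(theta, state=np.zeros(d_state), task=np.array([0.3, -0.7]))
```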

    Probabilistic movement modeling for intention inference in human-robot interaction

    Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes' theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
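
    A stripped-down sketch of the Bayesian update at the heart of online intention inference. The actual IDDM models intention-driven dynamics with a latent state representation learned from data; here the predictive likelihood is a user-supplied function and everything else is an assumption:

```python
import numpy as np

def infer_intention(observations, intentions, predictive_likelihood):
    """Online intention inference via Bayes' theorem.

    After each observation o_t the belief is updated as
        p(g | o_1:t)  proportional to  p(o_t | o_1:t-1, g) * p(g | o_1:t-1),
    so a decision (e.g. where the ball will be hit) can be made early,
    before the observed movement is complete.
    """
    belief = np.full(len(intentions), 1.0 / len(intentions))  # uniform prior
    for t, o in enumerate(observations):
        lik = np.array([predictive_likelihood(o, g, t) for g in intentions])
        belief *= lik
        belief /= belief.sum()
        yield belief.copy()   # anytime estimate for real-time use
```

    Yielding the belief after every observation reflects the real-time constraint: the robot can commit to a response as soon as the belief is sufficiently peaked.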

    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Human factors and human-centred design philosophy are highly desired in today's robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer programming skills. In fact, besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs) that takes into consideration both the positional and the contact-force profiles for human-robot skill transfer. Distinguished from conventional methods involving only motion information, the proposed framework combines two sets of DMPs, which model the motion trajectory and the force variation of the robot manipulator, respectively. Thus, a hybrid force/motion control approach is taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, in order to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force manipulation skills in a broader variety of tasks, the generalization of these DMP models to actual situations is also considered. Comparative experiments have been conducted using a Baxter robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
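
    A compact sketch of one discrete DMP transformation system, of the kind the framework would train twice, once on the demonstrated motion trajectory and once on the contact-force profile; the gains, basis construction, and names are conventional textbook choices rather than the paper's exact settings:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.01,
                alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate one discrete DMP: ydd = alpha*(beta*(g - y) - yd) + f(x).

    A hybrid force/motion scheme runs two such systems in parallel, one
    reproducing the position profile and one the force profile, and
    feeds both references to the controller.
    """
    n_basis = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    diffs = np.diff(centers)
    widths = 1.0 / np.concatenate([diffs, diffs[-1:]]) ** 2
    y, yd, x = float(y0), 0.0, 1.0
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)          # RBF basis
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ydd = alpha * (beta * (goal - y) - yd) + forcing    # spring-damper + f
        yd += ydd * dt / tau
        y += yd * dt / tau
        x -= alpha_x * x * dt / tau      # canonical system drives the phase
        traj.append(y)
    return np.array(traj)
```

    Fitting `weights` to a demonstration (by regressing the forcing term) is what kinesthetic teaching provides; the same code then reproduces either a position or a force profile.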

    Extracting low-dimensional control variables for movement primitives

    Movement primitives (MPs) provide a powerful framework for data-driven movement generation that has been successfully applied to learning from demonstrations and robot reinforcement learning. In robotics, we often want to solve a multitude of different but related tasks. As the parameters of the primitives are typically high-dimensional, a common practice for generalizing movement primitives to new tasks is to adapt only a small set of control variables, also called meta-parameters, of the primitive. Yet, for most MP representations, the encoding of these control variables is pre-coded in the representation and cannot be adapted to the considered tasks. In this paper, we want to learn the encoding of task-specific control variables from data as well, instead of relying on fixed meta-parameter representations. We use hierarchical Bayesian models (HBMs) to estimate a low-dimensional latent variable model for probabilistic movement primitives (ProMPs), a recent movement primitive representation. We show on two real robot datasets that ProMPs based on HBMs outperform standard ProMPs in terms of generalization and learning from small amounts of data, and also allow for an intuitive analysis of the movement. We also extend our HBM with a mixture model, such that we can model different movement types in the same dataset.
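
    A small sketch of the latent variable idea, assuming the high-dimensional ProMP weight vector is generated linearly from a few latent control variables, w = mu_w + A z; the feature construction, dimensions, and names are illustrative rather than the paper's model:

```python
import numpy as np

def rbf_features(n_steps=100, n_basis=20, width=0.02):
    """Normalized RBF time features phi(t), as commonly used in ProMPs."""
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    c = np.linspace(0.0, 1.0, n_basis)[None, :]
    phi = np.exp(-0.5 * (t - c) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)

def trajectory_from_latent(mu_w, A, z, phi):
    # w = mu_w + A z: two or three latent control variables z steer the
    # full weight vector, so generalizing to a new task means adapting z
    # instead of all n_basis weights.
    return phi @ (mu_w + A @ z)

phi = rbf_features()
mu_w = np.zeros(20)
A = np.random.default_rng(1).standard_normal((20, 2))  # stands in for the HBM estimate
traj = trajectory_from_latent(mu_w, A, np.array([1.0, -0.5]), phi)
```

    The low dimension of z is also what makes the movement analysis intuitive: each latent coordinate moves the whole trajectory along one task-relevant direction.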