Probabilistic movement modeling for intention inference in human-robot interaction.
Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes’ theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
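The Bayesian inference step described above can be sketched as follows. This is a minimal illustration only: the per-intention log-likelihood values are placeholders, whereas in the IDDM they come from marginalizing the latent-state dynamics.

```python
import numpy as np

def intention_posterior(log_likelihoods, log_prior):
    """Combine per-intention log-likelihoods of an observed movement with a
    prior over intentions via Bayes' rule: p(g | o) proportional to p(o | g) p(g)."""
    log_post = log_likelihoods + log_prior
    log_post -= log_post.max()            # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()              # normalize to a proper distribution

# Three candidate targets; the observed trajectory fits target 1 best
# (these likelihood values are illustrative placeholders).
log_lik = np.array([-12.0, -9.5, -14.0])
log_prior = np.log(np.ones(3) / 3)        # uniform prior over intentions
posterior = intention_posterior(log_lik, log_prior)
```

In the online setting, the same update is applied repeatedly as the movement unfolds, so the posterior sharpens over time.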
Hierarchical policy design for sample-efficient learning of robot table tennis through self-play
Training robots with physical bodies requires developing new methods and action representations that allow the learning agents to explore the space of policies efficiently. This work studies sample-efficient learning of complex policies in the context of robot table tennis. It incorporates learning into a hierarchical control framework using a model-free strategy layer (which requires complex reasoning about opponents that is difficult to do in a model-based way), model-based prediction of external objects (which are difficult to control directly with analytic control methods, but governed by learnable and relatively simple laws of physics), and analytic controllers for the robot itself. Human demonstrations are used to train dynamics models, which together with the analytic controller allow any robot that is physically capable of playing table tennis to do so without training episodes. Using only about 7000 demonstrated trajectories, a striking policy can hit ball targets with about 20 cm error. Self-play is used to train cooperative and adversarial strategies on top of model-based striking skills trained from human demonstrations. After only about 24000 strikes in self-play, the agent learns to best exploit the human dynamics models for longer cooperative games. Further experiments demonstrate that more flexible variants of the policy can discover new strikes not demonstrated by humans and achieve higher performance at the expense of lower sample-efficiency. Experiments are carried out in a virtual reality environment using sensory observations that are obtainable in the real world. The high sample-efficiency demonstrated in the evaluations shows that the proposed method is suitable for learning directly on physical robots without transfer of models or policies from simulation.
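The three-layer hierarchy described above can be wired together as in the following sketch. All three layers are placeholder stubs of my own (the strategy choice, the 0.1 s linear ball-flight lookahead, and the proportional controller are illustrative assumptions, not the paper's components); the point is only how a model-free strategy layer, a learned prediction model, and an analytic controller compose into one control step.

```python
def strategy_layer(game_state):
    """Model-free layer: choose where to send the ball (stubbed decision)."""
    return {"target": (0.5, 1.2)}

def predict_ball(ball_obs):
    """Model-based layer: learned ball dynamics, stubbed here as linear
    motion over a fixed 0.1 s lookahead."""
    pos, vel = ball_obs
    return tuple(p + 0.1 * v for p, v in zip(pos, vel))

def analytic_controller(robot_state, hit_point, target):
    """Analytic layer: a toy proportional command toward the hit point."""
    return [h - r for h, r in zip(hit_point, robot_state)]

# One control step wiring the layers together.
decision = strategy_layer(game_state={})
hit_point = predict_ball(ball_obs=((1.0, 0.8), (-2.0, 0.5)))
command = analytic_controller(robot_state=(0.0, 0.0),
                              hit_point=hit_point,
                              target=decision["target"])
```

Separating the layers this way lets each be replaced independently: the prediction model can be trained from demonstrations while the controller stays analytic, as the abstract describes.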
Modeling and Learning of Complex Motor Tasks: A Case Study with Robot Table Tennis
Most tasks that humans need to accomplish in their everyday life require certain motor skills. Although most motor skills seem to rely on the same elementary movements, humans are able to accomplish many different tasks. Robots, on the other hand, are still limited to a small number of skills and depend on well-defined environments. Modeling new motor behaviors is therefore an important research area in robotics. Computational models of human motor control are an essential step toward constructing robotic systems that are able to solve complex tasks in a human-inhabited environment. These models can be the key to robust, efficient, and human-like movement plans. In turn, the reproduction of human-like behavior on a robotic system can also be beneficial for computational neuroscientists to verify their hypotheses. Although biomimetic models can be of great help in closing the gap between human and robot motor abilities, these models are usually limited to the scenarios considered. However, one important property of human motor behavior is the ability to adapt skills to new situations and to learn new motor skills with relatively few trials. Domain-appropriate machine learning techniques, such as supervised and reinforcement learning, have great potential to enable robotic systems to autonomously learn motor skills.
In this thesis, we attempt to model and subsequently learn a complex motor task. As a test case for a complex motor task, we chose robot table tennis throughout this thesis. Table tennis requires a series of time-critical movements which have to be selected and adapted according to environmental stimuli as well as the desired targets. We first analyze how humans play table tennis and create a computational model that results in human-like hitting motions on a robot arm. Our focus lies on generating motor behavior capable of adapting to variations and uncertainties in the environmental conditions. We evaluate the resulting biomimetic model both in a physically realistic simulation and on a real anthropomorphic seven-degrees-of-freedom Barrett WAM robot arm. This biomimetic model, based purely on analytical methods, produces successful hitting motions but does not feature the flexibility found in human motor behavior. We therefore suggest a new framework that allows a robot to learn cooperative table tennis from and with a human. Here, the robot first learns a set of elementary hitting movements from a human teacher by kinesthetic teach-in, which is compiled into a set of motor primitives. To generalize these movements to a wider range of situations, we introduce the mixture of motor primitives algorithm. The resulting motor policy enables the robot to select appropriate motor primitives as well as to generalize between them. Furthermore, it also allows the selection of hitting movements to be adapted based on the outcome of previous trials. The framework is evaluated both in simulation and on a real Barrett WAM robot. In consecutive experiments, we show that our approach allows the robot to return balls from a ball launcher and, furthermore, to play table tennis with a human partner.
Executing robot movements using a biomimetic or learned approach enables the robot to return balls successfully. However, in motor tasks with a competitive goal such as table tennis, the robot not only needs to return the balls successfully in order to accomplish the task, it also needs an adaptive strategy. Such a higher-level strategy cannot be programmed manually as it depends on the opponent and the abilities of the robot. We therefore make a first step towards the goal of acquiring such a strategy and investigate the possibility of inferring strategic information from observing humans play table tennis. We model table tennis as a Markov decision problem, where the reward function captures the goal of the task as well as knowledge about effective elements of a basic strategy. We show how this reward function, and therefore the strategic information, can be discovered with model-free inverse reinforcement learning from human table tennis matches. The approach is evaluated on data collected from players with different playing styles and skill levels. We show that the resulting reward functions are able to capture expert-specific strategic information that makes it possible to distinguish the expert among players with different playing skills as well as different playing styles.
To summarize, in this thesis we have derived a computational model for table tennis that was successfully implemented on a Barrett WAM robot arm and that has proven to produce human-like hitting motions. We also introduced a framework for learning a complex motor task based on a library of demonstrated hitting primitives. To select and generalize these hitting movements, we developed the mixture of motor primitives algorithm, where the selection process can be adapted online based on the success of the synthesized hitting movements. The setup was tested on a real robot, which showed that the resulting robot table tennis player is able to play a cooperative game against a human opponent. Finally, we could show that it is possible to infer basic strategic information in table tennis from observing matches of human players using model-free inverse reinforcement learning.
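The mixture-of-motor-primitives idea above (selecting and generalizing between demonstrated hitting primitives) can be sketched as follows. The Gaussian-kernel gating over demonstration contexts and the convex blending of primitive parameters are illustrative placeholders of mine, not the thesis's exact formulation.

```python
import numpy as np

def gate(context, primitive_contexts, bandwidth=1.0):
    """Weight each primitive by how close its demonstration context
    (e.g. a predicted ball state) is to the current context."""
    d2 = np.sum((primitive_contexts - context) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)  # Gaussian kernel (assumed form)
    return w / w.sum()                       # normalize to mixture weights

def blended_parameters(context, primitive_contexts, primitive_params):
    """Generalize between primitives via a convex combination of their
    parameters, weighted by the gating function."""
    w = gate(context, primitive_contexts)
    return w @ primitive_params

contexts = np.array([[0.0, 0.2], [0.5, -0.1], [1.0, 0.3]])  # demo contexts
params = np.array([[0.1], [0.4], [0.9]])                     # per-primitive params
query = np.array([0.9, 0.25])                                # new situation
blended = blended_parameters(query, contexts, params)
```

Adapting the selection online, as the thesis describes, would then amount to re-weighting the gate based on the success of previously executed strikes.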
Intention Inference and Decision Making with Hierarchical Gaussian Process Dynamics Models
Anticipation is crucial for fluent human-robot interaction: it allows a robot to independently coordinate its actions with human beings in joint activities. An anticipatory robot relies on a predictive model of its human partners, and selects its own actions according to the model's predictions. Intention inference and decision making are key elements of such anticipatory robots. In this thesis, we present a machine-learning approach to intention inference and decision making, based on Hierarchical Gaussian Process Dynamics Models (H-GPDMs).
We first introduce the H-GPDM, a class of generic latent-variable dynamics models. The H-GPDM represents the generative process of complex human movements that are directed by exogenous driving factors. Incorporating the exogenous variables in the dynamics model, the H-GPDM achieves improved interpretation, analysis, and prediction of human movements. While exact inference of the exogenous variables and the latent states is intractable, we introduce an approximate method using variational Bayesian inference, and demonstrate the merits of the H-GPDM in three different applications of human movement analysis. The H-GPDM lays a foundation for the following studies on intention inference and decision making.
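The generative structure described above (latent-state dynamics driven by an exogenous variable, with observations emitted from the latent state) can be sketched as follows. Linear maps and fixed matrices stand in for the GP dynamics and observation functions of the actual H-GPDM; all numbers here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.95]])              # latent transition (placeholder)
B = np.array([0.2, -0.1])                            # effect of exogenous driver g
C = np.array([[1.0, 0.0], [0.5, 1.0], [0.2, 0.3]])   # latent state -> observation

def rollout(x0, g, steps, noise=0.01):
    """Generate observations from exogenously driven latent dynamics:
    x_{t+1} = f(x_t, g) + noise, y_t = h(x_t), with f and h linearized here."""
    x, obs = x0, []
    for _ in range(steps):
        x = A @ x + B * g + noise * rng.standard_normal(2)
        obs.append(C @ x)
    return np.array(obs)

traj = rollout(x0=np.zeros(2), g=1.0, steps=10)      # one intention-driven movement
```

Inference in the H-GPDM runs this generative picture in reverse, recovering the exogenous variable and latent states from observed trajectories, which is intractable exactly and handled variationally as the abstract notes.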
Intention inference is an essential step towards anticipatory robots. For this purpose, we consider a special case of the H-GPDM, the Intention-Driven Dynamics Model (IDDM), which treats the human partner's intention as the exogenous driving factor. The IDDM supports intention inference from observed movements using Bayes' theorem, where the latent state variables are marginalized out.
As most robotics applications are subject to real-time constraints, we introduce an efficient online algorithm that allows for real-time intention inference. We show that the IDDM achieves state-of-the-art performance in intention inference in two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive robots.
Decision making based on a time series of predictions allows a robot to be proactive in its action selection, which involves a trade-off between the accuracy and confidence of the prediction and the time needed for executing a selected action. To address the problem of action selection and of optimal timing for initiating the movement, we formulate anticipatory action selection as a Partially Observable Markov Decision Process (POMDP), where the H-GPDM is adopted to update the belief state and to estimate the transition model. We present two approaches to policy learning and decision making, and show their effectiveness using human-robot table tennis.
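The accuracy-versus-time trade-off described above can be illustrated with a minimal belief filter plus a confidence threshold. The threshold rule stands in for the learned POMDP policy, and the likelihood values are made-up stand-ins for the model's predictions.

```python
import numpy as np

def update_belief(belief, obs_likelihood):
    """One Bayes-filter step; the intention is static within a rally,
    so no transition mixing is applied."""
    b = belief * obs_likelihood
    return b / b.sum()

def select_action(belief, threshold=0.8):
    """Commit to the most likely target once the belief is confident
    enough; None means keep observing (toy stand-in for the POMDP policy)."""
    i = int(np.argmax(belief))
    return i if belief[i] >= threshold else None

belief = np.array([0.5, 0.5])            # forehand vs backhand target
for lik in ([0.7, 0.3], [0.8, 0.2], [0.9, 0.1]):   # observations accumulate
    belief = update_belief(belief, np.array(lik))
action = select_action(belief)
```

Waiting longer sharpens the belief but leaves less time to execute the strike; the learned policies in the thesis optimize exactly this timing.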
In addition, we consider decision making solely based on the preference of the human partners, where observations are not sufficient for reliable intention inference. We formulate it as a repeated game and present a learning approach to safe strategies that exploit the humans' preferences. The learned strategy enables action selection when reliable intention inference is not available due to insufficient observation, e.g., for a robot to return served balls from a human table tennis player.
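One simple way to exploit an opponent's preferences in a repeated game, in the spirit of the paragraph above, is to best-respond to their empirical action frequencies. The payoff matrix and the pure best-response rule here are illustrative assumptions; the thesis's safe-strategy learning is more involved (it must also bound the loss against a worst-case opponent).

```python
import numpy as np

# Row player's payoffs for (robot action, opponent action); values assumed.
payoff = np.array([[1.0, -1.0],
                   [-0.5, 0.5]])

def empirical_best_response(opponent_counts):
    """Best pure response to the opponent's observed action frequencies."""
    freq = opponent_counts / opponent_counts.sum()
    expected = payoff @ freq              # expected payoff of each robot action
    return int(np.argmax(expected))

counts = np.array([8.0, 2.0])             # opponent strongly prefers column 0
choice = empirical_best_response(counts)  # exploits that preference
```

A safe strategy would additionally mix this exploiting response with a maximin strategy so that a sudden change in the opponent's behavior cannot be costly.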
In this thesis, we use human-robot table tennis as a running example, where a key bottleneck is the limited amount of time for executing a hitting movement. Movement initiation usually requires an early decision on the type of action, such as a forehand or backhand hitting movement, at least 80 ms before the opponent has hit the ball. The robot, therefore, needs to anticipate the opponent's intended target and act proactively. Using the proposed methods, the robot can predict the intended target of the opponent and initiate an appropriate hitting movement according to the prediction. Experimental results show that the proposed intention inference and decision making methods can substantially enhance the capability of the robot table tennis player, using both a physically realistic simulation and a real Barrett WAM robot arm with seven degrees of freedom.
Robotic Table Tennis: A Case Study into a High Speed Learning System
We present a deep-dive into a real-world robotic learning system that, in previous work, was shown to be capable of hundreds of table tennis rallies with a human and has the ability to precisely return the ball to desired targets. This system puts together a highly optimized perception subsystem, a high-speed low-latency robot controller, a simulation paradigm that can prevent damage in the real world and also train policies for zero-shot transfer, and automated real-world environment resets that enable autonomous training and evaluation on physical robots. We complement a complete system description, including numerous design decisions that are typically not widely disseminated, with a collection of studies that clarify the importance of mitigating various sources of latency, accounting for training and deployment distribution shifts, robustness of the perception system, sensitivity to policy hyper-parameters, and choice of action space. A video demonstrating the components of the system and details of experimental results can be found at https://youtu.be/uFcnWjB42I0.
Comment: Published and presented at Robotics: Science and Systems (RSS 2023).
Thinking the GOAT: imitating tennis styles
A tactically aware coach is key to improving tennis players’ games; a coach analyses past matches with two considerations in mind: 1) the style of the player and how that style translates to real-world shot-making, and 2) the intent of a shot, irrespective of the outcome. Modern Hawk-Eye technology deployed in top-tier tournaments has enabled deeper analysis of professional matches than ever before. The aim of this paper is to emulate and augment the qualities of great coaches using data collected by Hawk-Eye: we develop a deep learning approach to imitate tennis players’ responses and to learn individual player styles efficiently, and we demonstrate this using performance metrics and illustrations.