
    Human movement learning with dynamic movement primitives combined with mixture models

    This research proposes a probabilistic approach to learning human movements. Dynamic Movement Primitives (DMPs) have been extensively used in robotics to learn human motions [1]. A DMP modulates a virtual spring with a learned non-linear force profile f(x), perturbing the system so that it follows a desired trajectory.
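The spring-damper formulation with a learned forcing term can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the gains K and D, the Euler discretization, and the direct inversion of the dynamics along one demonstration are all assumptions made here for clarity.

```python
import numpy as np

def learn_forcing_term(demo, dt, K=100.0, D=20.0):
    """Invert the spring-damper dynamics to recover f(t) from a demo."""
    x = demo
    v = np.gradient(x, dt)          # demonstrated velocity
    a = np.gradient(v, dt)          # demonstrated acceleration
    g = x[-1]                       # goal = final demonstrated position
    return a - K * (g - x) + D * v  # forcing term that reproduces the demo

def reproduce(f, x0, g, dt, K=100.0, D=20.0):
    """Integrate the perturbed spring-damper system forward in time."""
    x, v = x0, 0.0
    traj = [x]
    for ft in f:
        a = K * (g - x) - D * v + ft
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

dt = 0.01
t = np.arange(0.0, 1.0, dt)
demo = 0.5 * (1.0 - np.cos(np.pi * t))   # smooth demonstrated movement
f = learn_forcing_term(demo, dt)
repro = reproduce(f, demo[0], demo[-1], dt)
print(np.max(np.abs(repro[:len(demo)] - demo)))  # small reproduction error
```

Without the forcing term the system would simply converge to the goal along the spring-damper's own dynamics; the learned f shapes the transient so the reproduction follows the demonstrated path.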

    Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization

    We present a learning-based approach for minimizing the electric energy consumption of a passively compliant bipedal robot during walking. The energy consumption is reduced by learning a varying-height center-of-mass trajectory that efficiently exploits the robot's passive compliance. To do this, we propose a reinforcement learning method that evolves the policy parameterization dynamically during the learning process and thus finds better policies faster than a fixed parameterization. The method is first tested on a function approximation task and then applied to the humanoid robot COMAN, where it achieves significant energy reduction. © 2011 IEEE
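The idea of evolving the policy parameterization during learning can be illustrated with a toy sketch, which is not the paper's algorithm: here a policy is a piecewise-linear trajectory given by its knot values, a simple accept-if-better random search stands in for the reinforcement learning, and the refinement schedule, cost function, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(knots, n=101):
    """Decode knot values into a dense trajectory."""
    return np.interp(np.linspace(0, 1, n),
                     np.linspace(0, 1, len(knots)), knots)

def refine(knots):
    """Double the resolution of the parameterization; the encoded
    trajectory is unchanged, only its representation gets finer."""
    x_old = np.linspace(0, 1, len(knots))
    x_new = np.linspace(0, 1, 2 * len(knots) - 1)
    return np.interp(x_new, x_old, knots)

def cost(knots):
    """Synthetic stand-in for energy consumption during walking."""
    traj = rollout(knots)
    target = 0.5 + 0.1 * np.sin(2 * np.pi * np.linspace(0, 1, len(traj)))
    return np.mean((traj - target) ** 2)

knots = np.full(4, 0.5)            # start with a coarse 4-parameter policy
for it in range(400):
    if it in (100, 200):           # evolve the parameterization mid-learning
        knots = refine(knots)
    cand = knots + rng.normal(0.0, 0.01, len(knots))
    if cost(cand) < cost(knots):   # accept-if-better policy update
        knots = cand
print(len(knots), cost(knots))
```

The coarse parameterization makes early exploration fast, while each refinement step preserves the best policy found so far and adds the resolution needed to keep improving.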

    Supervisory teleoperation with online learning and optimal control

    We present a general approach for online learning and optimal control of manipulation tasks in a supervisory teleoperation context, targeted at underwater remotely operated vehicles (ROVs). We use an online Bayesian nonparametric learning algorithm to build models of manipulation motions as task-parametrized hidden semi-Markov models (TP-HSMM) that capture the spatiotemporal characteristics of demonstrated motions in a probabilistic representation. Motions are then executed autonomously using an optimal controller, namely a model predictive control (MPC) approach in a receding-horizon fashion. In this way, the remote system locally closes a high-frequency control loop that robustly handles noise and dynamically changing environments. Our system automates common and recurring tasks, allowing the operator to focus only on the tasks that genuinely require human intervention. We demonstrate how our solution can be used for a hot-stabbing motion in an underwater teleoperation scenario. We evaluate the performance of the system over multiple trials and compare it with a state-of-the-art approach. We report that our approach generalizes well with only a few demonstrations, accurately performs the learned task, and adapts online to dynamically changing task conditions.
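The receding-horizon loop can be illustrated on a generic double integrator rather than the paper's ROV and TP-HSMM setup; the plant, horizon length, cost weights, and noise level below are all assumptions chosen for the sketch. At every step a finite-horizon tracking problem is solved as a regularized least-squares problem, only the first control is applied, and the horizon is shifted, which is the locally closed high-frequency loop described above.

```python
import numpy as np

dt, H = 0.05, 10                          # time step, horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])     # double-integrator dynamics
B = np.array([[0.0], [dt]])

def mpc_step(x, ref):
    """Solve min sum (pos_k - ref_k)^2 + reg*||u||^2 over the horizon."""
    Phi = np.zeros((H, H))                # maps controls to future positions
    free = np.zeros(H)                    # free (uncontrolled) response
    Ak = np.eye(2)
    for k in range(H):
        Ak = A @ Ak                       # A^(k+1)
        free[k] = (Ak @ x)[0]
        Aj = np.eye(2)
        for j in range(k, -1, -1):        # effect of u_j on pos_{k+1}
            Phi[k, j] = (Aj @ B)[0, 0]
            Aj = A @ Aj
    reg = 1e-3 * np.eye(H)                # small effort penalty
    u = np.linalg.solve(Phi.T @ Phi + reg, Phi.T @ (ref - free))
    return u[0]                           # apply only the first control

x = np.array([0.0, 0.0])
for step in range(100):
    ref = np.ones(H)                      # track a constant set-point of 1.0
    u = mpc_step(x, ref)
    noise = np.random.default_rng(step).normal(0.0, 1e-3, 2)
    x = A @ x + B.flatten() * u + noise   # plant with process noise
print(x[0])                               # position settles near 1.0
```

Because the optimization is re-solved from the measured state at every step, the loop absorbs the injected noise without any explicit disturbance model.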

    Stochastic Gesture Production and Recognition Model for a Humanoid Robot

    Robot Programming by Demonstration (PbD) aims at developing adaptive and robust controllers that enable a robot to learn new skills by observing and imitating a human demonstration. While the vast majority of PbD work has focused on systems that learn a specific subset of tasks, our work explores the recognition, generalization, and reproduction of tasks in a unified mathematical framework. The approach abstracts away from the task and dataset at hand to tackle the general issue of learning which features are the relevant ones to imitate. In this paper, we present an application of this framework to determining the optimal strategy for reproducing arbitrary gestures. The model is tested and validated on a humanoid robot, using recordings of the kinematics of the demonstrator's arm motion. The hand path and joint angle trajectories are encoded in Hidden Markov Models, and the system uses the optimal prediction of the models to generate the reproduction of the motion.
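The encode-then-reproduce pipeline can be sketched with a deliberately crude stand-in for the HMM machinery: instead of Baum-Welch training, the sketch below segments each demonstration into equal-length left-to-right states, fits a Gaussian per state, and regenerates the motion by stepping through the state means with smoothing. The number of states, the segmentation, and the smoothing window are assumptions, not the paper's method.

```python
import numpy as np

def encode(demos, n_states=5):
    """Fit one Gaussian per left-to-right state (equal-length segments
    stand in for proper HMM training)."""
    means, stds = [], []
    for k in range(n_states):
        data = np.concatenate([np.array_split(d, n_states)[k] for d in demos])
        means.append(data.mean())
        stds.append(data.std())   # per-state variability (unused below,
    return np.array(means), np.array(stds)  # could weight the smoothing)

def reproduce(means, n_steps=100):
    """Step through the states for equal durations, then smooth the
    resulting staircase of state means into a continuous motion."""
    idx = np.minimum((np.arange(n_steps) * len(means)) // n_steps,
                     len(means) - 1)
    raw = means[idx]
    kernel = np.ones(9) / 9.0
    return np.convolve(raw, kernel, mode="same")

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * t) + rng.normal(0.0, 0.02, 100) for _ in range(5)]
means, stds = encode(demos)
repro = reproduce(means)
print(repro.max())   # peaks near the demonstrated bell shape's maximum
```

The reproduction averages out the per-demonstration noise because each state's mean pools the observations of all demonstrations that pass through it.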

    What is the Teacher's Role in Robot Programming by Demonstration? - Toward Benchmarks for Improved Learning

    Robot programming by demonstration (RPD) covers methods by which a robot learns new skills through human guidance. We present an interactive, multimodal RPD framework that uses active teaching methods to place the human teacher in the robot's learning loop. Two experiments are presented in which observational learning is first used to demonstrate a manipulation skill to a HOAP-3 humanoid robot, using motion sensors attached to the teacher's body. Then, putting the robot through the motion, the teacher incrementally refines the robot's skill by moving its arms manually, providing the appropriate scaffolds to reproduce the action. An incremental teaching scenario is proposed based on insights from various fields addressing developmental, psychological, and social issues related to teaching mechanisms in humans. Based on this analysis, different benchmarks are suggested to evaluate the setup further.

    Learning assistive teleoperation behaviors from demonstration

    Emergency response in hostile environments often involves remotely operated vehicles (ROVs) that are teleoperated, as interaction with the environment is typically required. Many ROV tasks are common to such scenarios and are often recurrent. We show how a probabilistic approach can be used to learn a task behavior model from data. Such a model can then be used to assist an operator performing the same task in future missions. We show how this approach can capture behaviors (constraints) that are present in the training data, and how this model can be combined with the operator's input online. We present an illustrative planar example and elaborate with a non-destructive testing (NDT) scanning task on a teleoperation mock-up using a two-armed Baxter robot. We demonstrate how our approach can learn task-specific behaviors from examples and automatically control the overall system, combining the operator's input and the learned model online, in an assistive teleoperation manner. This can potentially reduce the time and effort required to perform teleoperation tasks that are common in ROV missions in the context of security, maintenance, and rescue robotics.
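One common way to combine an operator's input with a learned model online is to treat both as Gaussians over the next command and fuse them with a precision-weighted product; the sketch below illustrates that idea with made-up variances and is an assumption of this note, not necessarily the paper's exact fusion rule.

```python
import numpy as np

def blend(op_cmd, op_var, model_cmd, model_var):
    """Product of two 1-D Gaussians: precision-weighted mean and variance."""
    w_op, w_model = 1.0 / op_var, 1.0 / model_var
    mean = (w_op * op_cmd + w_model * model_cmd) / (w_op + w_model)
    var = 1.0 / (w_op + w_model)
    return mean, var

# Where the learned behavior is confident (low variance) it dominates;
# where it is uncertain, the operator's input passes through.
cmd, var = blend(op_cmd=0.0, op_var=1.0, model_cmd=1.0, model_var=0.01)
print(cmd)   # close to the model's 1.0
cmd2, _ = blend(op_cmd=0.0, op_var=0.01, model_cmd=1.0, model_var=1.0)
print(cmd2)  # close to the operator's 0.0
```

The fused variance is always smaller than either input variance, so the assisted command is more certain than either source alone.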

    Probabilistic Learning of Torque Controllers from Kinematic and Force Constraints

    When learning skills from demonstrations, one is often required to think in advance about the appropriate task representation (usually in either operational or configuration space). Here, we propose a probabilistic approach for simultaneously learning and synthesizing torque control commands which take into account task space, joint space, and force constraints. We treat the problem by considering different torque controllers acting on the robot, whose relevance is learned probabilistically from demonstrations. This information is used to combine the controllers by exploiting the properties of Gaussian distributions, generating new torque commands that satisfy the important features of the task. We validate the approach in two experimental scenarios using 7-DoF torque-controlled manipulators, with tasks whose proper execution requires the consideration of different controllers.
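Combining controllers by exploiting the properties of Gaussian distributions amounts to a precision-weighted product of the controllers' commands. The 2-DoF toy example below is a sketch with made-up torques and covariances, not the paper's learned quantities: each controller proposes a torque with a covariance expressing its relevance, and the fused command is the mean of the product of the Gaussians.

```python
import numpy as np

def fuse(means, covs):
    """Precision-weighted fusion of Gaussian torque commands."""
    precisions = [np.linalg.inv(c) for c in covs]
    P = sum(precisions)                           # fused precision
    m = np.linalg.solve(P, sum(Pi @ mu for Pi, mu in zip(precisions, means)))
    return m, np.linalg.inv(P)                    # fused mean, covariance

# Hypothetical task-space controller: confident in joint 1, not joint 2.
tau_task = np.array([2.0, 0.0])
cov_task = np.diag([0.01, 10.0])
# Hypothetical joint-space controller: the reverse.
tau_joint = np.array([0.0, -1.0])
cov_joint = np.diag([10.0, 0.01])

tau, cov = fuse([tau_task, tau_joint], [cov_task, cov_joint])
print(tau)   # each joint follows the controller that is confident there
```

Because the weighting is per-dimension (through the full covariances), each controller dominates exactly where its learned relevance is high, which is what lets the fused command satisfy several constraint types at once.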

    A framework integrating statistical and social cues to teach a humanoid robot new skills

    Bringing robots into homes as collaborative partners presents various challenges to human-robot interaction. Robots will need to interact with untrained users in environments that were originally designed for humans. Unlike their industrial counterparts, humanoid robots cannot be preprogrammed with an initial set of behaviours: they should adapt their skills to a huge range of possible tasks without requiring changes to the environments and tools to fit their needs. The rise of these humanoids implies an inherent social dimension to this technology, where end-users should be able to teach new skills to these robots in an intuitive manner, relying only on their experience of teaching new skills to other human partners. Our research aims at designing a generic Robot Programming by Demonstration (RPD) framework based on a probabilistic representation of the task constraints, which allows the integration of information from cross-situational statistics and from various social cues such as joint attention or vocal intonation. This paper presents our ongoing research toward user-friendly human-robot teaching systems that would speed up the skill transfer process.