    Kernelized movement primitives

    Imitation learning has been studied widely as a convenient way to transfer human skills to robots. This learning approach aims to extract relevant motion patterns from human demonstrations and subsequently apply these patterns to different situations. Despite the many advancements that have been achieved, solutions for coping with unpredicted situations (e.g., obstacles and external perturbations) and high-dimensional inputs are still largely absent. In this paper, we propose a novel kernelized movement primitive (KMP), which allows the robot to adapt the learned motor skills and fulfill a variety of additional constraints arising over the course of a task. Specifically, KMP is capable of learning trajectories associated with high-dimensional inputs owing to the kernel treatment, which in turn renders a model with fewer open parameters than methods that rely on basis functions. Moreover, we extend our approach by exploiting local trajectory representations in different coordinate systems that describe the task at hand, endowing KMP with reliable extrapolation capabilities in broader domains. We apply KMP to the learning of time-driven trajectories as a special case, where a compact parametric representation describing a trajectory and its first-order derivative is utilized. To verify the effectiveness of our method, several examples of trajectory modulations and extrapolations associated with time inputs, as well as trajectory adaptations with high-dimensional inputs, are provided.
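
    The kernel treatment above can be illustrated with a short sketch that reduces KMP's mean prediction to kernel ridge regression over a demonstrated reference trajectory. The RBF kernel choice, the scalar regularizer lam (the full method regularizes with the demonstration covariances rather than the identity), and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.1):
    """Squared-exponential kernel between two input sets (rows = samples)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * length_scale**2))

def kmp_mean(s_ref, mu_ref, s_query, lam=1.0, length_scale=0.1):
    """Kernelized mean prediction mu(s*) = k* (K + lam I)^{-1} mu_ref.
    The full KMP uses lam * Sigma (demonstration covariances) in place of lam I."""
    K = rbf_kernel(s_ref, s_ref, length_scale)
    alpha = np.linalg.solve(K + lam * np.eye(len(s_ref)), mu_ref)
    return rbf_kernel(s_query, s_ref, length_scale) @ alpha

# Toy usage on a 1-D time-driven trajectory; inputs may be high-dimensional.
t = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(2.0 * np.pi * t)
print(kmp_mean(t, y, np.array([[0.25]]), lam=0.1))  # close to sin(pi/2) = 1
```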

    Uncertainty-Aware Imitation Learning using Kernelized Movement Primitives

    During the past few years, probabilistic approaches to imitation learning have earned a relevant place in the literature. One of their most prominent features, in addition to extracting a mean trajectory from task demonstrations, is that they provide a variance estimation. The intuitive meaning of this variance, however, changes across different techniques, indicating either variability or uncertainty. In this paper we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations and uncertainty about robot actions. This rich set of information is used in combination with optimal controller fusion to learn actions from data, with two main advantages: i) robots become safe when uncertain about their actions and ii) they are able to leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks and we show that using our approach the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains.
    Comment: Published in the proceedings of IROS 201
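
    A minimal sketch of the core idea, under assumptions: a GP-style predictive variance grows away from the demonstrations and is used to scale down a Cartesian stiffness, so the robot becomes compliant outside the training region. The kernel, the stiffness bound K_max, and the linear scaling rule are stand-ins for illustration; the paper obtains the gains via optimal controller fusion.

```python
import numpy as np

def rbf(a, b, ell=0.05):
    """1-D squared-exponential kernel."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * ell**2))

def predictive_uncertainty(t_train, t_query, lam=1e-2, ell=0.05):
    """GP-style predictive variance: near zero on the demonstrations, close to
    one far away; a stand-in for the KMP uncertainty estimate in the paper."""
    K = rbf(t_train, t_train, ell) + lam * np.eye(len(t_train))
    k_star = rbf(t_query, t_train, ell)       # (Q, N)
    sol = np.linalg.solve(K, k_star.T)        # K^{-1} k*
    return 1.0 - np.sum(k_star.T * sol, axis=0)

# Demonstrations only cover [0, 0.4]; query inside vs. outside that region.
t_demo = np.linspace(0.0, 0.4, 20)
t_run = np.array([0.2, 0.8])
unc = np.clip(predictive_uncertainty(t_demo, t_run), 0.0, 1.0)
K_max = 500.0                                 # hypothetical stiffness bound [N/m]
stiffness = K_max * (1.0 - unc)
print(stiffness)  # high stiffness at t=0.2, near zero (compliant) at t=0.8
```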

    Learning Motion Primitives Automata for Autonomous Driving Applications

    Motion planning methods often rely on libraries of primitives. The selection of primitives is then crucial for ensuring feasible solutions and good performance within the motion planner. In the literature, the library is usually designed either by learning from demonstration, relying entirely on data, or by model-based approaches, which have the advantage of exploiting the dynamical system's properties, e.g., symmetries. In this work, we propose a method that combines data with a dynamical model to optimally select primitives. The library is built from the primitives with the highest occurrence in the data set, while Lie group symmetries from the model are analysed in the available data to allow for structure-exploiting primitives. We illustrate our technique in an autonomous driving application. Primitives are identified from human driving data, with the freedom to build libraries of different sizes as a parameter of choice. We also compare the extracted library with a custom selection of primitives regarding the performance of the obtained solutions on a street layout based on a real-world scenario.
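
    The occurrence-based selection with symmetry expansion can be sketched in a few lines. The primitive encoding, the data, and the mirror map (standing in for the paper's Lie group symmetries) are hypothetical.

```python
from collections import Counter

# Hypothetical primitives extracted from driving data: (curvature, speed) labels
observed = [("straight", "fast"), ("left", "slow"), ("straight", "fast"),
            ("right", "slow"), ("straight", "slow"), ("left", "slow")]

def build_library(primitives, k):
    """Keep the k most frequent primitives, then close the set under a mirror
    symmetry, a simple stand-in for the paper's structure-exploiting step."""
    mirror = {"left": "right", "right": "left", "straight": "straight"}
    top = [p for p, _ in Counter(primitives).most_common(k)]
    library = set(top)
    library |= {(mirror[c], s) for c, s in top}  # symmetry-expanded primitives
    return sorted(library)

print(build_library(observed, k=2))  # library size k is a parameter of choice
```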

    Leveraging Kernelized Synergies on Shared Subspace for Precision Grasping and Dexterous Manipulation

    Manipulation, in contrast to grasping, is a trajectory-level task that requires the use of dexterous hands. Improving the dexterity of robot hands increases the controller complexity, which motivates the use of postural synergies. Inspired by postural synergies, this research proposes a new framework, called kernelized synergies, that focuses on reusing the same subspace for precision grasping and dexterous manipulation. In this work, the computed subspace of postural synergies is parameterized by kernelized movement primitives to preserve its grasping and manipulation characteristics and to allow its reuse for new objects. The grasp stability of the proposed framework is assessed with the force-closure quality index as a cost function. For performance evaluation, the proposed framework is first tested on two different simulated robot hand models using the SynGrasp toolbox; experimentally, four complex grasping and manipulation tasks are performed and reported. The results confirm the hand-agnostic nature of the proposed framework and its generalization to distinct objects irrespective of their dimensions.
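
    A minimal sketch, assuming PCA as the synergy extractor: the leading components of recorded hand postures span the postural-synergy subspace, and postures can be mapped in and out of it. The parameterization of this subspace by kernelized movement primitives, central to the paper, is omitted here; dimensions and names are illustrative.

```python
import numpy as np

def synergy_subspace(postures, n_synergies=2):
    """PCA on recorded hand joint configurations: the leading right-singular
    vectors are the postural synergies."""
    mean = postures.mean(axis=0)
    _, _, Vt = np.linalg.svd(postures - mean, full_matrices=False)
    return mean, Vt[:n_synergies]            # (n_joints,), (n_synergies, n_joints)

def to_synergies(q, mean, S):
    return (q - mean) @ S.T                  # low-dimensional synergy activations

def from_synergies(z, mean, S):
    return mean + z @ S                      # reconstructed joint posture

# Toy usage: 20 recorded postures of a hypothetical 16-DoF hand
rng = np.random.default_rng(0)
Q = rng.normal(size=(20, 16))
mean, S = synergy_subspace(Q, n_synergies=3)
z = to_synergies(Q[0], mean, S)
print(from_synergies(z, mean, S).shape)      # (16,)
```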

    Task-Adaptive Robot Learning from Demonstration with Gaussian Process Models under Replication

    Learning from Demonstration (LfD) is a paradigm that allows robots to learn complex manipulation tasks that cannot be easily scripted but can be demonstrated by a human teacher. One of the challenges of LfD is to enable robots to acquire skills that can be adapted to different scenarios. In this paper, we propose to achieve this by exploiting the variations in the demonstrations to retrieve an adaptive and robust policy, using Gaussian Process (GP) models. Adaptability is enhanced by incorporating task parameters into the model, which encode different specifications within the same task. With our formulation, these parameters can be real, integer, or categorical. Furthermore, we propose a GP design that exploits the structure of replications, i.e., repeated demonstrations with identical conditions within the data. Our method significantly reduces the computational cost of model fitting in complex tasks, where replications are essential to obtain a robust model. We illustrate our approach through several experiments on a handwritten-letter demonstration dataset.
    Comment: 8 pages, 9 figures
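
    One simple way to exploit replications, sketched below under assumptions (it is not the authors' exact GP design): collapse repeated demonstrations with identical task conditions to their mean output before fitting, so the kernel matrix is built over the n unique conditions rather than all N points.

```python
import numpy as np

def collapse_replications(X, y):
    """Replace replicated demonstrations (identical inputs) by their mean output,
    reducing the GP kernel matrix from all N points to the n unique inputs;
    a simplified stand-in for the replication-aware GP design in the paper."""
    X_unique, inverse = np.unique(X, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    y_sum = np.zeros((len(X_unique), y.shape[1]))
    np.add.at(y_sum, inverse, y)             # accumulate outputs per unique input
    counts = np.bincount(inverse)
    return X_unique, y_sum / counts[:, None], counts

# Toy usage: 3 task conditions, 4 replicated demonstrations each
X = np.repeat(np.array([[0.0], [0.5], [1.0]]), 4, axis=0)
y = np.sin(X) + 0.05 * np.random.default_rng(1).normal(size=X.shape)
Xu, yu, n_rep = collapse_replications(X, y)
print(Xu.shape, yu.shape, n_rep)             # (3, 1) (3, 1) [4 4 4]
```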

    Impact-Friendly Object Catching at Non-Zero Velocity Based on Combined Optimization and Learning

    This paper proposes a combined optimization and learning method for impact-friendly, non-prehensile catching of objects at non-zero velocity. Through a constrained Quadratic Programming problem, the method generates optimal trajectories up to the contact point between the robot and the object, minimizing their relative velocity and reducing the impact forces. Next, the generated trajectories are updated by Kernelized Movement Primitives, which are based on human catching demonstrations, to ensure a smooth transition around the catching point. In addition, the learned human variable stiffness (HVS) is sent to the robot's Cartesian impedance controller to absorb the post-impact forces and stabilize the catching position. Three experiments are conducted to compare our method, with and without HVS, against a fixed-position impedance controller (FP-IC). The results show that the proposed methods outperform the FP-IC, while adding HVS yields better results for absorbing the post-impact forces.
    Comment: 8 pages, 9 figures, accepted by the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
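
    A generic sketch of the variable-stiffness impedance idea: the Cartesian stiffness follows a learned profile that drops after the impact so post-impact forces are absorbed. The control law, the critical-damping rule, and the stiffness values are stand-in assumptions, not the paper's controller or its QP formulation.

```python
import numpy as np

def impedance_wrench(x, x_des, xdot, xdot_des, K, zeta=1.0):
    """Cartesian impedance law F = K (x_d - x) + D (xd_d - xd), with diagonal
    damping derived from stiffness (critical damping). A generic sketch, not
    the authors' controller."""
    D = 2.0 * zeta * np.sqrt(K)               # element-wise damping gains
    return K * (x_des - x) + D * (xdot_des - xdot)

# Hypothetical learned stiffness profile: stiff before impact, soft after it
t = np.linspace(0.0, 1.0, 5)
t_impact = 0.5
K_t = np.where(t < t_impact, 800.0, 150.0)    # stiffness [N/m]
for ti, Ki in zip(t, K_t):
    F = impedance_wrench(x=np.zeros(3), x_des=0.01 * np.ones(3),
                         xdot=np.zeros(3), xdot_des=np.zeros(3),
                         K=Ki * np.ones(3))
    print(f"t={ti:.2f}  K={Ki:.0f} N/m  F={np.round(F, 2)}")
```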