
    Using probabilistic movement primitives in robotics

    Movement Primitives (MPs) are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives, and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, a unified framework that implements all of them in a principled way has been missing. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning, can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling between the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real-robot scenarios.
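The conditioning operation described above is a standard Gaussian update on the distribution of basis-function weights. A minimal sketch, assuming a learned weight distribution N(mu_w, Sigma_w), a basis row Phi_t for the conditioning time step, and an illustrative observation noise sigma_y (the function name and parameters are assumptions, not the paper's API):

```python
import numpy as np

def condition_promp(mu_w, Sigma_w, Phi_t, y_star, sigma_y=1e-4):
    """Condition a ProMP weight distribution N(mu_w, Sigma_w) on passing
    through the via-point y_star at a time step with basis row Phi_t."""
    Phi_t = np.atleast_2d(Phi_t)                        # shape (d, K)
    S = Phi_t @ Sigma_w @ Phi_t.T + sigma_y * np.eye(Phi_t.shape[0])
    K = Sigma_w @ Phi_t.T @ np.linalg.inv(S)            # Kalman-style gain
    mu_new = mu_w + K @ (y_star - Phi_t @ mu_w)         # shifted mean
    Sigma_new = Sigma_w - K @ Phi_t @ Sigma_w           # reduced covariance
    return mu_new, Sigma_new
```

After conditioning, the mean trajectory passes (up to the observation noise) through the via-point, while the covariance shrinks at that time step and the learned correlations propagate the change to the rest of the movement.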

    Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations

    Sensor gloves are popular input devices for a large variety of applications, including health monitoring, control of musical instruments, learning sign language, dexterous computer interfaces, and tele-operating robot hands. Many commercial products as well as low-cost open-source projects have been developed. We discuss here how low-cost (approx. 250 EUR) sensor gloves with force feedback can be built, provide an open-source software interface for Matlab, and present first results in learning object-manipulation skills through imitation learning on the humanoid robot iCub. Comment: 3 pages, 3 figures. Workshop paper of the International Conference on Robotics and Automation (ICRA 2015).

    Learning to Segment and Represent Motion Primitives from Driving Data for Motion Planning Applications

    Developing an intelligent vehicle that can perform human-like actions requires the ability to learn basic driving skills from a large amount of naturalistic driving data. Learning algorithms become more efficient if the complex driving tasks can be decomposed into motion primitives that represent the elementary compositions of driving skills. The purpose of this paper is therefore to segment unlabeled trajectory data into a library of motion primitives. By applying probabilistic inference based on an iterative Expectation-Maximization algorithm, our method segments the collected trajectories while learning a set of motion primitives represented by dynamic movement primitives. The proposed method exploits the mutual dependency between the segmentation and the representation of motion primitives, together with a driving-specific initial segmentation. By utilizing this mutual dependency and the initial condition, this paper shows how the performance of both the segmentation and the construction of the motion-primitive library can be enhanced. We also evaluate the applicability of the primitive representation to imitation learning and motion-planning algorithms. The model is trained and validated using driving data collected from the Beijing Institute of Technology intelligent vehicle platform. The results show that the proposed approach can find the proper segmentation and establish the motion-primitive library simultaneously.
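To make the representation side concrete, the following is a minimal rollout of a single dynamic movement primitive: a critically damped spring toward the goal, perturbed by an RBF forcing term driven by a decaying phase variable. The gains, the basis-width heuristic, and the function name are assumptions for this sketch, not details from the paper:

```python
import numpy as np

def dmp_rollout(y0, g, w, dt=0.002, alpha=25.0, beta=6.25, alpha_x=8.0):
    """Integrate one discrete DMP: spring-damper toward goal g plus an
    RBF forcing term weighted by w, driven by a decaying phase x."""
    K = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, K))   # basis centres in phase
    h = K ** 1.5 / c                              # heuristic basis widths
    x, y, dy = 1.0, float(y0), 0.0
    traj = []
    while x > np.exp(-alpha_x):                   # until the phase has decayed
        psi = np.exp(-h * (x - c) ** 2)
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)
        ddy = alpha * (beta * (g - y) - dy) + f   # transformation system
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x * dt                    # canonical system
        traj.append(y)
    return np.array(traj)
```

With zero weights the primitive converges smoothly to the goal; fitting the weights to a segmented trajectory piece is what turns each segment into a library entry.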

    Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models

    Recent progress in human-robot collaboration makes fast and fluid interactions possible, even when human observations are partial and occluded. Methods like Interaction Probabilistic Movement Primitives (ProMPs) model human trajectories through motion-capture systems. However, such a representation does not properly model tasks where similar motions handle different objects: under current approaches, a robot would not adapt its pose and dynamics for proper handling. We integrate the use of Electromyography (EMG) into the Interaction ProMP framework and utilize muscular signals to augment the human observation representation. The contribution of our paper is increased task discernment when trajectories are similar but tools are different and require the robot to adjust its pose for proper handling. Interaction ProMPs are used with an augmented vector that integrates muscle activity. Augmented time-normalized trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. Collaborative single-task scenarios with similar motions but different objects were used and compared. In one experiment only joint angles were recorded; in the other, EMG signals were additionally integrated. Task recognition was computed for both tasks. Observation state vectors with augmented EMG signals were able to completely identify differences across tasks, while the baseline method failed every time. Integrating EMG signals into collaborative tasks significantly increases the ability of the system to recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in our studies. Furthermore, the integration of EMG signals for collaboration also opens the door to a wide class of human-robot physical interactions based on haptic communication that has been largely unexploited in the field. Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201

    Learning Task Priorities from Demonstrations

    Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning-from-demonstration perspective, by extending the Task-Parameterized Gaussian Mixture Model (TP-GMM) to Jacobian and null-space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in two different tasks with humanoids requiring the learning of priorities, and in a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel. Comment: Accepted for publication in the IEEE Transactions on Robotics.
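The Jacobian and null-space structure mentioned above builds on the classical two-task prioritization scheme, in which a secondary task acts only in the null space of the primary Jacobian. A minimal sketch of that underlying scheme (the function name and task setup are illustrative, and this omits the learned TP-GMM priorities that are the paper's contribution):

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Two-task prioritization: execute the primary task velocity dx1
    exactly, and the secondary dx2 only in the null space of J1."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1           # null-space projector of J1
    # Secondary task resolved within the remaining redundancy.
    dq = J1p @ dx1 + N1 @ np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ J1p @ dx1)
    return dq
```

Whenever the two tasks conflict, the projector N1 guarantees that the secondary correction cannot disturb the primary task; learning the prioritization amounts to deciding, from demonstrations, which task takes the role of J1.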

    Gaussian-Process-based Robot Learning from Demonstration

    Endowed with higher levels of autonomy, robots are required to perform increasingly complex manipulation tasks. Learning from demonstration is emerging as a promising paradigm for transferring skills to robots. It allows task constraints to be learned implicitly from observing the motion executed by a human teacher, which can enable adaptive behavior. We present a novel Gaussian-Process-based learning-from-demonstration approach. This probabilistic representation allows generalization over multiple demonstrations and encodes variability along the different phases of the task. In this paper, we address how Gaussian Processes can be used to effectively learn a policy from trajectories in task space. We also present a method to efficiently adapt the policy to fulfill new requirements and to modulate the robot behavior as a function of task variability. The approach is illustrated through a real-world application using the TIAGo robot. Comment: 8 pages, 10 figures.
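The underlying idea of a GP-based policy can be sketched with plain GP regression over stacked demonstration points, yielding a mean trajectory plus a variance profile that reflects how consistent the demonstrations were at each phase. The kernel, hyperparameters, and function names below are assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def rbf(a, b, ell=0.1, sf=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_policy(t_train, y_train, t_query, noise=1e-2):
    """Fit a GP to stacked demonstration points (t_train, y_train) and
    return predictive mean and marginal variance at t_query."""
    K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf(t_query, t_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = rbf(t_query, t_query).diagonal() - np.einsum(
        'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

Phases where several demonstrations agree produce low predictive variance (the robot should track tightly), while high-variance phases signal freedom that can be exploited for adaptation, which is the kind of modulation the abstract refers to.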