
    ILoSA: Interactive Learning of Stiffness and Attractors

    Teaching robots how to apply forces according to our preferences is still an open challenge that has to be tackled from multiple engineering perspectives. This paper studies how to learn variable impedance policies where both the Cartesian stiffness and the attractor are learned from human demonstrations and corrections through a user-friendly interface. The presented framework, named ILoSA, uses Gaussian Processes for policy learning, identifying regions of uncertainty and allowing interactive corrections, stiffness modulation, and active disturbance rejection. The experimental evaluation of the framework is carried out on a Franka Emika Panda in three separate cases with unique force-interaction properties: 1) pulling a plug, where a sudden force discontinuity occurs upon successful removal of the plug; 2) pushing a box, where a sustained force is required to keep the robot in motion; and 3) wiping a whiteboard, where the force is applied perpendicular to the direction of movement.
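    To make the uncertainty-driven impedance idea concrete, the following is a minimal sketch, not the authors' implementation: a Gaussian Process maps end-effector position to an attractor offset, and stiffness is lowered where the GP is uncertain so a user can push the robot away and provide corrections. All gains, thresholds, and the toy 1-D data are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D demonstration: end-effector positions and attractor displacements.
X_demo = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
dx_demo = 0.05 * np.sin(2.0 * np.pi * X_demo).ravel()

gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-4))
gp.fit(X_demo, dx_demo)

def policy(x, k_max=600.0, k_min=50.0, sigma_max=0.05):
    """Return (attractor offset, Cartesian stiffness) at position x.

    Stiffness drops in regions of high GP uncertainty, so the human can
    overpower the robot there and teach interactive corrections.
    """
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    confidence = np.clip(1.0 - sigma[0] / sigma_max, 0.0, 1.0)
    return mu[0], k_min + confidence * (k_max - k_min)

offset, k = policy(0.3)   # inside the demonstrated region: high stiffness
print(f"attractor offset={offset:+.4f} m, stiffness={k:.0f} N/m")
```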

    A task-parameterized probabilistic model with minimal intervention control

    We present a task-parameterized probabilistic model encoding movements in the form of virtual spring-damper systems acting in multiple frames of reference. Each candidate coordinate system observes a set of demonstrations from its own perspective, by extracting an attractor path whose variations depend on the relevance of the frame at each step of the task. This information is exploited to generate new attractor paths in new situations (new position and orientation of the frames), with the predicted covariances used to estimate the varying stiffness and damping of the spring-damper systems, resulting in a minimal intervention control strategy. The approach is tested with a 7-DOF Barrett WAM manipulator whose movement and impedance behavior need to be modulated with respect to the position and orientation of two external objects varying during demonstration and reproduction.
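    A minimal-intervention controller of this kind can be sketched as follows: the predicted covariance along the attractor path sets the stiffness of the virtual spring-damper system, so the controller is stiff only where demonstrations were consistent. The gain limits and variance values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def stiffness_from_covariance(sigma2, k_min=20.0, k_max=500.0):
    """Map predicted positional variance (m^2) to a bounded stiffness gain:
    low variance (consistent demonstrations) -> stiff tracking,
    high variance -> compliant, minimal intervention."""
    return float(np.clip(1.0 / max(sigma2, 1e-6), k_min, k_max))

def spring_damper_command(x, xdot, x_attr, sigma2, zeta=1.0, mass=1.0):
    """Critically damped virtual spring-damper force toward attractor x_attr."""
    k = stiffness_from_covariance(sigma2)
    d = 2.0 * zeta * np.sqrt(k * mass)
    return k * (x_attr - x) - d * xdot

# Same tracking error, different demonstrated consistency:
print(spring_damper_command(x=0.10, xdot=0.0, x_attr=0.12, sigma2=1e-4))  # stiff
print(spring_damper_command(x=0.10, xdot=0.0, x_attr=0.12, sigma2=1e-1))  # compliant
```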

    Stability-Guaranteed Reinforcement Learning for Contact-rich Manipulation

    Reinforcement learning (RL) has had its fair share of success in contact-rich manipulation tasks, but it still lags behind in benefiting from advances in robot control theory such as impedance control and stability guarantees. Recently, the concept of variable impedance control (VIC) was adopted into RL with encouraging results. However, the more important issue of stability remains unaddressed. To clarify the challenge in stable RL, we introduce the term all-the-time-stability, meaning unambiguously that every possible rollout will be stability-certified. Our contribution is a model-free RL method that not only adopts VIC but also achieves all-the-time-stability. Building on a recently proposed stable VIC controller as the policy parameterization, we introduce a novel policy search algorithm that is inspired by the Cross-Entropy Method and inherently guarantees stability. Our experimental studies confirm the feasibility and usefulness of the stability guarantee and feature, to the best of our knowledge, the first successful application of RL with all-the-time-stability on the benchmark problem of peg-in-hole. (Accepted at IEEE Robotics and Automation Letters.)
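    As a rough illustration of a stability-constrained, CEM-inspired policy search, the sketch below keeps every sampled policy inside a simple stability-preserving set (strictly positive, bounded stiffness) before any rollout is executed; this is only a stand-in for the paper's stable-VIC parameterization, and the surrogate cost and bounds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(k):
    """Toy surrogate cost: prefer stiffness near 300 N/m, with a small
    effort penalty (a real system would run a rollout here)."""
    return (k - 300.0) ** 2 / 1e4 + 0.001 * k

def cem(n_iters=30, pop=50, elite_frac=0.2, k_lo=1.0, k_hi=1000.0):
    mu, sigma = 500.0, 300.0
    n_elite = int(pop * elite_frac)
    for _ in range(n_iters):
        samples = rng.normal(mu, sigma, size=pop)
        # Project samples into the stable set (positive, bounded stiffness)
        # BEFORE any rollout, so every executed policy is certified stable.
        samples = np.clip(samples, k_lo, k_hi)
        costs = np.array([rollout_cost(k) for k in samples])
        elites = samples[np.argsort(costs)[:n_elite]]
        mu, sigma = elites.mean(), elites.std() + 1e-6
    return mu

print(f"CEM-selected stiffness: {cem():.1f} N/m")
```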

    Learning Barrier Functions for Constrained Motion Planning with Dynamical Systems

    Stable dynamical systems are a flexible tool to plan robotic motions in real time. In the robotics literature, dynamical system motions are typically planned without considering possible limitations in the robot's workspace. This work presents a novel approach to learn workspace constraints from human demonstrations and to generate motion trajectories for the robot that lie in the constrained workspace. Training data are incrementally clustered into different linear subspaces and used to fit a low-dimensional representation of each subspace. By considering the learned constraint subspaces as zeroing barrier functions, we are able to design a control input that keeps the system trajectory within the learned bounds. This control input is effectively combined with the original system dynamics, preserving any asymptotic properties of the unconstrained system. Simulations and experiments on a real robot show the effectiveness of the proposed approach.
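    The zeroing-barrier-function mechanism can be illustrated with a single half-space constraint: a nominal dynamical-system velocity is minimally corrected so that h(x) = nᵀx − b never becomes negative. The plane, gain, and linear DS below are illustrative assumptions, not the learned subspaces of the paper.

```python
import numpy as np

n = np.array([0.0, 1.0])   # unit inward normal of the constraint plane
b = 0.0                    # offset: enforce h(x) = n @ x - b >= 0 (x[1] >= 0)
alpha = 5.0                # class-K gain on the barrier function

def nominal_ds(x, target=np.array([1.0, -0.5])):
    """Unconstrained linear DS converging to a target below the plane."""
    return -(x - target)

def safe_velocity(x):
    """Closed-form barrier filter for one half-space: enforce
    dh/dt = n @ u >= -alpha * h(x) with a minimal correction along n."""
    u = nominal_ds(x)
    h = n @ x - b
    deficit = -alpha * h - n @ u
    if deficit > 0.0:
        u = u + deficit * n   # smallest correction restoring the condition
    return u

x = np.array([0.5, 0.1])
for _ in range(200):          # Euler integration of the filtered dynamics
    x = x + 0.01 * safe_velocity(x)
print("final state:", x, "  h(x) =", n @ x - b)
```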

    Active Incremental Learning of Robot Movement Primitives

    Robots that can learn over time by interacting with non-technical users must be capable of acquiring new motor skills incrementally. The problem then is deciding when to teach the robot a new skill and when to rely on the robot generalizing its actions. This decision can be made by the robot if it is provided with means to quantify the suitability of its own skills for an unseen task. To this end, we present an algorithm that allows a robot to make active requests to incrementally learn movement primitives. A movement primitive is learned on a trajectory output by a Gaussian Process. The latter is used as a library of demonstrations that can be extrapolated with confidence margins. This combination not only allows the robot to generalize from as little as a single demonstration but, more importantly, to indicate whether such generalization can be executed with confidence. In experiments, a real robot arm indicates to the user which demonstrations should be provided to increase its repertoire of reaching skills. Experiments also show that the robot becomes confident in reaching objects for which demonstrations were never provided, by incrementally learning from neighboring demonstrations.
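    The active-request rule can be sketched as follows: a Gaussian Process maps task parameters (here, a 1-D goal position) to a movement-primitive parameter, and the robot asks for a new demonstration whenever the predictive standard deviation at the queried goal exceeds a threshold. The data, kernel, and threshold are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Library of demonstrations: goal position -> one movement-primitive parameter.
goals = np.array([[0.2], [0.4], [0.8]])
weights = np.array([0.1, 0.3, 0.9])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
gp.fit(goals, weights)

def generalize_or_request(goal, sigma_thresh=0.2):
    """Generalize from the library if confident; otherwise ask for a demo."""
    mu, sigma = gp.predict(np.atleast_2d(goal), return_std=True)
    if sigma[0] > sigma_thresh:
        return None, f"sigma={sigma[0]:.2f} too high: please demonstrate this goal"
    return mu[0], f"sigma={sigma[0]:.2f}: generalizing from neighbors"

for g in (0.3, 0.6, 1.2):   # goals of increasing novelty
    _, msg = generalize_or_request(g)
    print(f"goal={g:.1f} -> {msg}")
```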