
    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely learning force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system be able to deal with force perceptions.

    The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the "what to imitate?" problem. The proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A mutual information analysis is used to select the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task.

    Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance sequential information, uncertainty and constraints. This is the next problem addressed in this thesis. A probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals thanks to its probabilistic formulation, (ii) it exploits the sequential information embedded in the model to manage perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is thus able to robustly encode and reproduce different manipulation tasks.

    The thesis then goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and that it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components form the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing this system is estimated, allowing the robot to shape its compliance. This approach makes it possible to extend the learning paradigm beyond common trajectory following.

    The proposed frameworks are tested in three scenarios, namely (a) a ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results evidence the importance of using force perceptions as well as the usefulness and strengths of the proposed methods.
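    The mutual-information-based perception selection described above can be sketched roughly as follows, assuming scikit-learn is available. The function name, the averaging over output dimensions and the threshold are illustrative assumptions, not the dissertation's actual implementation.

```python
# Minimal sketch of mutual-information-based input selection, assuming
# demonstrations stored as arrays: X holds candidate sensory channels
# (columns), Y holds the robot actions they should explain.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_relevant_inputs(X, Y, threshold=0.1):
    """Keep sensory channels whose average MI with the outputs exceeds a threshold."""
    mi = np.zeros(X.shape[1])
    # Average the MI of each input across all output dimensions.
    for j in range(Y.shape[1]):
        mi += mutual_info_regression(X, Y[:, j])
    mi /= Y.shape[1]
    return np.where(mi > threshold)[0], mi

# Hypothetical usage with recorded force/torque and pose channels:
# relevant_idx, scores = select_relevant_inputs(sensor_data, robot_actions)
```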
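    The compliance-shaping idea, where the stiffness of the virtual springs governing the motion is estimated from force perceptions, can be illustrated with a minimal variable-impedance sketch. The scalar least-squares estimate and the critical-damping choice below are assumptions for illustration, not the dissertation's model.

```python
# Minimal sketch of a variable-stiffness attractor: the robot is pulled
# toward a target x_d by a spring whose stiffness K is estimated from
# observed forces, f ~= K * (x_d - x), via least squares (illustrative).
import numpy as np

def estimate_stiffness(positions, target, forces):
    """Least-squares scalar stiffness from displacement/force pairs."""
    disp = target - positions              # (T, D) displacements
    return np.sum(disp * forces) / (np.sum(disp * disp) + 1e-9)

def impedance_accel(x, dx, target, K, D=None):
    """Spring-damper acceleration; critically damped if D is not given."""
    if D is None:
        D = 2.0 * np.sqrt(K)               # critical-damping assumption
    return K * (target - x) - D * dx
```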

    Geometry-aware Manipulability Learning, Tracking and Transfer

    Body posture influences the performance of humans and robots in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor to analyze, control and design the robot dexterity as a function of the articulatory joint configuration. This descriptor can be designed according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel manipulability transfer framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. Learning is coupled with a geometry-aware tracking controller allowing robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach.
    Comment: Accepted for publication in the Intl. Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices.
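    As a concrete reference point, the velocity manipulability ellipsoid at a joint configuration is commonly obtained from the Jacobian as M = J Jᵀ. The sketch below uses an assumed planar 2-link arm for illustration; it is not the paper's code (the authors' implementation lives at the GitHub link above).

```python
# Minimal sketch: velocity manipulability ellipsoid M = J J^T for an
# assumed planar 2-link arm (link lengths l1, l2; joint angles q1, q2).
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    q1, q2 = q
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

def manipulability_ellipsoid(q):
    J = jacobian_2link(q)
    M = J @ J.T                      # symmetric positive (semi-)definite
    eigvals, eigvecs = np.linalg.eigh(M)
    # Ellipsoid axes: directions are eigvecs, lengths are sqrt(eigvals).
    return M, np.sqrt(np.maximum(eigvals, 0.0)), eigvecs
```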

    Learning Task Priorities from Demonstrations

    Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning-from-demonstration perspective, by extending the Task-Parameterized Gaussian Mixture Model (TP-GMM) to Jacobian and null-space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in two different humanoid tasks requiring the learning of priorities and in a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel.
    Comment: Accepted for publication at the IEEE Transactions on Robotics.
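    The null-space structure this work builds on is the classical strict prioritization q̇ = J₁⁺ẋ₁ + N₁J₂⁺ẋ₂ with projector N₁ = I − J₁⁺J₁. The following is a generic sketch of that textbook scheme, not the paper's learned TP-GMM model.

```python
# Minimal sketch of strict two-task prioritization with a null-space
# projector: the secondary task acts only where it cannot disturb the
# primary one. J1, J2: task Jacobians; dx1, dx2: desired task velocities.
import numpy as np

def prioritized_joint_velocities(J1, dx1, J2, dx2):
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector of task 1
    dq_primary = J1_pinv @ dx1
    # Secondary task resolved within the remaining null space of task 1.
    dq_secondary = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq_primary)
    return dq_primary + N1 @ dq_secondary
```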

    Robot learning of container-emptying skills through haptic demonstration

    Locally weighted learning algorithms are suitable strategies for trajectory learning and skill acquisition in the context of programming by demonstration. Input streams other than visual information, as used in most applications to date, reveal themselves as quite useful in trajectory learning experiments where visual sources are not available. In this work we have used force/torque feedback through a haptic device for teaching a teleoperated robot to empty a rigid container. Structure vibrations and container inertia appeared to considerably disrupt the sensing process, so a filtering algorithm had to be devised. Then, the memory-based LWPLS and the non-memory-based LWPR algorithms …
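    A filtering step like the one mentioned, suppressing structure vibrations in the force/torque stream, is often implemented as a zero-phase low-pass Butterworth filter. The sketch below using SciPy is an assumed, generic choice; the abstract does not specify the filter actually devised, and the sampling rate and cutoff are illustrative.

```python
# Minimal sketch: zero-phase low-pass filtering of one force/torque channel
# to attenuate structure vibrations (cutoff and order are assumptions).
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_force(signal, fs=500.0, cutoff=10.0, order=4):
    """fs: sampling rate in Hz; cutoff: vibration-rejection threshold in Hz."""
    b, a = butter(order, cutoff / (0.5 * fs))   # normalized cutoff frequency
    return filtfilt(b, a, signal)               # forward-backward: no time lag
```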

    Analysis and Transfer of Human Movement Manipulability in Industry-like Activities

    Humans exhibit outstanding learning, planning and adaptation capabilities while performing different types of industrial tasks. Given some knowledge about the task requirements, humans are able to plan their limb motion in anticipation of the execution of specific skills. For example, when an operator needs to drill a hole in a surface, the posture of her limbs varies to guarantee a stable configuration that is compatible with the drilling task specifications, e.g. exerting a force orthogonal to the surface. Therefore, we are interested in analyzing human arm motion patterns in industrial activities. To do so, we build our analysis on the so-called manipulability ellipsoid, which captures a posture-dependent ability to perform motion and exert forces along different task directions. Through a thorough analysis of the human movement manipulability, we found that the ellipsoid shape is task-dependent and often provides more information about the human motion than classical manipulability indices. Moreover, we show how manipulability patterns can be transferred to robots by learning a probabilistic model and employing a manipulability tracking controller that acts on the task planning and execution according to predefined control hierarchies.
    Comment: Accepted for publication in IROS'20. Website: https://sites.google.com/view/manipulability/home. Video: https://youtu.be/q0GZwvwW9A
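    The classical manipulability indices the ellipsoid shape is contrasted with include Yoshikawa's measure w(q) = √det(J Jᵀ), a single scalar that discards directional information. A minimal sketch of that contrast, with an anisotropy ratio as an assumed shape proxy:

```python
# Minimal sketch: Yoshikawa's manipulability index vs. ellipsoid anisotropy.
# The index collapses J J^T to one scalar (volume-like); the singular-value
# ratio below retains a hint of the directional (shape) information instead.
import numpy as np

def yoshikawa_index(J):
    return np.sqrt(np.linalg.det(J @ J.T))

def ellipsoid_anisotropy(J):
    """Ratio of largest to smallest ellipsoid axis (shape, not volume)."""
    svals = np.linalg.svd(J, compute_uv=False)   # descending singular values
    return svals[0] / max(svals[-1], 1e-12)
```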

    Sharpening haptic inputs for teaching a manipulation skill to a robot

    8 pages. Communication presented at the 1st International Conference on Applied Bionics and Biomechanics, held in Venice (Italy) in October 2010.
    Gaussian-mixture-based learning algorithms are suitable strategies for trajectory learning and skill acquisition in the context of programming by demonstration (PbD). Input streams other than visual information, as used in most applications to date, reveal themselves as quite useful in trajectory learning experiments where visual sources are not available. In this work we have used force/torque feedback through a haptic device for teaching a teleoperated robot to empty a rigid container. Structure vibrations and container inertia appeared to considerably disrupt the sensing process, so a filtering algorithm had to be devised. Moreover, some input variables seemed much more relevant to the particular task to be learned than others, which led us to analyze the training data in order to select the relevant features through principal component analysis (PCA) and a mutual information (MI) criterion. Then, a batch version of GMM/GMR [1], [2] was implemented using different training datasets (the original data, and data pre-processed through PCA and MI). Tests in which the teacher was instructed to follow a strategy, compared to others in which he was not, led to useful conclusions that permit devising the next research stages.
    This work has been partially supported by the European projects PACO-PLUS (IST-4-27657) and GARNICS (FP7-247947), the Spanish project Multimodal Interaction in Pattern Recognition and Computer Vision (MIPRCV) (Consolider Ingenio 2010 project CSD2007-00018) and the Robotics group of the Generalitat de Catalunya. L. Rozo was supported by the CSIC under a JAE-PREDOC scholarship.
    Peer reviewed
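    The batch GMM/GMR pipeline mentioned above fits a joint Gaussian mixture over stacked inputs and outputs, then regresses by conditioning on the input. The following NumPy sketch is a generic reconstruction of Gaussian mixture regression, not the authors' implementation [1], [2].

```python
# Minimal sketch of Gaussian Mixture Regression: fit a GMM on joint
# [x, y] rows, then predict E[y | x] by conditioning each component on
# the input and blending with the posterior responsibilities.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr(gmm, x, d_in):
    """Predict E[y | x] from a GaussianMixture fit on stacked [x, y] rows."""
    idx_in = np.arange(d_in)
    idx_out = np.arange(d_in, gmm.means_.shape[1])
    # Responsibilities of each component for the query input x.
    h = np.array([
        w * multivariate_normal.pdf(x, m[idx_in], C[np.ix_(idx_in, idx_in)])
        for w, m, C in zip(gmm.weights_, gmm.means_, gmm.covariances_)
    ])
    h /= h.sum()
    y = 0.0
    for hk, m, C in zip(h, gmm.means_, gmm.covariances_):
        Cxx = C[np.ix_(idx_in, idx_in)]
        Cyx = C[np.ix_(idx_out, idx_in)]
        # Component-wise conditional mean, blended by responsibility.
        y = y + hk * (m[idx_out] + Cyx @ np.linalg.solve(Cxx, x - m[idx_in]))
    return y

# Hypothetical usage:
# gmm = GaussianMixture(5, covariance_type="full").fit(np.hstack([X, Y]))
# y_pred = gmr(gmm, x_query, d_in=X.shape[1])
```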

    Learning Riemannian Stable Dynamical Systems via Diffeomorphisms

    Dexterous and autonomous robots should be capable of executing elaborate dynamical motions skillfully. Learning techniques may be leveraged to build models of such dynamic skills. To accomplish this, the learning model needs to encode a stable vector field that resembles the desired motion dynamics. This is challenging, as the robot state does not evolve on a Euclidean space, and therefore the stability guarantees and vector field encoding need to account for the geometry arising from, for example, the orientation representation. To tackle this problem, we propose learning Riemannian stable dynamical systems (RSDS) from demonstrations, allowing us to account for different geometric constraints resulting from the dynamical-system state representation. Our approach provides Lyapunov stability guarantees on Riemannian manifolds that are enforced on the desired motion dynamics via diffeomorphisms built on neural manifold ODEs. We show that our Riemannian approach makes it possible to learn stable dynamical systems displaying complicated vector fields on both illustrative examples and real-world manipulation tasks, where Euclidean approximations fail.
    Comment: To appear at CoRL 2022.
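    The core trick, stability pulled back through a diffeomorphism, can be shown in a Euclidean toy form: if ẏ = −y is stable and y = φ(x) is a diffeomorphism, then ẋ = Jφ(x)⁻¹(−φ(x)) is stable with equilibrium φ⁻¹(0). The sketch below uses a hand-picked smooth map in the plane, instead of the learned neural manifold ODEs on a Riemannian manifold that the paper actually employs.

```python
# Toy sketch: pulling the stable linear system dy/dt = -y back through a
# fixed diffeomorphism y = phi(x), yielding a stable but nonlinear vector
# field in x-space. Here phi is hand-picked and Euclidean (illustrative).
import numpy as np

def phi(x):
    # Smooth invertible map: triangular structure, unit-determinant Jacobian.
    return np.array([x[0] + 0.3 * np.tanh(x[1]), x[1]])

def jac_phi(x):
    s = 1.0 - np.tanh(x[1]) ** 2
    return np.array([[1.0, 0.3 * s], [0.0, 1.0]])

def vector_field(x):
    # dx/dt = Jphi(x)^{-1} (-phi(x)); stability of dy/dt = -y is preserved.
    return np.linalg.solve(jac_phi(x), -phi(x))

# Forward-Euler rollout converging to the equilibrium phi^{-1}(0) = 0.
x = np.array([2.0, -1.5])
for _ in range(2000):
    x = x + 0.01 * vector_field(x)
```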

    Robot learning from demonstration of force-based tasks with multiple solution trajectories

    A learning framework with a bidirectional communication channel is proposed, where a human performs several demonstrations of a task using a haptic device (providing him/her with force-torque feedback) while a robot captures these executions using only its force-based perceptive system. Our work departs from the usual approaches to learning by demonstration in that the robot has to execute the task blindly, relying only on force-torque perceptions, and, more essentially, we address goal-driven manipulation tasks with multiple solution trajectories, whereas most works tackle tasks that can be learned by just finding a generalization at the trajectory level. To cope with these multiple-solution tasks, in our framework demonstrations are represented by means of a Hidden Markov Model (HMM), and the robot reproduction of the task is performed using a modified version of Gaussian Mixture Regression that incorporates temporal information (GMRa) through the forward variable of the HMM. Also, we exploit the haptic device as a teaching and communication tool in a human-robot interaction context, as an alternative to kinesthetic teaching systems. Results show that the robot is able to learn a container-emptying task relying only on force-based perceptions and to achieve the goal from several non-trained initial conditions.
    Postprint (author's final draft)
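    The modified regression (GMRa) weights each component's conditional estimate by the HMM forward variable instead of purely spatial responsibilities, injecting temporal context into the reproduction. Below is a minimal NumPy sketch of the forward recursion that produces those weights, assuming Gaussian emissions with prior π and transition matrix A; it is a generic reconstruction, not the authors' code.

```python
# Minimal sketch: HMM forward variable alpha_t(k), usable as time-aware
# mixing weights in Gaussian mixture regression (GMRa-style). Assumes
# Gaussian emissions with per-state means/covariances (illustrative).
import numpy as np
from scipy.stats import multivariate_normal

def forward_weights(obs, pi, A, means, covs):
    """obs: (T, D) observations; pi: (K,) prior; A: (K, K) transitions."""
    T, K = obs.shape[0], pi.shape[0]
    alpha = np.zeros((T, K))
    for k in range(K):
        alpha[0, k] = pi[k] * multivariate_normal.pdf(obs[0], means[k], covs[k])
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        for k in range(K):
            alpha[t, k] = (alpha[t - 1] @ A[:, k]) * \
                multivariate_normal.pdf(obs[t], means[k], covs[k])
        alpha[t] /= alpha[t].sum()        # normalize: mixing weights per step
    return alpha                          # alpha[t] replaces GMR's h_k weights
```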