94 research outputs found

    Real-time temporal adaptation of dynamic movement primitives for moving targets

    This work extends the standard dynamic movement primitives (DMP) framework to adapt to real-time changes in the task execution time while preserving the style characteristics of the motion. We propose an alternative polynomial canonical system and an adaptive law that allow a higher degree of control over the execution time. The extended framework has potential applications in robotic manipulation tasks involving moving objects, which demand real-time control over the task execution time. Existing methods require a computationally expensive forward simulation of the DMP at every time step, which makes them undesirable for integration in real-time control systems. To address this deficiency, the behaviour of the canonical system is adapted according to changes in the desired execution time of the task being performed. An alternative polynomial canonical system is proposed to provide increased real-time control over the temporal scaling of the DMP system compared to the standard exponential canonical system. The developed method was evaluated on scenarios of tracking a moving target in which the desired tracking time is varied in real time. The results show that the extended version of DMP provides better control over temporal scaling during the execution of the task. We have evaluated our approach on a UR5 robotic manipulator tracking a moving object.
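
The difference between the two canonical systems discussed above can be sketched numerically. The concrete equations below are illustrative assumptions (the paper's exact polynomial form and gains are not given here): the standard exponential phase decays only asymptotically, while a polynomial phase can be made to reach zero exactly at the desired execution time, which is what makes direct temporal control easier.

```python
import numpy as np

def exponential_canonical(alpha=4.0, tau=1.0, dt=0.01, T=1.0):
    """Standard DMP phase: tau * x_dot = -alpha * x, Euler-integrated."""
    x, xs = 1.0, []
    for _ in range(int(T / dt)):
        xs.append(x)
        x += (-alpha * x / tau) * dt
    return np.array(xs)

def polynomial_canonical(tau=1.0, dt=0.01, T=1.0):
    """Illustrative polynomial phase that reaches exactly 0 at t = T:
    x(t) = (1 - t/T)^2, so remaining duration maps directly to phase."""
    t = np.arange(0, T, dt)
    return (1.0 - t / T) ** 2

x_exp = exponential_canonical()
x_poly = polynomial_canonical()
# Both start at phase 1; only the polynomial phase is (near) zero at t = T.
```

Because the polynomial phase has a known terminal time, rescheduling the execution time online reduces to rescaling `T`, without the forward simulation the abstract mentions.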

    Overcoming some drawbacks of Dynamic Movement Primitives

    Dynamic Movement Primitives (DMPs) are a framework for learning a point-to-point trajectory from a demonstration. Despite being widely used, DMPs still present some shortcomings that may limit their usage in real robotic applications. Firstly, at the state of the art, mainly Gaussian basis functions have been used to perform function approximation. Secondly, the adaptation of the trajectory generated by the DMP depends heavily on the choice of hyperparameters and on the new desired goal position. Lastly, DMPs are a framework for ‘one-shot learning’, meaning that they are constrained to learn from a single demonstration. In this work, we present and motivate a new set of basis functions to be used in the learning process, showing their ability to accurately approximate functions while having both analytical and numerical advantages w.r.t. Gaussian basis functions. Then, we show how to use the invariance of DMPs w.r.t. affine transformations to make the generalization of the trajectory robust against both the choice of hyperparameters and the new goal position, performing both synthetic tests and experiments with real robots to demonstrate this increased robustness. Finally, we propose an algorithm to extract a common behavior from multiple observations, validating it both on a synthetic dataset and on a dataset obtained by performing a task on a real robot.
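
For context on the baseline the abstract compares against, here is a minimal sketch of the standard Gaussian-basis approximation of a DMP forcing term, with each weight fitted independently by locally weighted regression. The basis count, widths, and toy forcing profile are illustrative assumptions, not the paper's new basis set.

```python
import numpy as np

def gaussian_basis(x, centers, widths):
    # One Gaussian per column, evaluated at the phase values x.
    return np.exp(-widths * (x[:, None] - centers[None, :]) ** 2)

def fit_forcing_term(x, f_target, n_basis=20):
    """Locally weighted regression: one scalar weight per basis function,
    each fitted independently as in the standard DMP literature."""
    centers = np.linspace(x.min(), x.max(), n_basis)
    widths = np.full(n_basis, (n_basis / (x.max() - x.min())) ** 2)
    psi = gaussian_basis(x, centers, widths)
    # w_i = sum(psi_i * x * f) / sum(psi_i * x^2)  -- phase-weighted LWR
    w = (psi.T @ (x * f_target)) / (psi.T @ (x * x) + 1e-12)
    f_approx = (psi * x[:, None]) @ w / (psi.sum(axis=1) + 1e-12)
    return w, f_approx

x = np.linspace(1.0, 0.01, 200)        # decaying phase variable
f = np.sin(2 * np.pi * (1 - x))        # toy forcing profile
w, f_hat = fit_forcing_term(x, f)
err = np.max(np.abs(f - f_hat))
```

The independence of the per-basis fits is what makes LWR cheap; the paper's alternative basis functions target the analytical and numerical weaknesses of this Gaussian scheme.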

    Autonomous task planning and situation awareness in robotic surgery

    The use of robots in minimally invasive surgery has improved the quality of standard surgical procedures. So far, only the automation of simple surgical actions has been investigated by researchers, while the execution of structured tasks requiring reasoning about the environment and a choice among multiple actions is still managed by human surgeons. In this paper, we propose a framework to implement surgical task automation. The framework consists of a task-level reasoning module based on answer set programming, a low-level motion-planning module based on dynamic movement primitives, and a situation-awareness module. The logic-based reasoning module generates explainable plans and is able to recover from failure conditions, which are identified and explained by the situation-awareness module interfacing with a human supervisor, for enhanced safety. Dynamic movement primitives make it possible to replicate the dexterity of surgeons and to adapt to obstacles and changes in the environment. The framework is validated on different versions of the standard surgical training peg-and-ring task. (Comment: submitted to the IROS 2020 conference.)
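
The three-module loop described above (symbolic planner, motion execution, situation awareness) can be sketched as a replanning cycle. All names below are hypothetical stand-ins; the actual framework uses answer set programming for planning and DMPs for motion generation.

```python
def plan(task):
    """Stand-in for the ASP reasoning module: returns an action sequence."""
    return ["reach_peg", "grasp", "transfer", "place_on_ring"]

def execute(action, fail_on=None):
    """Stand-in for the DMP motion module; reports success or failure."""
    return action != fail_on

def run_task(fail_on=None, max_replans=3):
    log = []
    for _ in range(max_replans):
        ok = True
        for a in plan("peg_and_ring"):
            if execute(a, fail_on):
                log.append((a, "ok"))
            else:
                log.append((a, "failed"))  # situation-awareness module flags it
                fail_on = None             # assume the supervisor resolves the cause
                ok = False
                break                      # back to the reasoning module to replan
        if ok:
            return log
    return log

log = run_task(fail_on="grasp")  # inject one grasp failure, then recover
```

The point of the sketch is the control flow: failures detected during execution feed back into replanning rather than aborting the task.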

    Learning Generalization and Adaptation of Movement Primitives for Humanoid Robots


    Learning of Surgical Gestures for Robotic Minimally Invasive Surgery Using Dynamic Movement Primitives and Latent Variable Models

    Full and partial automation of robotic minimally invasive surgery holds significant promise to improve patient treatment, reduce recovery time, and reduce the fatigue of surgeons. However, to accomplish this ambitious goal, a mathematical model of the intervention is needed. In this thesis, we propose to use Dynamic Movement Primitives (DMPs) to encode the gestures a surgeon has to perform to achieve a task. DMPs make it possible to learn a trajectory, thus imitating the dexterity of the surgeon, and to execute it while generalizing it both spatially (to new starting and goal positions) and temporally (to different speeds of execution). Moreover, they have other desirable properties that make them well suited for surgical applications, such as online adaptability, robustness to perturbations, and the possibility of implementing obstacle avoidance. We propose various modifications to improve the state of the art of the framework, as well as novel methods to handle obstacles. Moreover, we validate the use of DMPs to model gestures by automating a surgery-related task with DMPs as the low-level trajectory generator. In the second part of the thesis, we introduce the problem of unsupervised segmentation of task executions into gestures. We introduce latent variable models to tackle this problem, proposing further developments to combine such models with DMP theory. We review the Auto-Regressive Hidden Markov Model (AR-HMM) and test it on surgery-related datasets. Then, we propose a generalization of the AR-HMM to general, non-linear dynamics, showing that this results in a more accurate segmentation with less severe over-segmentation. Finally, we propose a further generalization of the AR-HMM that aims at integrating DMP-like dynamics into the latent variable model.
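
To illustrate the AR-HMM mentioned above: each hidden state carries its own autoregressive dynamics, and segmentation amounts to inferring which dynamics generated each step. The toy two-state model and the naive per-step decoder below are illustrative simplifications, not the thesis's inference procedure (which would use full HMM inference such as Viterbi decoding).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "gestures", each a scalar AR(1) model: y_t = a*y_{t-1} + b + noise
states = [(0.95, 0.0), (0.80, 1.0)]
trans = np.array([[0.98, 0.02],
                  [0.02, 0.98]])   # sticky transitions -> few gesture switches

def sample_arhmm(T=300, sigma=0.05):
    z, y = 0, 0.0
    zs, ys = [], []
    for _ in range(T):
        a, b = states[z]
        y = a * y + b + sigma * rng.normal()   # emit with current state's dynamics
        zs.append(z); ys.append(y)
        z = rng.choice(2, p=trans[z])          # then transition
    return np.array(zs), np.array(ys)

zs, ys = sample_arhmm()
# Naive decoder: per step, pick the state whose AR prediction fits best.
preds = np.stack([a * ys[:-1] + b for a, b in states])   # shape (2, T-1)
z_hat = np.argmin((preds - ys[1:]) ** 2, axis=0)
accuracy = np.mean(z_hat == zs[1:])
```

Even this per-step rule recovers most of the segmentation on well-separated dynamics; the over-segmentation the abstract mentions arises because such decoders ignore the sticky transition prior.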

    Imitation learning with dynamic movement primitives

    Scientists have worked for decades on making robots act like human beings, and imitating human motion has become a popular research topic in recent years. However, there are infinitely many trajectories between two points in three-dimensional space; imitation learning, an approach to teaching from demonstrations, is therefore used to learn human motion. Dynamic Movement Primitives (DMPs) are a framework for learning trajectories from demonstrations; given rotational movement data, DMPs can also learn orientations. The simulation is implemented on the Baxter robot, which has seven degrees of freedom (DOF) and a pre-programmed Inverse Kinematics (IK) solver, so the robot system can be controlled as long as both translational and rotational data are provided. Using DMPs, complex motor movements can be regenerated in a task-oriented manner without parameter adjustment or concerns about instability. In this work, discrete DMPs form the framework of the whole system. The sample task is to move objects into a target area using the Baxter arm-hand system. For more effective learning, a weighted learning algorithm called Locally Weighted Regression (LWR) is implemented. The weights of the basis functions are first trained from the demonstration using the DMP framework together with LWR. The learned weights, the desired initial and goal states, and the time-related parameters are then substituted into a DMP framework, and the translational and rotational data for a new task-specific trajectory are generated. The results are visualized in the Virtual Robot Experimentation Platform (VREP). To accomplish the tasks better, an independent DMP is used for each translation and rotation axis. With relatively low computational cost, motions of relatively high complexity can be achieved, and the task-oriented movements remain stable under spatial scaling and transformation as well as time scaling. Twelve videos are included in the supplementary materials of this thesis; they mainly show the VREP simulation results for the Baxter robot. Details can be found in the appendix.
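
The training-and-reproduction loop described above can be sketched for a single degree of freedom: fit basis-function weights to a demonstration with LWR, then roll the DMP out to a new goal. The gains, basis parameters, and min-jerk demonstration below are illustrative choices, not the thesis's exact settings.

```python
import numpy as np

def learn_dmp(y_demo, dt=0.01, alpha=25.0, beta=6.25, n_basis=30):
    """Fit a 1-D discrete DMP to a demonstration (illustrative gains)."""
    T = len(y_demo)
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y_demo[0], y_demo[-1]
    x = np.exp(-4.0 * np.arange(T) * dt)             # exponential phase
    f_target = ydd - alpha * (beta * (g - y_demo) - yd)
    c = np.exp(-4.0 * np.linspace(0, 1, n_basis))    # centers in phase space
    h = n_basis ** 1.5 / c                           # narrower bases at small phase
    psi = np.exp(-h * (x[:, None] - c) ** 2)
    w = (psi.T @ (x * f_target)) / (psi.T @ (x * x) + 1e-12)   # LWR per basis
    return dict(w=w, c=c, h=h, alpha=alpha, beta=beta, y0=y0, dt=dt, T=T)

def rollout(dmp, g_new):
    """Integrate the transformation system toward a (possibly new) goal."""
    y, yd, x, dt = dmp["y0"], 0.0, 1.0, dmp["dt"]
    out = []
    for _ in range(dmp["T"]):
        psi = np.exp(-dmp["h"] * (x - dmp["c"]) ** 2)
        f = (psi @ dmp["w"]) * x / (psi.sum() + 1e-12)
        ydd = dmp["alpha"] * (dmp["beta"] * (g_new - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -4.0 * x * dt
        out.append(y)
    return np.array(out)

t = np.linspace(0, 1, 100)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5       # min-jerk trajectory from 0 to 1
dmp = learn_dmp(demo)
y_new = rollout(dmp, g_new=2.0)               # same movement style, new goal
```

Running one such DMP per translation and rotation axis, as the thesis does, keeps each fit a cheap one-dimensional regression.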

    Adaptive biped locomotion from a single demonstration using movement primitives

    Doctoral thesis in Electrical Engineering. This work addresses the problem of learning to imitate human locomotion through low-level trajectories encoded with motion primitives and generalizing them to new situations from a single demonstration. The main objectives of this work are twofold. The first is to analyze, extract and encode human demonstrations taken from motion-capture data in order to model biped locomotion tasks. Transferring motion skills from humans to robots, however, is not limited to simple reproduction; it requires the ability to adapt to new situations and to deal with unexpected disturbances. Therefore, the second objective is to develop and evaluate a control framework for action shaping such that the single demonstration can be modulated to varying situations, taking into account the dynamics of the robot and its environment. The idea behind the approach is to address the problem of generalization from a single demonstration by combining two basic structures. The first is a pattern-generator system consisting of movement primitives learned and modelled by dynamical systems (DS). This encoding approach possesses desirable properties that make it well suited for trajectory generation, namely the possibility of changing parameters online, such as the amplitude and frequency of the limit cycle, and intrinsic robustness against small perturbations. The second structure, which is embedded in the first, consists of coupled phase oscillators that organize actions into functional coordinated units. Changing contact conditions and the associated impacts with the ground lead to models with multiple phases. Instead of forcing the robot's motion into a predefined fixed timing, the proposed pattern generator exploits transitions between phases that emerge from the interaction of the robot with the environment, triggered by sensor-driven events. The proposed approach is tested in a dynamics simulation framework, and several experiments are conducted to validate the methods and to assess the performance of a humanoid robot.
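
The coupled phase oscillators described above can be illustrated with a minimal two-oscillator example: with an anti-phase coupling term, the two phases settle into a half-cycle offset, as left and right legs do in walking. The coupling form and gains are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def simulate(omega=2 * np.pi, k=5.0, dt=0.001, T=5.0):
    """Two phase oscillators coupled toward anti-phase:
    phi_i' = omega + k * sin(phi_j - phi_i - pi)."""
    phi = np.array([0.0, 0.3])          # arbitrary initial phases
    for _ in range(int(T / dt)):
        d01 = np.sin(phi[1] - phi[0] - np.pi)   # pull of oscillator 1 on 0
        d10 = np.sin(phi[0] - phi[1] - np.pi)   # pull of oscillator 0 on 1
        phi += (omega + k * np.array([d01, d10])) * dt
    return phi

phi = simulate()
offset = (phi[1] - phi[0]) % (2 * np.pi)   # converges to pi (anti-phase)
```

In the thesis's framework, sensor events (e.g. ground contact) would additionally reset or switch phases, which is what produces the multi-phase models described above.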

    Dynamic movement primitives and reinforcement learning for adapting a learned skill

    Traditionally, robots have been preprogrammed to execute specific tasks. This approach works well in industrial settings where robots have to execute highly accurate movements, such as when welding. However, preprogramming a robot is also expensive, error prone and time consuming, because every feature of the task has to be considered. In some cases, where a robot has to execute a complex task such as playing the ball-in-a-cup game, preprogramming may even be impossible because some features of the task are unknown. With all this in mind, this thesis examines the possibility of combining a modern learning framework, known as Learning from Demonstrations (LfD), to first teach a robot how to play the ball-in-a-cup game by demonstrating the movement, and then having the robot improve this skill by itself with subsequent Reinforcement Learning (RL). The skill the robot has to learn is demonstrated with kinesthetic teaching, modelled as a dynamic movement primitive, and subsequently improved with the RL algorithm Policy Learning by Weighted Exploration with the Returns. Experiments performed on the industrial robot KUKA LWR4+ showed that robots are capable of successfully learning a complex skill such as playing the ball-in-a-cup game.
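
The core idea of Policy Learning by Weighted Exploration with the Returns (PoWER) can be sketched on a toy objective: perturb the policy parameters, then update by reward-weighted averaging of the perturbations. The objective, population size, and keeping only the best recent rollouts are illustrative simplifications of the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def reward(w):
    """Toy return: peaks at a hypothetical optimal parameter vector w_star."""
    w_star = np.array([0.5, -1.2, 2.0])
    return np.exp(-np.sum((w - w_star) ** 2))

w = np.zeros(3)       # policy parameters (e.g. DMP basis weights)
sigma = 0.5           # exploration noise
for _ in range(100):
    eps = sigma * rng.normal(size=(10, 3))            # 10 exploratory rollouts
    R = np.array([reward(w + e) for e in eps])
    top = np.argsort(R)[-5:]                          # keep the 5 best rollouts
    # PoWER-style update: reward-weighted mean of the exploration noise.
    w = w + (R[top, None] * eps[top]).sum(0) / (R[top].sum() + 1e-12)

final_return = reward(w)
```

Because the update is a convex combination of tried perturbations, it never leaves the explored region, which is what makes this family of methods safe enough to run on a physical robot.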

    An Affordable Upper-Limb Exoskeleton Concept for Rehabilitation Applications

    In recent decades, many researchers have focused on the design and development of exoskeletons. Several strategies have been proposed to develop increasingly efficient and biomimetic mechanisms. However, existing exoskeletons tend to be expensive and available to only a few people. This paper introduces a new gravity-balanced upper-limb exoskeleton suited for rehabilitation applications and designed with the main objective of reducing the cost of components and materials. Regarding mechanics, the proposed design significantly reduces the motor torque requirements, because high cost is usually associated with high-torque actuation. Regarding electronics, we exploit the microprocessor peripherals to obtain parallel, real-time execution of communication and control tasks without relying on expensive RTOSs. Regarding sensing, we avoid the use of expensive force sensors. Advanced control and rehabilitation features are implemented, and an intuitive user interface is developed. To experimentally validate the functionality of the proposed exoskeleton, a rehabilitation exercise in the form of a pick-and-place task is considered. Experimentally, peak torques are reduced by 89% for the shoulder and by 84% for the elbow.
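
To make the torque-reduction idea concrete: the static gravity torques of a planar two-link arm are what a gravity-balanced design offsets mechanically (e.g. with springs or counterweights), so the motors only need to supply the much smaller dynamic torques. All masses and lengths below are illustrative values, not the paper's parameters.

```python
import numpy as np

# Illustrative parameters for a planar two-link arm (not the paper's values)
m1, m2 = 2.0, 1.5          # link masses [kg]
l1 = 0.3                   # upper-link length [m]
r1, r2 = 0.15, 0.12        # distances from joints to link centers of mass [m]
g = 9.81                   # gravitational acceleration [m/s^2]

def gravity_torques(q1, q2):
    """Static joint torques needed to hold the arm against gravity
    (joint angles measured from the vertical)."""
    tau2 = m2 * g * r2 * np.sin(q1 + q2)                    # elbow
    tau1 = (m1 * r1 + m2 * l1) * g * np.sin(q1) + tau2      # shoulder
    return tau1, tau2

# Worst case: arm fully horizontal. These are the torques balancing removes.
tau1, tau2 = gravity_torques(np.pi / 2, 0.0)
```

With these (hypothetical) parameters the shoulder must hold roughly five times the elbow torque when unbalanced, which is why balancing yields the largest savings at the proximal joint.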

    Imitation Learning of Motion Coordination in Robots: A Dynamical System Approach

    The ease with which humans coordinate all their limbs is fascinating. Such simplicity is the result of a complex process of motor coordination, i.e. the ability to resolve the biomechanical redundancy in an efficient and repeatable manner. Coordination enables a wide variety of everyday human activities, from filling a glass with water to pair figure skating. Therefore, it is highly desirable to endow robots with similar skills. Despite the apparent diversity of coordinated motions, all of them share a crucial similarity: these motions are dictated by underlying constraints. The constraints shape the formation of the coordination patterns between the different degrees of freedom. Coordination constraints may take a spatio-temporal form, for instance during bimanual object reaching or while catching a ball on the fly. They may also relate to the dynamics of the task, for instance when one applies a specific force profile to carry a load. In this thesis, we develop a framework for teaching coordination skills to robots. Coordination may take different forms; here, we focus on teaching a robot intra-limb and bimanual coordination, as well as coordination with a human during physical collaborative tasks. We use tools from well-established domains of Bayesian semiparametric learning (Gaussian Mixture Models and Regression, Hidden Markov Models), nonlinear dynamics, and adaptive control. We take a biologically inspired approach to robot control. Specifically, we adopt an imitation learning perspective on skill transfer, which offers a seamless and intuitive way of capturing the constraints contained in natural human movements. As the robot is taught from motion data provided by a human teacher, we exploit evidence from human motor control that the temporal evolution of human motions may be described by dynamical systems. Throughout this thesis, we demonstrate that the dynamical system view on movement formation facilitates coordination control in robots.
We explain how our framework for teaching coordination to a robot is built up, starting from intra-limb coordination and control, moving to bimanual coordination, and finally to physical interaction with a human. The dissertation opens with the discussion of learning discrete task-level coordination patterns, such as spatio-temporal constraints emerging between the two arms in bimanual manipulation tasks. The encoding of bimanual constraints occurs at the task level and proceeds through a discretization of the task as sequences of bimanual constraints. Once the constraints are learned, the robot utilizes them to couple the two dynamical systems that generate kinematic trajectories for the hands. Explicit coupling of the dynamical systems ensures accurate reproduction of the learned constraints, and proves to be crucial for successful accomplishment of the task. In the second part of this thesis, we consider learning one-arm control policies. We present an approach to extracting non-linear autonomous dynamical systems from kinematic data of arbitrary point-to-point motions. The proposed method aims to tackle the fundamental questions of learning robot coordination: (i) how to infer a motion representation that captures a multivariate coordination pattern between degrees of freedom and that generalizes this pattern to unseen contexts; (ii) whether the policy learned directly from demonstrations can provide robustness against spatial and temporal perturbations. Finally, we demonstrate that the developed dynamical system approach to coordination may go beyond kinematic motion learning. We consider physical interactions between a robot and a human in situations where they jointly perform manipulation tasks; in particular, the problem of collaborative carrying and positioning of a load. We extend the approach proposed in the second part of this thesis to incorporate haptic information into the learning process. 
As a result, the robot adapts its kinematic motion plan according to human intentions expressed through the haptic signals. Even after the robot has learned the task model, the human still remains a complex contact environment. To ensure robustness of the robot's behavior in the face of the variability inherent to human movements, we wrap the learned task model in an adaptive impedance controller with automatic gain tuning. The techniques developed in this thesis have been applied to enable learning of unimanual and bimanual manipulation tasks on the robotic platforms HOAP-3, KATANA, and i-Cub, as well as to endow a pair of simulated robots with the ability to perform a manipulation task in physical collaboration.
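
Gaussian Mixture Regression, one of the tools listed above, can be sketched compactly: condition a joint GMM over input and output on the input, blending per-component linear regressors by their responsibilities. The two-component model below is a hypothetical fit, not data from the thesis.

```python
import numpy as np

# Hypothetical 2-component GMM over (input t, output y), e.g. fitted to demos.
priors = np.array([0.5, 0.5])
means = np.array([[0.25, 0.0],    # component means: [t, y]
                  [0.75, 1.0]])
covs = np.array([[[0.02, 0.005], [0.005, 0.02]],
                 [[0.02, 0.005], [0.005, 0.02]]])

def gmr(t):
    """Gaussian Mixture Regression: E[y | t] as a responsibility-weighted
    blend of the per-component conditional (linear) regressions."""
    h = np.array([p * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()                               # responsibilities
    return sum(hk * (m[1] + c[1, 0] / c[0, 0] * (t - m[0]))
               for hk, m, c in zip(h, means, covs))

y_mid = gmr(0.5)   # halfway between the two components
```

Using time as the input gives a smooth reference trajectory; using the human's haptic signal as (part of) the input is one way such a model can adapt the motion plan to the partner's intentions.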
