
    Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

    In order to robustly execute a task under environmental uncertainty, a robot needs to be able to reactively adapt to changes arising in its environment. Such changes are usually reflected as deviations from expected sensory traces. These deviations can be used to drive the motion adaptation, and for this purpose a feedback model is required: it maps deviations in sensory traces to adaptations of the motion plan. In this paper, we develop a general data-driven framework for learning a feedback model from demonstrations. We utilize a variant of a radial basis function network (with movement phases as kernel centers) which can generally be applied to represent any feedback model for movement primitives. To demonstrate the effectiveness of our framework, we test it on the task of scraping on a tilt board, learning a reactive policy in the form of orientation adaptation based on deviations of tactile sensor traces. As a proof of concept of our method, we provide evaluations on an anthropomorphic robot. A video demonstrating our approach and its results can be seen at https://youtu.be/7Dx5imy1Kcw. Comment: 8 pages, accepted to be published at the International Conference on Robotics and Automation (ICRA) 201
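    The learned mapping can be pictured as a small phase-gated regression model. Below is a minimal sketch, assuming a scalar movement phase in [0, 1], a fixed-size vector of tactile-trace deviations, and per-kernel linear maps fitted offline (e.g., by least squares on demonstration data); the class and parameter names are illustrative, not from the paper.

    ```python
    import numpy as np

    class PhaseRBFFeedbackModel:
        """RBF network with movement phases as kernel centers (hedged sketch)."""

        def __init__(self, n_kernels=25, n_sensor_dims=6, n_output_dims=3):
            self.centers = np.linspace(0.0, 1.0, n_kernels)        # kernels spread over the phase
            self.widths = np.full(n_kernels, 2.0 * n_kernels**2)   # heuristic bandwidths
            # One linear map per kernel: sensor-trace deviations -> motion adaptation.
            self.W = np.zeros((n_kernels, n_output_dims, n_sensor_dims))

        def _phase_activations(self, phase):
            psi = np.exp(-self.widths * (phase - self.centers) ** 2)
            return psi / (psi.sum() + 1e-10)                       # normalized activations

        def predict_adaptation(self, phase, sensor_deviation):
            """Map a deviation of the sensory traces to a motion-plan adaptation."""
            psi = self._phase_activations(phase)
            # Blend the per-kernel linear maps according to the current phase.
            return np.einsum('k,kij,j->i', psi, self.W, sensor_deviation)
    ```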

    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned with the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks whose constraints change rapidly over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion despite environmental uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and to retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error. Comment: Under review
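    The transition-to-contact idea can be illustrated on a single constrained axis: a discrete DMP whose goal drifts with the force error and whose acceleration is coupled to force feedback. This is a hedged sketch of the general mechanism, not the authors' exact formulation; the gains k_fb and k_goal and the sign convention (goal axis pointing into the surface) are assumptions.

    ```python
    import numpy as np

    def dmp_step(y, dy, g, x, f_forcing, f_meas, f_des,
                 alpha_y=25.0, beta_y=6.25, tau=1.0,
                 k_fb=0.01, k_goal=0.005, dt=0.002):
        """One Euler step of a 1-D DMP on the force-controlled axis."""
        # Force error on the constrained axis (axis assumed to point into the surface).
        force_error = f_des - f_meas
        # Dynamically shifting goal: drift the attractor to regulate steady-state contact force.
        g = g + k_goal * force_error * dt
        # Standard DMP transformation system plus a force-feedback coupling term.
        ddy = (alpha_y * (beta_y * (g - y) - tau * dy)
               + f_forcing * x + k_fb * force_error) / tau**2
        dy = dy + ddy * dt
        y = y + dy * dt
        return y, dy, g
    ```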

    A knowledge-based framework for task automation in surgery

    Robotic surgery has significantly improved the quality of surgical procedures. In the past, research has focused on automating simple surgical actions; however, no scalable framework for automation in surgery exists. In this paper, we present a knowledge-based modular framework for the automation of articulated surgical tasks, i.e., tasks composed of multiple coordinated actions. The framework consists of an ontology, providing entities for surgical automation and rules for task planning, and "dynamic movement primitives" as an adaptive motion planner to replicate the dexterity of surgeons. To validate our framework, we chose the paradigmatic scenario of a peg-and-ring task, a standard training exercise for novice surgeons that presents many challenges of real surgery, e.g., grasping and transferring. Experiments show the validity of the framework and its adaptability to faulty events. The modular architecture is expected to generalize to different tasks and platforms.
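    For reference, the dynamic movement primitives used here as the adaptive motion planner can be summarized in a few lines: learn a forcing term from one demonstrated trajectory, then replay it toward a new goal (e.g., a moved ring). This is a minimal one-dimensional sketch of the standard discrete DMP, not the paper's implementation; all hyperparameters are illustrative.

    ```python
    import numpy as np

    def learn_dmp_forcing(y_demo, dt, n_bf=50, alpha_y=25.0, beta_y=6.25, alpha_x=8.0):
        """Fit forcing-term weights from one demonstrated 1-D trajectory."""
        T = len(y_demo)
        dy = np.gradient(y_demo, dt)
        ddy = np.gradient(dy, dt)
        g, y0 = y_demo[-1], y_demo[0]
        x = np.exp(-alpha_x * dt * np.arange(T))                  # canonical phase
        f_target = ddy - alpha_y * (beta_y * (g - y_demo) - dy)   # invert the DMP
        c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_bf))        # kernel centers in phase
        h = n_bf / c                                              # heuristic widths
        psi = np.exp(-h * (x[:, None] - c) ** 2)
        s = x * (g - y0)                                          # scaling of the forcing term
        # Per-kernel weighted linear regression (standard LWR fit).
        w = (psi * s[:, None]).T @ f_target / ((psi * (s**2)[:, None]).sum(axis=0) + 1e-10)
        return w, c, h

    def rollout_dmp(w, c, h, y0, g, dt, T, alpha_y=25.0, beta_y=6.25, alpha_x=8.0):
        """Replay the learned primitive toward a (possibly new) goal g."""
        y, dy, x = y0, 0.0, 1.0
        traj = []
        for _ in range(T):
            psi = np.exp(-h * (x - c) ** 2)
            f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)    # scaled forcing term
            ddy = alpha_y * (beta_y * (g - y) - dy) + f
            dy += ddy * dt
            y += dy * dt
            x += -alpha_x * x * dt                                # canonical system
            traj.append(y)
        return np.array(traj)
    ```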

    Learning Barrier Functions for Constrained Motion Planning with Dynamical Systems

    Stable dynamical systems are a flexible tool for planning robotic motions in real time. In the robotics literature, dynamical-system motions are typically planned without considering possible limitations in the robot's workspace. This work presents a novel approach to learn workspace constraints from human demonstrations and to generate motion trajectories for the robot that lie in the constrained workspace. Training data are incrementally clustered into different linear subspaces and used to fit a low-dimensional representation of each subspace. By treating the learned constraint subspaces as zeroing barrier functions, we are able to design a control input that keeps the system trajectory within the learned bounds. This control input is effectively combined with the original system dynamics, preserving any asymptotic properties of the unconstrained system. Simulations and experiments on a real robot show the effectiveness of the proposed approach.
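    The core mechanism, filtering a nominal dynamical system through a zeroing barrier condition, admits a compact closed form in the single-constraint case. The sketch below assumes a learned scalar barrier h(x) >= 0 with gradient grad_h(x) and a nominal velocity field f(x); it is the standard minimum-norm control-barrier-function filter, shown only to illustrate the approach, not the paper's full multi-subspace construction.

    ```python
    import numpy as np

    def barrier_filtered_velocity(x, f, h, grad_h, alpha=5.0):
        """Velocity that respects h(x) >= 0 while staying close to the nominal f(x)."""
        v_nom = f(x)
        g = grad_h(x)
        # Zeroing barrier condition: dh/dt >= -alpha * h(x) keeps the set {h >= 0} invariant.
        slack = g @ v_nom + alpha * h(x)
        if slack >= 0.0:
            return v_nom                                 # nominal motion is already safe
        # Minimum-norm correction along the constraint gradient.
        return v_nom - (slack / (g @ g + 1e-12)) * g

    # Example: keep the trajectory inside a ball of radius R around c (toy constraint).
    R, c = 0.5, np.zeros(2)
    h = lambda x: R**2 - (x - c) @ (x - c)
    grad_h = lambda x: -2.0 * (x - c)
    f = lambda x: -x + np.array([1.0, 0.0])              # toy nominal dynamics
    v = barrier_filtered_velocity(np.array([0.4, 0.1]), f, h, grad_h)
    ```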

    Dynamic Movement Primitives: Volumetric Obstacle Avoidance Using Dynamic Potential Functions

    Obstacle avoidance for DMPs is still a challenging problem. In our previous work, we proposed a framework for obstacle avoidance based on superquadric potential functions to represent volumes. In this work, we extend that framework to include the velocity of the trajectory in the definition of the potential. Our formulation guarantees smoother behavior than state-of-the-art point-like methods. Moreover, the new formulation yields smoother behavior in the proximity of the obstacle than a static (i.e., velocity-independent) potential. We validate our framework for obstacle avoidance in a simulated multi-robot scenario and with different real robots: a pick-and-place task for an industrial manipulator and a surgical robot, to show scalability, and navigation with a mobile robot in a dynamic environment. Comment: Preprint for the Journal of Intelligent and Robotic Systems
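    The velocity-dependent idea can be illustrated with the classical dynamic potential that this line of work builds on: repulsion acts only when the trajectory moves toward the obstacle, and it scales with speed. The sketch below keeps the obstacle "distance" abstract so a superquadric isopotential can be plugged in as dist_fn; the numerical gradient and the gains lambda_ and beta are illustrative assumptions, not the paper's exact volumetric formulation.

    ```python
    import numpy as np

    def numerical_grad(f, x, eps=1e-6):
        """Central-difference gradient, kept simple for the sketch."""
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2 * eps)
        return g

    def dynamic_potential(x, v, dist_fn, lambda_=1.0, beta=2.0):
        """U(x, v): repulsive only while moving toward the obstacle."""
        d = dist_fn(x)
        speed = np.linalg.norm(v)
        if speed < 1e-10 or d < 1e-10:
            return 0.0
        grad_d = numerical_grad(dist_fn, x)              # points away from the obstacle
        cos_theta = (v @ grad_d) / (speed * np.linalg.norm(grad_d) + 1e-12)
        if cos_theta >= 0.0:                             # moving away: no repulsion
            return 0.0
        return lambda_ * (-cos_theta) ** beta * speed / d

    def repulsive_acceleration(x, v, dist_fn):
        """-grad_x U(x, v), added to the DMP acceleration as a coupling term."""
        return -numerical_grad(lambda q: dynamic_potential(q, v, dist_fn), x)
    ```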

    Robot skill learning system of multi-space fusion based on dynamic movement primitives and adaptive neural network control

    This article develops a robot skill learning system with multi-space fusion, simultaneously considering motion/stiffness generation and trajectory tracking. To begin with, surface electromyography (sEMG) signals from the human arm are captured with the MYO armband to estimate endpoint stiffness. Gaussian Process Regression (GPR) is combined with dynamic movement primitives (DMPs) to extract richer skill features from multiple demonstrations. Then, the traditional DMP formulation is improved based on a Riemannian metric to encode the robot's quaternions, which have non-Euclidean properties. Furthermore, an adaptive neural network (NN)-based finite-time admittance controller is designed to track the trajectory generated by the motion model and to reflect the learned stiffness characteristics. In this controller, a radial basis function neural network (RBFNN) is employed to compensate for the uncertainty of the robot dynamics. Finally, experimental validation is conducted on the ROKAE collaborative robot, confirming the effectiveness of the proposed approach. In summary, the presented framework is suitable for human-robot skill transfer methods that require simultaneous consideration of position and stiffness in Euclidean space, as well as orientation on Riemannian manifolds.
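    The "GPR + DMP" ingredient can be sketched as follows: pool forcing-term targets from several demonstrations and regress them on the DMP phase, so the GP mean drives the primitive while the GP variance reflects demo-to-demo variability. Phase generation and target extraction follow the standard discrete DMP; kernel choices and hyperparameters are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def forcing_targets(y_demo, dt, alpha_y=25.0, beta_y=6.25, alpha_x=8.0):
        """Phase values and forcing-term targets for one 1-D demonstration."""
        dy = np.gradient(y_demo, dt)
        ddy = np.gradient(dy, dt)
        g = y_demo[-1]
        x = np.exp(-alpha_x * dt * np.arange(len(y_demo)))   # phase at each sample
        f = ddy - alpha_y * (beta_y * (g - y_demo) - dy)      # invert the DMP
        return x, f

    def fit_forcing_gp(demos, dt):
        """Fit one GP over phase from the pooled targets of all demonstrations."""
        X, F = [], []
        for y_demo in demos:
            x, f = forcing_targets(np.asarray(y_demo, dtype=float), dt)
            X.append(x); F.append(f)
        X = np.concatenate(X)[:, None]
        F = np.concatenate(F)
        kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-2)
        return GaussianProcessRegressor(kernel=kernel).fit(X, F)

    # Usage: gp = fit_forcing_gp([demo1, demo2, demo3], dt=0.002)
    #        f_mean, f_std = gp.predict(phase[:, None], return_std=True)
    ```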