10,399 research outputs found

    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    Full text link
    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks with rapidly changing task constraints over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error. Comment: Under review
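
    To make the contact-transition mechanism concrete, the sketch below simulates a one-dimensional discrete DMP whose goal is gradually shifted by force feedback so that the applied contact force settles near a desired value. The gains, the toy contact model and the goal-shift rule are illustrative assumptions rather than the authors' implementation, and the learned forcing term is omitted.

```python
# Minimal sketch: 1-D discrete DMP with a force-feedback-shifted goal.
# All parameters and the contact model are illustrative assumptions.

def simulate(y0=-0.05, g=0.0, duration=3.0, dt=0.001, tau=1.0,
             alpha=25.0, beta=6.25, k_env=1000.0, k_shift=0.005, f_des=5.0):
    y, dy, g_shifted = y0, 0.0, g
    f_sensed = 0.0
    for _ in range(int(duration / dt)):
        f_sensed = k_env * max(0.0, y - g)               # toy stiff contact model
        g_shifted -= k_shift * (f_sensed - f_des) * dt   # force-driven goal shift
        ddy = alpha * (beta * (g_shifted - y) - dy) / tau  # DMP transformation system
        dy += ddy * dt
        y += dy * dt
    return y, f_sensed

y_final, f_final = simulate()
print("final contact force: %.2f N (desired 5 N)" % f_final)
```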

    Robot training using system identification

    Get PDF
    This paper focuses on developing a formal, theory-based design methodology to generate transparent robot control programs using mathematical functions. The research finds its theoretical roots in robot training and system identification techniques such as Armax (Auto-Regressive Moving Average models with eXogenous inputs) and Narmax (Non-linear Armax). These techniques produce linear and non-linear polynomial functions that model the relationship between a robot’s sensor perception and motor response. The main benefits of the proposed design methodology, compared to traditional robot programming techniques, are: (i) it is a fast and efficient way of generating robot control code, (ii) the generated robot control programs are transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour, and (iii) it requires very little explicit knowledge of robot programming, so that end-users/programmers without specialised robot programming skills can nevertheless generate task-achieving sensor-motor couplings. This research is concerned with obtaining sensor-motor couplings, be it through human demonstration via the robot, direct human demonstration, or other means. The viability of our methodology has been demonstrated by teaching various mobile robots different sensor-motor tasks such as wall following, corridor passing, door traversal and route learning.
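
    As a concrete illustration of the kind of model these techniques produce, the sketch below fits a transparent second-order polynomial mapping a single sonar distance to a turn rate by least squares. The toy data, the model structure and the variable names are assumptions for demonstration; real ARMAX/NARMAX identification also involves term selection, lagged regressors and model validation.

```python
import numpy as np

# Illustrative sketch of the core idea: fit a transparent polynomial that maps
# sensor perception to a motor command. Synthetic data and model order are
# assumptions, not the identified models from the paper.
rng = np.random.default_rng(0)

# Toy wall-following data: left sonar distance d (m) -> turn rate w (rad/s).
d = rng.uniform(0.2, 1.0, size=200)
w_true = 1.5 * (0.5 - d) + 0.8 * (0.5 - d) ** 2        # hypothetical behaviour
w = w_true + rng.normal(0.0, 0.02, size=d.shape)       # sensing/actuation noise

# Regressor matrix with polynomial terms up to second order.
Phi = np.column_stack([np.ones_like(d), d, d ** 2])
theta, *_ = np.linalg.lstsq(Phi, w, rcond=None)

print("w(d) ≈ %.3f %+.3f·d %+.3f·d²" % tuple(theta))
```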

    Learning by observation through system identification

    Get PDF
    In previous work, we presented a new method to program mobile robots —“code identification by demonstration”— based on algorithmically transferring human behaviours to robot control code using transparent mathematical functions. Our approach has three stages: i) extracting the trajectory of the desired behaviour by observing the human, ii) making the robot follow the human trajectory blindly to log its own sensor perception along that trajectory, and finally iii) linking the robot’s perception to the desired behaviour to obtain a generalised, sensor-based model. So far we have used an external, camera-based motion tracking system to log the trajectory of the human demonstrator during the initial demonstration of the desired motion. Because such tracking systems are expensive and complicated to set up, we propose an alternative method to obtain trajectory information using the robot’s own sensor perception. In this method, we train a polynomial model using the NARMAX system identification methodology, which maps the position of the “red jacket” worn by the demonstrator in the image captured by the robot’s camera to the position of the demonstrator in the real world relative to the robot. We demonstrate the viability of this approach by teaching a Scitos G5 mobile robot to achieve door traversal behaviour.
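
    The sketch below illustrates the two ingredients above in simplified form: locating the demonstrator's red jacket as the centroid of red-dominant pixels in a camera frame, and mapping that image position to the demonstrator's location relative to the robot with a fitted polynomial standing in for the NARMAX model. The colour thresholds, the calibration pairs and the bearing-only output are illustrative assumptions; the paper's model maps to the demonstrator's full relative position.

```python
import numpy as np

def red_blob_centroid(rgb):
    """Return (row, col) centroid of pixels where red clearly dominates."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > 120) & (r > 1.5 * g) & (r > 1.5 * b)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Hypothetical calibration pairs: image column of the blob vs. bearing (rad).
cols = np.array([40.0, 120.0, 200.0, 280.0, 360.0])
bearings = np.array([0.52, 0.26, 0.0, -0.26, -0.52])
coeffs = np.polyfit(cols, bearings, deg=2)        # simple polynomial stand-in

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 180:220, 0] = 200                  # synthetic red-jacket patch
centroid = red_blob_centroid(frame)
if centroid is not None:
    bearing = np.polyval(coeffs, centroid[1])
    print("estimated bearing to demonstrator: %.2f rad" % bearing)
```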

    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    Full text link
    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design, which includes: a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products where bimanual or multi-robot cooperation is required. Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Keywords: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
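
    The abstract does not name the statistical model used to encode the demonstrations, so the sketch below uses a deliberately simple stand-in: several time-aligned synthetic demonstrations are summarised by a per-timestep Gaussian, whose mean serves as the reference trajectory and whose standard deviation indicates where the demonstrations agree. All data and units here are assumptions for illustration.

```python
import numpy as np

# Encode several time-aligned demonstrations with a per-timestep Gaussian and
# use the mean as the reference motion. Synthetic data; illustrative only.
rng = np.random.default_rng(1)
T = 100                                            # timesteps per demonstration
t = np.linspace(0.0, 1.0, T)

# Five noisy demonstrations of a 1-D stitching stroke (e.g., needle depth in mm).
demos = np.stack([5.0 * np.sin(np.pi * t) + rng.normal(0, 0.2, T)
                  for _ in range(5)])

reference = demos.mean(axis=0)                     # reference trajectory
spread = demos.std(axis=0)                         # demonstration variability

print("peak reference depth: %.2f mm (±%.2f mm)"
      % (reference.max(), spread[reference.argmax()]))
```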

    Geometry-aware Manipulability Learning, Tracking and Transfer

    Full text link
    Body posture influences human and robot performance in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor to analyze, control and design robot dexterity as a function of the articulatory joint configuration. This descriptor can be designed according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel “manipulability transfer” framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. Learning is coupled with a geometry-aware tracking controller allowing robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach. Comment: Accepted for publication in the Intl. Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices
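
    The sketch below computes the quantities the paper works with for a planar two-link arm: the velocity manipulability ellipsoid M(q) = J(q)J(q)^T, which is symmetric positive definite away from singularities, and an affine-invariant distance between two such matrices, the kind of geometry-aware measure that a tracking controller on the SPD manifold relies on. The link lengths, joint angles and choice of metric are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=0.8):
    """Position Jacobian of a planar 2-link arm (illustrative kinematics)."""
    q1, q2 = q
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

def manipulability(q):
    """Velocity manipulability ellipsoid M = J J^T (SPD away from singularities)."""
    J = jacobian_2link(q)
    return J @ J.T

def spd_distance(A, B):
    """Affine-invariant distance between SPD matrices A and B."""
    eigvals = np.linalg.eigvals(np.linalg.solve(A, B)).real
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

M_current = manipulability([0.3, 1.2])
M_desired = manipulability([0.6, 0.9])
print("distance to desired manipulability: %.3f" % spd_distance(M_current, M_desired))
```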

    Robot programming by demonstration through system identification

    Get PDF
    Increasingly, personalised robots — robots especially designed and programmed for an individual’s needs and preferences — are being used to support humans in their daily lives, most notably in the area of service robotics. Arguably, the closer the robot is programmed to the individual’s needs, the more useful it is, and we believe that giving people the opportunity to program their own robots, rather than programming robots for them, will push robotics research one step further in the personalised robotics field. However, traditional robot programming techniques require specialised technical skills from different disciplines, and it is not reasonable to expect end-users to have these skills. In this paper, we therefore present a new method of obtaining robot control code: programming by demonstration through system identification, which algorithmically and automatically transfers human behaviours into robot control code using transparent, analysable mathematical functions. Besides providing a simple means of generating perception-action mappings, these functions have the additional advantage that they can also be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a Scitos G5 mobile robot to achieve wall following and corridor passing behaviours.
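
    To illustrate how an identified, transparent model can act directly as a controller, the sketch below closes the loop between a hypothetical wall-following polynomial and simple unicycle kinematics. The coefficients, the model inputs (lateral wall distance and heading relative to the wall, which on a real robot would be derived from sonar readings) and the kinematics are assumptions, not the identified Scitos G5 models from the paper.

```python
import numpy as np

# Hypothetical identified model: turn rate as a linear-in-parameters function
# of wall distance d and heading theta relative to the wall.
coeffs = {"bias": 0.9, "d": -1.8, "theta": -1.5}
v, dt = 0.3, 0.1                                   # forward speed (m/s), step (s)
d, theta = 0.8, 0.2                                # distance to wall (m), heading (rad)

for _ in range(300):                               # 30 s of simulated motion
    w = coeffs["bias"] + coeffs["d"] * d + coeffs["theta"] * theta
    theta += w * dt                                # unicycle kinematics
    d += v * np.sin(theta) * dt

print("distance to wall after 30 s: %.2f m" % d)   # settles near 0.5 m
```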