9 research outputs found

    Execution fault recovery in robot programming by demonstration using multiple models

    Deformable object (e.g., clothes) manipulation by a robot in interaction with a human presents several interesting challenges. Due to its texture and deformability, the object can get hooked on the human's limbs. Moreover, the human can change the position and curvature of their limbs, which requires changes in the paths to be followed by the robot. To help solve these problems, in this paper we propose a learning-by-demonstration technique able to adapt to changes in the position and curvature of the object (the human limb) and to recover from execution faults (hooks). The technique is tested in simulation, but with data obtained from a real robot.
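    As a concrete illustration of the idea, the sketch below (a minimal Python sketch, not the authors' implementation) keeps several demonstrated path models, picks the one recorded under the limb configuration closest to the current one, and falls back to a recovery path when a measured force suggests the object is hooked. The robot interface, the force threshold, and the model fields are all hypothetical placeholders.

        import numpy as np

        HOOK_FORCE = 15.0  # assumed force threshold [N] that signals a hook

        def select_model(models, limb_pose):
            # Choose the demonstrated path recorded under the closest limb pose.
            dists = [np.linalg.norm(m["limb_pose"] - limb_pose) for m in models]
            return models[int(np.argmin(dists))]

        def execute(robot, models, recovery_models):
            model = select_model(models, robot.sense_limb_pose())
            for waypoint in model["path"]:
                robot.move_to(waypoint)
                if robot.sense_force() > HOOK_FORCE:   # execution fault: hooked
                    recovery = select_model(recovery_models, robot.sense_limb_pose())
                    for wp in recovery["path"]:        # back off along recovery path
                        robot.move_to(wp)
                    return execute(robot, models, recovery_models)  # retry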

    Semantic Robot Programming for Taskable Goal-Directed Manipulation

    Autonomous robots have the potential to help people be more productive in factories, homes, hospitals, and similar environments. Unlike traditional industrial robots that are pre-programmed for particular tasks in controlled environments, modern autonomous robots should be able to perform arbitrary user-desired tasks. Thus, it is beneficial to provide pathways that enable users to program an arbitrary robot to perform an arbitrary task in an arbitrary world. Advances in robot Programming by Demonstration (PbD) have made it possible for end-users to program robot behavior for desired tasks through demonstrations. However, it remains a challenge for users to program robot behavior in a generalizable, performant, scalable, and intuitive manner. In this dissertation, we address the problem of robot programming by demonstration in a declarative manner by introducing the concept of Semantic Robot Programming (SRP). In SRP, we focus on the following challenges for robot PbD: 1) generalization across robots, tasks, and worlds; 2) robustness under partial observations of cluttered scenes; 3) efficiency in task performance as the workspace scales up; and 4) intuitive and feasible modalities of interaction for end-users to demonstrate tasks to robots.

    Through SRP, our objective is to enable an end-user to intuitively program a mobile manipulator by providing a workspace demonstration of the desired goal scene. We use a scene graph to semantically represent conditions on the current and goal states of the world. To estimate the scene graph from raw sensor observations, we bring together discriminative object detection and generative state estimation for the inference of object classes and poses. The proposed scene estimation method outperformed the state of the art in cluttered scenes. With SRP, we successfully enabled users to program a Fetch robot to set up a kitchen tray on a cluttered tabletop in 10 different start and goal settings.

    To scale SRP up from the tabletop, we propose Contextual-Temporal Mapping (CT-Map) for semantic mapping of large-scale scenes given streaming sensor observations. We model the semantic mapping problem via a Conditional Random Field (CRF), which accounts for spatial dependencies between objects. Over time, object poses and inter-object spatial relations can vary due to human activities. To deal with such dynamics, CT-Map maintains a belief over object classes and poses across an observed environment. We present CT-Map semantically mapping cluttered rooms with robustness to perceptual ambiguities, demonstrating higher accuracy on object detection and 6-DoF pose estimation compared to a state-of-the-art neural-network-based object detector and commonly adopted 3D registration methods.

    Toward SRP at the building scale, we explore notions of Generalized Object Permanence (GOP) for robots to search for objects efficiently. We state the GOP problem as the prediction of where an object can be located when it is not being directly observed by a robot. We model object permanence via a factor graph inference model, with factors representing long-term memory, short-term memory, and common-sense knowledge over inter-object spatial relations. We propose the Semantic Linking Maps (SLiM) model to maintain the belief over object locations while accounting for object permanence through a CRF. Based on the belief maintained by SLiM, we present a hybrid object search strategy that enables the Fetch robot to actively search for objects at large scale, with a higher search success rate and less search time compared to state-of-the-art search methods.

    PhD thesis, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155073/1/zengzhen_1.pd
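    The goal-scene idea at the core of SRP can be illustrated with a small sketch: represent current and goal states as scene graphs, here reduced to sets of (subject, relation, object) triples, and test which goal relations remain unsatisfied. A minimal Python sketch under assumed relation names, not the dissertation's code:

        # Scene graphs reduced to sets of (subject, relation, object) triples.
        current = {("cup", "on", "table"), ("spoon", "on", "table")}
        goal    = {("cup", "on", "tray"), ("spoon", "in", "cup")}

        def unsatisfied(current_graph, goal_graph):
            # Goal relations not yet true in the scene; empty set means goal reached.
            return goal_graph - current_graph

        print(unsatisfied(current, goal))
        # {('cup', 'on', 'tray'), ('spoon', 'in', 'cup')} -> both placements remain

    A planner would then choose manipulation actions that make the remaining relations true, re-estimating the scene graph after each action.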

    Robot action planning based on the interpretation of spatial structures

    A robot is a programmable mechanism whose action is based on control algorithms. When operating in an unstructured environment, the control algorithms become explicit functions of position and time in feedback with the state of the environment. Processing data from the environment and reasoning about the appropriate robot action can be based on the principles of machine learning. The proposed research deals with the development of a model for learning and planning robot action. The learning process is based on a new artificial neural network for the classification of spatial structures. The notion of a spatial structure refers to the interpretation of the arrangement of known objects in a plane, which the robot perceives through a vision system. The artificial neural network for the classification and recognition of spatial structures is based on adaptive resonance theory. Robot action planning is based on the parallel evolution of solutions through the development of a new genetic algorithm. The primary goal of the genetic algorithm is the spatial transformation of an unordered state of objects into an ordered one. The original scientific contributions of this work are: 1) a self-organizing artificial neural network for the classification and recognition of spatial structures based on adaptive resonance theory, characterized by a new two-level classification by object shape and arrangement and by a mechanism for associatively linking an unordered set of objects with an ordered one, and 2) a new genetic algorithm for planning robot action in an unstructured working environment, characterized by a parallel evolutionary strategy for finding solutions, with the goal of spatially transforming an unordered state of objects into an ordered one.
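    The spatial transformation of an unordered object arrangement into an ordered one can be phrased as a permutation search, which a genetic algorithm handles naturally. The Python sketch below is an assumed, deliberately simple stand-in for the thesis's algorithm: individuals are permutations assigning objects to ordered target slots, fitness is negative total travel distance, and order crossover keeps permutations valid.

        import random
        import numpy as np

        objects = np.random.rand(6, 2)                          # unordered object positions
        slots = np.array([[0.1 * i, 0.0] for i in range(6)])    # ordered target row

        def fitness(perm):
            # Negative total distance the objects must travel to their slots.
            return -np.linalg.norm(objects - slots[list(perm)], axis=1).sum()

        def crossover(a, b):
            # Order crossover: keep a slice of parent a, fill the rest in b's order.
            i, j = sorted(random.sample(range(len(a)), 2))
            middle = a[i:j]
            rest = [g for g in b if g not in middle]
            return rest[:i] + middle + rest[i:]

        population = [random.sample(range(6), 6) for _ in range(40)]
        for _ in range(100):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]                           # truncation selection
            population = parents + [crossover(random.choice(parents),
                                              random.choice(parents))
                                    for _ in range(30)]
        best = max(population, key=fitness)                     # object-to-slot assignment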

    Robot Learning from Human Demonstrations for Human-Robot Synergy

    Human-robot synergy enables new developments in industrial and assistive robotics research. In recent years, collaborative robots have become able to work together with humans to perform a task while sharing the same workplace. However, the teachability of robots is a crucial factor in establishing the role of robots as human teammates. Robots require certain abilities, such as easily learning diversified tasks and adapting to unpredicted events. The most feasible method that currently allows a human teammate to teach a robot how to perform a task is Robot Learning from Demonstrations (RLfD). The goal of this method is to allow non-expert users to program a robot by simply guiding it through a task.

    The focus of this thesis is the development of a novel framework for Robot Learning from Demonstrations that enhances the robot's ability to learn and perform the sequences of actions for object manipulation tasks (high-level learning) and, simultaneously, to learn and adapt the necessary trajectories for object manipulation (low-level learning). A method that automatically segments demonstrated tasks into sequences of actions is developed in this thesis. Subsequently, the generated sequences of actions are employed by a Reinforcement Learning (RL) from human demonstration approach to enable high-level robot learning. The low-level robot learning consists of a novel method that selects similar demonstrations (in the case of multiple demonstrations of a task) and the Gaussian Mixture Model (GMM) method. The developed robot learning framework allows learning from single and multiple demonstrations. As soon as the robot has knowledge of a demonstrated task, it can perform the task in cooperation with the human. However, the need to adapt the learned knowledge may arise during human-robot synergy. Firstly, Interactive Reinforcement Learning (IRL) is employed as a decision support method to predict the sequence of actions in real time, to keep the human in the loop, and to enable learning the user's preferences. Subsequently, a novel method that modifies the learned Gaussian Mixture Model (m-GMM) is developed in this thesis. This method allows the robot to cope with changes in the environment, such as objects placed in a pose different from the demonstrated one, or obstacles introduced by the human teammate. The modified Gaussian Mixture Model is further used by Gaussian Mixture Regression (GMR) to generate a trajectory that can efficiently control the robot.

    The developed framework for Robot Learning from Demonstrations was evaluated on two different robotic platforms: a dual-arm industrial robot and an assistive robotic manipulator. For both robotic platforms, small studies were performed for industrial and assistive manipulation tasks, respectively. Several Human-Robot Interaction (HRI) methods, such as kinesthetic teaching, a gamepad, or "hands-free" control via head gestures, were used to provide the robot demonstrations. The "hands-free" HRI enables individuals with severe motor impairments to provide a demonstration of an assistive task. The experimental results demonstrate the potential of the developed robot learning framework to enable continuous human-robot synergy in industrial and assistive applications.
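    The low-level learning step described above, fitting a GMM to demonstrated trajectories and reproducing a motion with GMR, can be sketched in a few lines of Python. This is a generic GMM/GMR sketch under assumed one-dimensional, time-indexed demonstrations, not the thesis framework itself:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Five noisy demonstrations, each a column stack of [time, position].
        t = np.linspace(0.0, 1.0, 50)
        demos = np.vstack([np.column_stack([t, np.sin(np.pi * t)
                                            + 0.02 * np.random.randn(50)])
                           for _ in range(5)])

        gmm = GaussianMixture(n_components=5).fit(demos)  # joint model over (t, x)

        def gmr(t_query):
            # Gaussian Mixture Regression: E[x | t] under the fitted joint GMM.
            mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
            # Responsibility of each component for the query time (dimension 0).
            h = np.array([w[k] * np.exp(-0.5 * (t_query - mu[k, 0]) ** 2 / cov[k, 0, 0])
                          / np.sqrt(2 * np.pi * cov[k, 0, 0])
                          for k in range(len(w))])
            h /= h.sum()
            # Per-component conditional mean of x given t, blended by h.
            x_k = mu[:, 1] + cov[:, 1, 0] / cov[:, 0, 0] * (t_query - mu[:, 0])
            return float(h @ x_k)

        trajectory = [gmr(tq) for tq in np.linspace(0.0, 1.0, 100)]  # reproduced motion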

    Interactive Learning of Probabilistic Decision Making by Service Robots with Multiple Skill Domains

    This thesis makes a contribution to autonomous service robots, centered on two aspects. The first is modeling decision making in the face of incomplete information on top of a service robot's diverse basic skills. The second is investigating, based on such a model, how to transfer complex decision-making knowledge into the system. Interactive learning, drawing naturally on both demonstrations by human teachers and interaction with objects, yields decision-making models applicable by the robot.