8 research outputs found

    Robot Performing Peg-in-Hole Operations by Learning from Human Demonstration

    This paper presents a novel approach for a robot to conduct assembly tasks, namely robot learning from human demonstrations. The learning of a robotic assembly task is divided into two phases: teaching and reproduction. During the teaching phase, a wrist camera scans the object on the workbench and extracts its SIFT features. The human demonstrator teaches the robot to grasp the object from the effective position and orientation. During the reproduction phase, the robot uses the learned knowledge to reproduce the grasping manipulation autonomously. The robustness of the robotic assembly system is evaluated through a series of grasping trials. The dual-arm Baxter robot performs the Peg-in-Hole task using the proposed approach. Experimental results show that the robot is able to accomplish the assembly task by learning from human demonstration, without traditional dedicated programming.
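    To make the perception step concrete, below is a minimal sketch of SIFT extraction and matching with OpenCV. The paper does not publish code, so the library choice, file names, and the 0.75 ratio threshold are illustrative assumptions, not the authors' implementation.

```python
import cv2

# Teaching phase: extract SIFT features from the taught object image
# (file names here are hypothetical placeholders).
taught = cv2.imread("taught_object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("workbench_scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_taught, des_taught = sift.detectAndCompute(taught, None)
kp_scene, des_scene = sift.detectAndCompute(scene, None)

# Reproduction phase: match scene features back to the taught model,
# keeping only distinctive correspondences (Lowe's ratio test).
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_taught, des_scene, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable correspondences to the taught object")
```

    Given enough correspondences, the object could then be localized in the new view (e.g. with cv2.findHomography under RANSAC) to recover the taught grasp position and orientation.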

    Robot Learning from Demonstration in Robotic Assembly: A Survey

    Learning from demonstration (LfD) has been used to help robots perform manipulation tasks autonomously, in particular by learning manipulation behaviors from observing the motions executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is on how to demonstrate example behaviors to the robot in assembly operations, and how to extract manipulation features for robot learning and for generating imitative behaviors. Diverse metrics for evaluating the performance of robot imitation learning are analyzed. The application of LfD in robotic assembly is a focal point of this paper.

    Robot Learning From Human Observation Using Deep Neural Networks

    Industrial robots have gained traction in the last twenty years and have become an integral component in every sector that employs automation. The automotive industry, in particular, deploys a wide range of industrial robots across assembly lines worldwide. These robots perform tasks with the utmost repeatability and incomparable speed, and it is that speed and consistency that has always made a robotic task an upgrade over the same task completed by a human. The cost savings offer a strong return on investment, leading corporations to automate and deploy robotic solutions wherever feasible. The cost of commissioning and setup is the largest deterring factor in any decision regarding robotics and automation. This thesis explores the option of eliminating the programming and commissioning portion of robotic integration. If the environment is dynamic and can undergo various iterations of parts, changes in lighting, and changes in part placement in the cell, the robot will struggle to function because it cannot adapt to these variables. If a few cameras are introduced to capture the operator's motions and part variability, then learning from demonstration (LfD) can potentially solve this prevalent issue in today's automotive culture. With the assistance of machine learning algorithms, deep neural networks, and transfer learning, LfD can become a viable solution. This work developed a robotic cell that learns from demonstration. The proposed approach is based on computer vision to observe human actions and deep learning to perceive the demonstrator's actions and the manipulated objects.
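    As a hedged illustration of the transfer-learning idea described above, the sketch below freezes an ImageNet-pretrained backbone and retrains only a new classification head on frames of the demonstrator's actions. The ResNet-18 backbone, the action labels, and the hyperparameters are assumptions for illustration, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 5  # hypothetical labels, e.g. reach/grasp/move/insert/release

# Transfer learning: reuse features pretrained on ImageNet and
# train only the new classification head.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False  # freeze the pretrained backbone
net.fc = nn.Linear(net.fc.in_features, NUM_ACTIONS)  # new trainable head

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of camera frames and labels."""
    optimizer.zero_grad()
    loss = criterion(net(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

    Freezing the backbone keeps the amount of labeled demonstration data needed small, which matters when every label comes from a human operator.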

    Robot Learning Assembly Tasks from Human Demonstrations

    Industrial robots are widely deployed in assembly and production lines, as they are efficient at performing highly repetitive tasks. They are mainly position-controlled and pre-programmed to work in well-structured environments. However, they cannot deal with dynamic changes and unexpected events in their operations, because they lack sufficient sensing and learning capabilities. Conducting robotic assembly operations in unstructured environments remains a major challenge today. This thesis research focuses on the development of robot learning from demonstration (LfD) for robotic assembly tasks using visual teaching. Firstly, a human kinesthetic teaching method is adopted for the robot to learn an effective grasping skill in an unstructured environment; during this teaching process, the robot learns the object's SIFT features and grasping pose from human demonstrations. Secondly, a novel skeleton-joint mapping framework is proposed for robot learning from human demonstrations; the mapping algorithm transfers human motion from the human joint space to the robot motor space, so that the robot can be taught intuitively from a remote location. Thirdly, a novel visual-mapping demonstration framework is built for robot learning of assembly tasks, in which the demonstrator can teach the robot with real-time feedback. A Gaussian Mixture Model (GMM) and Gaussian Mixture Regression (GMR) are used to encode the learned skills for the robot, as sketched below. Finally, the effectiveness of the approach is evaluated on practical assembly tasks with the Baxter robot. The significance of this thesis research lies in its comprehensive insight into robot learning from demonstration for assembly tasks. The proposed LfD paradigm has the potential to effectively transfer human skills to robots in both industrial and domestic environments, paving the way for the general public to use robots without programming skills.
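    To illustrate the GMM/GMR skill-encoding step mentioned in the abstract, here is a minimal sketch: a GMM is fit on time-indexed demonstration data, and GMR conditions it on time to produce a smooth reproduction trajectory. The one-dimensional synthetic demonstrations and the component count are assumptions for illustration; a real assembly skill would encode full end-effector poses.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for several human demonstrations of a 1-D motion.
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
         for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])

# Encode the skill as a joint GMM over (time, position).
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(data)

def gmr(t_query: float) -> float:
    """GMR: condition the joint GMM on time to predict position."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each Gaussian component for this time instant.
    h = np.array([w[k] * norm.pdf(t_query, mu[k, 0], np.sqrt(cov[k, 0, 0]))
                  for k in range(gmm.n_components)])
    h /= h.sum()
    # Blend the components' conditional means, weighted by responsibility.
    cond = [mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (t_query - mu[k, 0])
            for k in range(gmm.n_components)]
    return float(np.dot(h, cond))

reproduction = [gmr(tq) for tq in np.linspace(0.0, 1.0, 50)]
```

    Encoding demonstrations this way averages out the variability across repeated human demonstrations, which is one reason GMM/GMR is a common choice for LfD skill representation.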