    Robot Performing Peg-in-Hole Operations by Learning from Human Demonstration

    This paper presents a novel approach in which a robot conducts assembly tasks by learning from human demonstrations. The learning of the robotic assembly task is divided into two phases: teaching and reproduction. During the teaching phase, a wrist camera scans the object on the workbench and extracts its SIFT features, and the human demonstrator teaches the robot to grasp the object from an effective position and orientation. During the reproduction phase, the robot uses the learned knowledge to reproduce the grasping manipulation autonomously. The robustness of the robotic assembly system is evaluated through a series of grasping trials, in which the dual-arm Baxter robot performs the peg-in-hole task using the proposed approach. Experimental results show that the robot is able to accomplish the assembly task by learning from human demonstration, without traditional dedicated programming.
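The two-phase teach/reproduce scheme described above can be sketched as follows. This is a minimal, hypothetical illustration: the class and method names are invented, and a plain feature vector with nearest-neighbour matching stands in for the paper's SIFT-based object recognition.

```python
import math

class GraspMemory:
    """Stores grasp poses taught by a human demonstrator, keyed by object features."""

    def __init__(self):
        self.demonstrations = []  # list of (feature_vector, grasp_pose) pairs

    def teach(self, features, grasp_pose):
        # Teaching phase: record the object's features together with the
        # demonstrated grasp position and orientation.
        self.demonstrations.append((features, grasp_pose))

    def reproduce(self, observed_features):
        # Reproduction phase: match the observed features to the closest
        # taught example and return its grasp pose.
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        best = min(self.demonstrations,
                   key=lambda d: distance(d[0], observed_features))
        return best[1]

memory = GraspMemory()
memory.teach([0.2, 0.8, 0.1], {"xyz": (0.4, 0.1, 0.3), "rpy": (0.0, 1.57, 0.0)})
memory.teach([0.9, 0.1, 0.5], {"xyz": (0.6, -0.2, 0.2), "rpy": (0.0, 1.57, 0.8)})

# An observation close to the first taught object recalls its grasp pose.
pose = memory.reproduce([0.25, 0.75, 0.12])
print(pose["xyz"])  # → (0.4, 0.1, 0.3)
```

In a real system the feature vectors would be SIFT descriptors extracted from the wrist-camera image, and the stored pose would drive the robot's grasp planner rather than being returned directly.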

    Robot Learning from Demonstration in Robotic Assembly: A Survey

    Learning from demonstration (LfD) has been used to help robots perform manipulation tasks autonomously, in particular by learning manipulation behaviors from observing the motions executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is on how to demonstrate example behaviors to the robot in assembly operations, and how to extract manipulation features for robot learning and for generating imitative behaviors. Diverse metrics for evaluating the performance of robot imitation learning are analyzed. The application of LfD in robotic assembly is a focal point of this paper.

    Robot Learning From Human Observation Using Deep Neural Networks

    Industrial robots have gained traction over the last twenty years and have become an integral component of any sector pursuing automation. The automotive industry in particular deploys a wide range of industrial robots across a multitude of assembly lines worldwide. These robots perform tasks with the utmost repeatability and incomparable speed, and it is that speed and consistency that has always made a robotic task an upgrade over the same task completed by a human. The cost savings represent a strong return on investment, leading corporations to automate and deploy robotic solutions wherever feasible. However, the cost of commissioning and set-up is the largest deterrent in any decision regarding robotics and automation. Robots are traditionally programmed by robotic technicians, a manual process carried out in a well-structured environment. This thesis explores the option of eliminating the programming and commissioning portion of robotic integration. If the environment is dynamic and subject to varying part iterations, changes in lighting, and changes in part placement within the cell, the robot will struggle to function because it cannot adapt to these variables. If cameras are introduced to capture the operator's motions and the part variability, then Learning from Demonstration (LfD) can be implemented to potentially solve this prevalent issue in today's automotive culture. With assistance from machine learning algorithms, deep neural networks, and transfer learning, LfD can become a viable solution. A robotic cell that learns from demonstration was developed for this work. The proposed approach is based on computer vision to observe human actions and on deep learning to perceive the demonstrator's actions and the manipulated objects.

    Technologies for the Fast Set-Up of Automated Assembly Processes

    No full text