
    Development of Multi-Robotic Arm System for Sorting System Using Computer Vision

    This paper develops a multi-robotic-arm system with a stereo vision system to sort objects into the correct positions according to their size and shape. The system consists of one master and three slave robots associated with three conveyor belts, each robotic arm controlled by a microcontroller-based robot controller. A master controller runs the vision system and communicates with the slave arms over the Modbus RTU protocol through an RS485 serial interface. The stereo vision system determines the 3D coordinates of each object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the object in the two images are used to determine the depth value; the 3D coordinates then follow from the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape, and the conveyor transports each object to a slave robot, which picks it and places it in the correct position based on the size attribute received from the master. Experimental results demonstrate the effectiveness of the system, which can be used in industrial processes to reduce the required time and improve production-line performance.
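    As a rough illustration of the depth computation described above, the sketch below triangulates a single centroid pair under a rectified pinhole stereo model, assuming known focal length f (pixels), baseline B (metres), and principal point (cx, cy). The function names, and the assumption that objects are already segmented into binary masks, are ours rather than the paper's.
```python
import cv2
import numpy as np

def centroid(binary_mask: np.ndarray) -> tuple[float, float]:
    """Centroid (u, v) of a segmented object via image moments
    (assumes the mask is non-empty)."""
    m = cv2.moments(binary_mask, binaryImage=True)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def triangulate_centroid(mask_left, mask_right, f, B, cx, cy):
    """3D coordinates of one object from a single centroid pair,
    instead of a full (computationally expensive) disparity map."""
    uL, vL = centroid(mask_left)
    uR, _ = centroid(mask_right)
    d = uL - uR                 # disparity between the two centroids
    Z = f * B / d               # pinhole model: depth from disparity
    X = (uL - cx) * Z / f       # back-project into the camera frame
    Y = (vL - cy) * Z / f
    return X, Y, Z
```
    Computing one disparity per object rather than a dense map is what keeps the pipeline fast enough for a microcontroller-coordinated cell.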

    Learning Multi-step Robotic Manipulation Tasks through Visual Planning

    Multi-step manipulation tasks in unstructured environments are extremely challenging for a robot to learn. Such tasks interlace high-level reasoning about the intermediate states needed to achieve the overall task with low-level reasoning about the actions that yield those states. A model-free deep reinforcement learning method is proposed to learn such tasks. This work introduces a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that generates robust antipodal grasps from n-channel image input at real-time speeds (20 ms). The proposed architecture achieves state-of-the-art accuracy on three standard grasping datasets, and its adaptability is demonstrated by directly transferring the trained model to a 7-DoF robotic manipulator, with grasp success rates of 95.4% and 93.0% on novel household and adversarial objects, respectively. A novel vision-based Robotic Manipulation Network (RoManNet) is introduced to learn action-value functions and predict manipulation action candidates. A Task Progress based Gaussian (TPG) reward function computes the reward from actions that lead to successful motion primitives and from progress towards the overall task goal. To balance exploration and exploitation, this research introduces a Loss Adjusted Exploration (LAE) policy that selects actions from the candidates according to the Boltzmann distribution of their loss estimates. The effectiveness of the approach is demonstrated by training RoManNet on several challenging multi-step manipulation tasks in both simulation and the real world. Experimental results show that the proposed method outperforms existing methods and achieves state-of-the-art success rate and action efficiency. Ablation studies show that TPG and LAE are especially beneficial for tasks such as stacking multiple blocks.
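    A minimal sketch of the LAE idea follows, assuming one running loss estimate per action candidate: sampling from a softmax over those losses steers exploration towards candidates whose value predictions are still poorly fit. The function name and the temperature parameter are illustrative assumptions, not the paper's exact formulation.
```python
import numpy as np

def lae_select(loss_estimates: np.ndarray, temperature: float = 1.0) -> int:
    """Pick one action candidate; loss_estimates[i] is the running
    loss of the value prediction for candidate i, so higher loss
    (more uncertainty) means a higher sampling probability."""
    logits = loss_estimates / temperature
    logits -= logits.max()          # shift for numerical stability
    p = np.exp(logits)
    p /= p.sum()                    # Boltzmann distribution over candidates
    return int(np.random.choice(len(p), p=p))
```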

    Interactive Task Encoding System for Learning-from-Observation

    We introduce a practical pipeline that interactively encodes multimodal human demonstrations for robot teaching. The pipeline is designed as an input system for Learning-from-Observation (LfO), a framework that aims to program household robots with manipulative tasks through few-shot human demonstration, without coding. While most previous LfO systems rely on visual demonstration alone, recent research on robot teaching has shown that verbal instruction makes recognition robust and teaching interactive. To the best of our knowledge, however, no LfO system has yet been proposed that utilizes both verbal instruction and interaction, namely multimodal LfO. This paper proposes the Interactive Task Encoding System (ITES) as an input pipeline for multimodal LfO. ITES assumes that the user teaches step by step, pausing hand movements in order to match the granularity of human instructions with the granularity of robot execution, and it recognizes tasks from the step-by-step verbal instructions that accompany the hand movements. Additionally, the recognition is made robust through interactions with the user. We test ITES on a real robot and show that the user can successfully teach multiple operations through multimodal demonstrations. The results suggest the usefulness of ITES for multimodal LfO. The source code is available at https://github.com/microsoft/symbolic-robot-teaching-interface
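    One way to realize the step-by-step assumption is to cut the hand trajectory at the pauses the user inserts, so each motion segment can be paired with one verbal instruction. The sketch below is our own illustration of such pause-based segmentation; the thresholds and names are assumptions, not taken from the ITES source code.
```python
import numpy as np

def segment_by_pauses(positions: np.ndarray, dt: float,
                      speed_thresh: float = 0.02,
                      min_pause: float = 0.5) -> list[tuple[int, int]]:
    """Split a (T, 3) hand-position track into (start, end) index
    pairs, closing a segment whenever the hand speed stays below
    speed_thresh (m/s) for at least min_pause seconds."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed >= speed_thresh
    segments, start = [], None
    for t, m in enumerate(moving):
        if m and start is None:
            start = t                                   # motion begins
        elif not m and start is not None:
            # close the segment once the hand rests long enough
            if not moving[t:t + int(min_pause / dt)].any():
                segments.append((start, t))
                start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments
```
    Each returned segment would then be matched against the utterance recognized during the same time window.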

    Multi-modal interface for offline robot programming

    This thesis presents an approach for improving robot offline programming using input methods based on natural human skills. The approach focuses on teaching basic assembly and manipulation operations with a pair of industrial robots in an existing simulation environment, and is intended to be extended in future work, which is also proposed in this thesis. To develop the approach with the available resources, an Add-In for the simulation and offline programming software RobotStudio was developed. The Add-In combines human pose, a graphical user interface, and optionally speech to teach the robot a sequence of targets, from which instructions are automatically generated within the simulation environment. Two kinds of sensors, the Kinect and the Leap Motion sensor, were evaluated against the literature to select the more suitable one for this work. Execution of the programmed instructions was evaluated in simulation.
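    The fusion the Add-In performs can be pictured as follows. This is a hypothetical Python sketch of the teaching loop, not the thesis's actual RobotStudio Add-In code (which would use the RobotStudio .NET API): a recognized spoken command snapshots the currently tracked hand pose as a robot target, and the target list is then rendered as RAPID-like move instructions. All names and command words here are our assumptions.
```python
from dataclasses import dataclass

@dataclass
class Target:
    xyz: tuple   # tracked hand position when the command arrived
    grip: bool   # gripper state inferred from the spoken command

def teach_targets(events):
    """events yields (hand_xyz, word) pairs, where word is a
    recognized command ('record', 'grip', 'release', 'done') or None."""
    targets, grip = [], False
    for xyz, word in events:
        if word in ("grip", "release"):
            grip = (word == "grip")
        if word in ("record", "grip", "release"):
            targets.append(Target(xyz, grip))   # store one taught target
        elif word == "done":
            break
    return targets

def to_instructions(targets):
    """Render the taught sequence as RAPID-like move instructions."""
    return [f"MoveL target_{i}; ! {t.xyz}, grip={t.grip}"
            for i, t in enumerate(targets)]
```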

    Haptic feedback in teleoperation in Micro- and Nano-Worlds

    Robotic systems have been developed to handle very small objects, but their use remains complex and requires long training. Simulators, such as molecular simulators, can provide access to large amounts of raw data, but only highly trained users can interpret their results. Haptic feedback in teleoperation, which provides force feedback to an operator, appears to be a promising solution for interacting with such systems, as it allows intuitiveness and flexibility. However, several issues arise when implementing teleoperation schemes at the micro- and nanoscale, owing to the complex force fields that must be transmitted to users and the scaling differences between the haptic device and the manipulated objects. Major advances in this technology have been made in recent years. This chapter reviews the main systems in the area and highlights how some fundamental issues in teleoperation for micro- and nanoscale applications have been addressed. It considers three types of teleoperation: (1) direct (manipulation of real objects); (2) virtual (use of simulators); and (3) augmented (combining real robotic systems and simulators). Remaining issues that must be addressed for further advances are also discussed, including: (1) understanding the phenomena that dictate the behavior of very small objects (< 500 micrometers); and (2) the design of intuitive 3D manipulation systems. Design guidelines for realizing an intuitive haptic feedback teleoperation system at the micro- and nanoscale are proposed.
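    The scaling difficulty the chapter describes can be made concrete with a direct bilateral coupling, sketched below: master displacements are scaled down to the micro-world and measured micro-forces are scaled up for rendering. The gain values are illustrative assumptions, not figures from the chapter.
```python
# Displacement scaling: 1 cm at the master maps to 10 nm at the slave.
ALPHA_D = 1e-6
# Force scaling: 1 nN at the tip renders as 1 N at the operator's hand.
ALPHA_F = 1e9

def teleoperation_step(master_pos_m: float, micro_force_n: float):
    """One cycle of a direct coupling: returns the slave position
    setpoint (metres) and the force to render on the haptic device
    (newtons)."""
    slave_setpoint = ALPHA_D * master_pos_m    # scale motion down
    feedback_force = ALPHA_F * micro_force_n   # scale force up
    return slave_setpoint, feedback_force
```
    Choosing the two gains is the crux: their product amplifies the stiffness the operator feels, so gains large enough to make adhesion and other micro-scale force fields perceptible can also push the coupled system towards instability.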