306 research outputs found

    Visual servoing of a five-bar linkage mechanism

    This document is the written product of the graduation project "Visual Servoing of a Five-bar Linkage Mechanism". The project ventures into visual servoing, a control method that uses visual feedback. The document summarizes the theory required to realize the project and reviews how other authors have approached the method. It states the project's aims and the importance of its realization, gives a detailed description of how the work was carried out, including experiments and obstacles, and reports the results obtained. It also explains how this work can be used and what can be built on it. Finally, it lists the books, articles, and other works consulted, which in turn provide a large number of further references. Includes bibliographic references.
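    To make the control idea concrete, below is a minimal sketch of a classical image-based visual servoing law (camera twist v = -λ L⁺ e). The interaction matrix, gain, and example point features are generic textbook choices and assumptions for illustration, not the specific controller of the five-bar linkage project.

```python
# Minimal sketch of classical image-based visual servoing (IBVS).
# Generic textbook formulation; not the project's exact controller.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2,  -x * y,     -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features toward desired."""
    error = (features - desired).reshape(-1)            # stacked 2N error vector
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error             # v = -lambda * L^+ * e

# Example: three tracked points, desired positions, and rough depth estimates
s      = np.array([[0.10, 0.05], [-0.08, 0.12], [0.02, -0.10]])
s_star = np.array([[0.00, 0.00], [-0.10, 0.10], [0.05, -0.05]])
Z      = [0.8, 0.8, 0.9]
print(ibvs_velocity(s, s_star, Z))
```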

    High-precision grasping and placing for mobile robots

    This work presents a manipulation system for multiple labware items in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify the required labware on the workbench and estimate its position. Local feature recognition based on the SURF algorithm is used, and the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers are designed to manipulate labware of different weights and to ensure safe transportation.
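    As an illustration of SURF-based local feature recognition, here is a sketch that locates a known labware template in a workbench image and returns a homography. It assumes opencv-contrib-python built with the non-free modules (SURF is patented code); the thresholds and the overall H20 pipeline details are assumptions.

```python
# Sketch: SURF feature matching to locate a labware template in a scene image.
# Requires opencv-contrib-python with non-free modules enabled.
import cv2
import numpy as np

def locate_template(template_gray, scene_gray, min_matches=10):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(template_gray, None)
    kp_s, des_s = surf.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return None

    # FLANN matching with Lowe's ratio test to keep distinctive matches only
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des_t, des_s, k=2)
    good = [m_n[0] for m_n in matches
            if len(m_n) == 2 and m_n[0].distance < 0.7 * m_n[1].distance]
    if len(good) < min_matches:
        return None

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # homography mapping template corners into the scene
```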

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the fields of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping


    A robotic engine assembly pick-place system based on machine learning

    The industrial revolution brought humans and machines together in building a better future. While on the one hand repetitive jobs need to be replaced with machines to increase efficiency and production volume, on the other hand intelligent and autonomous machines still have a long way to go before they achieve the dexterity of a human. The current scenario requires a system that can use the best of both human and machine. This thesis studies an industrial use case in which human and machine combine their skills to build an autonomous pick-and-place system. The study takes a small step towards the human–robot consortium, primarily focusing on developing a vision-based system for object detection followed by a pick-and-place operation with a manipulator. The thesis can be divided into two parts: (1) scene analysis, where a Convolutional Neural Network (CNN) is used for object detection, followed by the generation of grasping points from the object edge image using an algorithm developed during this thesis; and (2) implementation, which focuses on motion generation while handling external disturbances to perform a successful pick-and-place operation. Human involvement is limited to teaching trajectory points for the robot to follow; this trajectory is used to generate an image dataset for a new object type and subsequently a new object detection model. The author primarily focuses on building a system framework in which the complexities of robot programming, such as specifying trajectory points and grasping positions, are not required: the system automatically detects the object and performs a pick-and-place operation, relieving the user of robot programming. The system is composed of a depth camera and a manipulator. The camera is the only sensor available for scene analysis, and the action is performed with a Franka manipulator; the two components work in request-response mode over ROS. The thesis introduces new approaches such as dividing a workspace image into its constituent object images before object detection, creating training data, and generating grasp points based on object shape along the length of an object. It also presents a case study in which three different objects are chosen as test objects; the experiments demonstrate the methods applied and the efficiency attained, and the case study provides a glimpse of future research and development areas.
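    One way such grasp points can be generated from an edge image is sketched below: fit the object's principal axis via PCA on the contour and distribute grasp candidates along its length. This is a hypothetical illustration of the idea, not the exact algorithm developed in the thesis; the spacing, number of points, and gripper-angle convention are assumptions.

```python
# Hypothetical sketch: grasp points distributed along an object's principal
# axis, derived from a binary edge image (OpenCV 4.x contour API).
import cv2
import numpy as np

def grasp_points_along_length(edge_img, n_points=3):
    contours, _ = cv2.findContours(edge_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    cnt = max(contours, key=cv2.contourArea)

    # PCA on contour points gives the centroid and the principal (length) axis
    pts = cnt.reshape(-1, 2).astype(np.float64)
    centroid = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
    axis = eigvecs[:, np.argmax(eigvals)]

    # Project contour points on the axis to estimate the object's extent
    proj = (pts - centroid) @ axis
    length = proj.max() - proj.min()

    # Spread candidate grasp centres along the axis, away from the object ends;
    # the gripper closes perpendicular to the principal axis
    offsets = np.linspace(-0.3, 0.3, n_points) * length
    angle = float(np.degrees(np.arctan2(axis[1], axis[0])))
    return [(tuple(centroid + t * axis), angle + 90.0) for t in offsets]
```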

    3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Sensing techniques are important for solving the problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a visual perception system for complex grasping tasks that supports the robot controller when other sensing modalities, such as tactile and force, cannot provide data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results show that our visual pipeline does not require deformation models of objects or materials and works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located at the robot forearm. The experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. The research leading to these results has received funding from the Spanish Government and European FEDER funds (DPI2015-68087R), the Valencia Regional Government (PROMETEO/2013/085), and the pre-doctoral grant BES-2013-062864.
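    A minimal sketch of the core idea, under assumptions: compare the current depth map of the grasped object against a reference depth map within the object mask and raise an event when a sufficient fraction of the surface has moved. The thresholds, mask handling, and event mechanism here are illustrative; the paper's full RGBD pipeline (segmentation, forearm reference pattern, pulse generation) is richer.

```python
# Sketch: depth-difference surface deformation detection for a grasped object.
import numpy as np

def deformation_event(depth_ref, depth_now, object_mask,
                      threshold_m=0.005, min_fraction=0.02):
    """Return True when enough of the object's surface moved beyond threshold_m."""
    valid = object_mask & (depth_ref > 0) & (depth_now > 0)   # ignore sensor holes
    if not np.any(valid):
        return False
    diff = np.abs(depth_now[valid] - depth_ref[valid])         # per-pixel change (m)
    deformed_fraction = np.mean(diff > threshold_m)
    return deformed_fraction > min_fraction
```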

    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making the interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing, through benchmarking, the impact of underwater image degradation on commonly used vision algorithms. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
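    Purely as an illustration of the kind of model involved, here is a tiny fully convolutional dehazing network in PyTorch that maps a hazy RGB frame to a restored one. The architecture, layer sizes, and names are assumptions for demonstration and do not reproduce the thesis's actual network or training setup.

```python
# Illustrative sketch: a minimal image-to-image CNN for single-image dehazing.
import torch
import torch.nn as nn

class TinyDehazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # restored RGB in [0, 1]
        )

    def forward(self, hazy):
        return self.net(hazy)

# Usage: restore one underwater frame, shape (batch, channels, height, width)
model = TinyDehazeNet().eval()
with torch.no_grad():
    restored = model(torch.rand(1, 3, 240, 320))
```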

    Learning Multi-step Robotic Manipulation Tasks through Visual Planning

    Multi-step manipulation tasks in unstructured environments are extremely challenging for a robot to learn. Such tasks interlace high-level reasoning about the expected states that must be attained to achieve the overall task with low-level reasoning that decides which actions will yield these states. A model-free deep reinforcement learning method is proposed to learn multi-step manipulation tasks. This work introduces a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel image input at real-time speeds (20 ms). The proposed model architecture achieves state-of-the-art accuracy on three standard grasping datasets. The adaptability of the approach is demonstrated by directly transferring the trained model to a 7-DoF robotic manipulator, with grasp success rates of 95.4% and 93.0% on novel household and adversarial objects, respectively. A novel Robotic Manipulation Network (RoManNet), a vision-based model architecture, is introduced to learn the action-value functions and predict manipulation action candidates. A Task Progress based Gaussian (TPG) reward function is defined to compute the reward based on actions that lead to successful motion primitives and progress towards the overall task goal. To balance exploration and exploitation, this research introduces a Loss Adjusted Exploration (LAE) policy that selects actions from the action candidates according to the Boltzmann distribution of loss estimates. The effectiveness of the proposed approach is demonstrated by training RoManNet to learn several challenging multi-step robotic manipulation tasks in both simulation and the real world. Experimental results show that the proposed method outperforms existing methods and achieves state-of-the-art performance in terms of success rate and action efficiency. Ablation studies show that TPG and LAE are especially beneficial for tasks such as stacking multiple blocks.
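    A hedged sketch of a Loss Adjusted Exploration (LAE) style selection step is given below: sample an action candidate from a Boltzmann (softmax) distribution over loss estimates. The temperature, the direction of the weighting (higher estimated loss encouraging exploration), and the function names are assumptions, not the paper's exact formulation.

```python
# Sketch: Boltzmann sampling over loss estimates for exploration/exploitation.
import numpy as np

def lae_select(action_candidates, loss_estimates, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    losses = np.asarray(loss_estimates, dtype=np.float64)
    logits = losses / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())               # numerically stable softmax
    probs /= probs.sum()
    idx = rng.choice(len(action_candidates), p=probs)
    return action_candidates[idx], probs[idx]

# Example: three manipulation candidates with hypothetical loss estimates
actions = ["grasp_A", "grasp_B", "push_C"]
print(lae_select(actions, [0.9, 0.4, 0.6], temperature=0.5))
```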