
    A Practical and Effective Layout for a Safe Human-Robot Collaborative Assembly Task

    This work describes a layout for a demonstrative assembly task in which a collaborative robot performs pick-and-place operations to supply the operator with the parts to be assembled. In this scenario, the robot and the operator share the workspace, and a real-time collision avoidance algorithm modifies the planned trajectories of the robot so that it avoids any collision with the human worker. The movements of the operator are tracked by two Microsoft Kinect v2 sensors to overcome the occlusion and poor-perception problems of a single camera. The data obtained by the two Kinect sensors are combined and then given as input to the collision avoidance algorithm. The experimental results show the effectiveness of the collision avoidance algorithm and the significant gain in task times that the highest level of human-robot collaboration can bring.
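The fusion step described above, merging the clouds from the two registered Kinect sensors and querying the minimum human-robot distance fed to the collision avoidance algorithm, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name is invented, both clouds are assumed already expressed in a common world frame, and the brute-force pairwise search stands in for the spatial indexing a real-time system would use.

```python
import numpy as np

def min_human_robot_distance(cloud_a, cloud_b, robot_points):
    """Merge the human point clouds from two Kinect sensors (already
    expressed in a common world frame) and return the minimum
    Euclidean distance to a set of points sampled on the robot."""
    human = np.vstack([cloud_a, cloud_b])            # (N, 3) fused cloud
    # Pairwise distances between every human point and every robot point.
    diffs = human[:, None, :] - robot_points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return float(dists.min())
```

The collision avoidance algorithm would then compare this distance against a safety threshold to decide whether to deform the planned trajectory.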

    Development of a methodology for the human-robot interaction based on vision systems for collaborative robotics

    The abstract is in the attachment.

    Dynamic Speed and Separation Monitoring with On-Robot Ranging Sensor Arrays for Human and Industrial Robot Collaboration

    This research presents a flexible and dynamic implementation of the Speed and Separation Monitoring (SSM) safety measure that optimizes the productivity of a task while ensuring human safety during Human-Robot Collaboration (HRC). Unlike the standard static, fixed, demarcated 2D safety zones based on 2D scanning LiDARs, this research presents a dynamic sensor setup that changes the safety zones based on the robot's pose and motion. The focus of this research is the implementation of a dynamic SSM safety configuration using Time-of-Flight (ToF) laser-ranging sensor arrays placed around the centers of the links of a robot arm. It investigates the viability of on-robot exteroceptive sensors for implementing SSM as a safety measure. The implementation of varying dynamic SSM safety configurations is shown, based on approaches to measuring the human-robot separation distance and relative speeds using the sensor modalities of ToF sensor arrays, a motion-capture system, and a 2D LiDAR. This study presents a comparative analysis of the dynamic SSM safety configurations in terms of safety, performance, and productivity. A system-of-systems (cyber-physical system) architecture for conducting and analyzing the HRC experiments was proposed and implemented. The robots, objects, and human operators sharing the workspace are represented virtually as part of the system using a digital-twin setup. This system was capable of controlling the robot motion, monitoring the human physiological response, and tracking the progress of the collaborative task. This research conducted experiments with human subjects performing a task while sharing the robot workspace under the proposed dynamic SSM safety configurations. The experimental results showed a preference for the use of ToF sensors and motion capture over the 2D LiDAR currently used in industry. The human subjects felt safe and comfortable using the proposed dynamic SSM safety configuration with ToF sensor arrays. The results for a standard pick-and-place task showed up to a 40% increase in productivity in comparison to a 2D LiDAR.
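The SSM safety measure this abstract builds on is specified in ISO/TS 15066; a simplified sketch of the protective separation distance, assuming constant speeds over the reaction and stopping intervals (the standard integrates the actual speed profiles), could look like this. All parameter values are illustrative.

```python
def protective_separation_distance(v_h, v_r, t_r, t_s, C=0.1, Z_d=0.05, Z_r=0.02):
    """Minimum protective separation distance S_p in the spirit of the
    ISO/TS 15066 speed-and-separation-monitoring formulation, under a
    constant-speed simplification.
    v_h: human speed toward the robot [m/s]
    v_r: robot speed toward the human [m/s]
    t_r: robot reaction time [s]; t_s: robot stopping time [s]
    C: intrusion distance [m]; Z_d, Z_r: sensor and robot position uncertainty [m]"""
    s_h = v_h * (t_r + t_s)   # distance the human covers before the robot halts
    s_r = v_r * t_r           # distance the robot covers during its reaction time
    s_s = v_r * t_s           # braking distance (constant-speed approximation)
    return s_h + s_r + s_s + C + Z_d + Z_r
```

A dynamic SSM setup like the one described recomputes this bound continuously from the measured separation and relative speeds instead of enforcing one fixed zone.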

    Learning to grasp in unstructured environments with deep convolutional neural networks using a Baxter Research Robot

    Recent advancements in Deep Learning have accelerated the capabilities of robotic systems in terms of visual perception, object manipulation, automated navigation, and human-robot collaboration. The capability of a robotic system to manipulate objects in unstructured environments is becoming an increasingly necessary skill. Due to the dynamic nature of these environments, traditional methods that require expert human knowledge fail to adapt automatically. After reviewing the relevant literature, a method was proposed to utilise deep transfer learning techniques to detect object grasps from coloured depth images. A grasp describes how a robotic end-effector can be arranged to securely grasp an object and successfully lift it without slippage. In this study, a ResNet-50 convolutional neural network (CNN) model is trained on the Cornell grasp dataset. The training was completed within 30 hours on a workstation PC with GPU acceleration via an NVIDIA Titan X. The trained grasp detection model was further evaluated with a Baxter research robot and a Microsoft Kinect v2, and a grasp detection accuracy of 93.91% was achieved on a diverse set of novel objects. Physical grasping trials were conducted on a set of 8 different objects. The overall system achieves an average grasp success rate of 65.0% while performing grasp detection in under 25 milliseconds. The results analysis concluded that objects with reasonably straight edges and moderately pronounced heights above the table are easily detected and grasped by the system.
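As a small illustration of the training data involved: the Cornell grasp dataset represents each grasp as an oriented rectangle in the image, and a common preprocessing step converts its four corners into the (x, y, theta, width, height) pose a CNN can regress. The corner-ordering convention assumed here (first edge along the gripper's closing direction; conventions vary) and the function name are illustrative.

```python
import numpy as np

def rectangle_to_grasp(corners):
    """Convert a Cornell-style grasp rectangle (4 ordered corner points)
    into a (x, y, theta, width, height) grasp pose.
    corners: (4, 2) array of pixel coordinates."""
    corners = np.asarray(corners, dtype=float)
    center = corners.mean(axis=0)                      # (x, y) rectangle centre
    edge = corners[1] - corners[0]                     # gripper-closing edge
    theta = np.arctan2(edge[1], edge[0])               # grasp angle in radians
    width = np.linalg.norm(edge)                       # jaw opening
    height = np.linalg.norm(corners[2] - corners[1])   # gripper plate length
    return center[0], center[1], theta, width, height
```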

    Development of a perception module for robotic manipulation tasks

    Robots performing manipulation tasks require the accurate location and orientation of an object in space. Previously, at the Robotics Laboratory of IOC-UPC, this data was generated artificially. In order to automate the process, a perception module has been developed that provides task and motion planners with the localization and pose estimation of objects used in robot manipulation tasks. The Robot Operating System provided a great framework for incorporating vision from Microsoft Kinect v2 sensors and presenting the obtained data for use in the generation of Planning Domain Definition Language files, which define a robot's environment. Localization and pose estimation were done using fiducial markers, along with a study of possible enhancements using deep learning methods. Careful hardware calibration and system setup play a big role in enhancing perception accuracy, and while fiducial markers provide a simple and robust solution in laboratory conditions, real-world applications with varying lighting, viewing angles, and partial occlusions should rely on AI vision.
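Fiducial-marker detectors (e.g., ArUco-style libraries) typically return a pose as an axis-angle rotation vector plus a translation in camera coordinates; to hand a planner poses in a common frame, these are usually assembled into a 4x4 homogeneous transform. A minimal sketch, with Rodrigues' formula written out and an illustrative function name:

```python
import numpy as np

def pose_to_matrix(rvec, tvec):
    """Build a 4x4 homogeneous transform from an axis-angle rotation
    vector and a translation (the pose format returned by typical
    fiducial-marker detectors)."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)                       # zero rotation
    else:
        k = rvec / theta                    # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],   # cross-product matrix of k
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' rotation formula
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec
    return T
```

Chaining such transforms (marker-to-camera, camera-to-robot) yields the object pose in the planner's frame.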

    Real-time Target Tracking and Following with UR5 Collaborative Robot Arm

    The rise of camera usage and availability creates opportunities for developing robotics and computer vision applications. In particular, recent developments in depth sensing (e.g., Microsoft Kinect) allow the development of new methods in the Human-Robot Interaction (HRI) field. Moreover, collaborative robots (co-bots) are being adopted by the manufacturing industry. This thesis focuses on HRI using the capabilities of the Microsoft Kinect, the Universal Robot 5 (UR5), and the Robot Operating System (ROS). In this particular study, the movement of a fingertip is perceived and the same movement is repeated on the robot side. Seamless cooperation, accurate trajectories, and safety during the collaboration are the most important parts of HRI. The study aims to recognize and track the fingertip accurately and to transform its motion into the motion of the UR5. It also aims to improve the motion performance of the UR5 and the interaction efficiency during collaboration. In the experimental part, a nearest-point approach is used via the Kinect sensor's depth image (RGB-D). The approach is based on the Euclidean distance, which is robust across different environments. Moreover, the Point Cloud Library (PCL) and its built-in filters are used for processing the depth data. After the depth data provided by the Microsoft Kinect have been processed, the difference of the nearest points between frames is transmitted to the robot via ROS. On the robot side, the MoveIt! motion planner is used for a smooth trajectory. Once the data had been processed successfully and the motion code implemented without bugs, a total accuracy of 84.18% was achieved. After improvements in motion planning and data processing, the total accuracy increased to 94.14%. Lastly, the latency was reduced from 3-4 seconds to 0.14 seconds.
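The nearest-point approach described above can be sketched in a few lines: the fingertip is taken to be the point of the filtered cloud closest to the sensor origin, and the frame-to-frame displacement of that point is what gets sent to the robot. A hypothetical NumPy sketch (the thesis uses PCL in C++; names here are illustrative):

```python
import numpy as np

def nearest_point(points):
    """Return the cloud point closest to the sensor origin; the
    nearest-point heuristic assumes the user's fingertip is the
    closest object in the filtered depth cloud."""
    points = np.asarray(points, dtype=float)
    return points[np.argmin(np.linalg.norm(points, axis=1))]

def fingertip_delta(prev_cloud, curr_cloud):
    """Displacement of the nearest point between two frames; this
    delta is what a system like the one described would transmit to
    the robot as a relative motion command."""
    return nearest_point(curr_cloud) - nearest_point(prev_cloud)
```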

    Human-robot coexistence and interaction in open industrial cells

    Recent research results on human–robot interaction and collaborative robotics are leaving behind the traditional paradigm of robots living in a separate space inside safety cages, allowing humans and robots to work together to complete an increasing number of complex industrial tasks. In this context, the safety of the human operator is a main concern. In this paper, we present a framework for ensuring human safety in a robotic cell that allows human–robot coexistence and dependable interaction. The framework is based on a layered control architecture that exploits an effective algorithm for online monitoring of the relative human–robot distance using depth sensors. This method allows the robot behavior to be modified in real time depending on the user's position, without limiting the operative robot workspace in an overly conservative way. In order to guarantee redundancy and diversity at the safety level, additional certified laser scanners monitor human–robot proximity in the cell, and safe communication protocols and logical units are used for smooth integration with industrial software for safe low-level robot control. The implemented concept includes a smart human-machine interface to support in-process collaborative activities and contactless interaction with gesture recognition of operator commands. Coexistence and interaction are illustrated and tested in an industrial cell, in which a robot moves a tool that measures the quality of a polished metallic part while the operator performs a close evaluation of the same workpiece.
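The real-time behavior modification described above amounts to mapping the monitored human-robot distance to a velocity scaling factor. A minimal sketch of such a layered stop/slow/full-speed policy, with purely illustrative thresholds (the paper's actual control law is not reproduced here):

```python
def speed_scale(distance, d_stop=0.3, d_slow=1.0):
    """Map the monitored human-robot distance [m] to a velocity
    scaling factor: full stop inside d_stop, a linear ramp between
    d_stop and d_slow, and full speed beyond d_slow."""
    if distance <= d_stop:
        return 0.0
    if distance >= d_slow:
        return 1.0
    return (distance - d_stop) / (d_slow - d_stop)
```

Keeping the slow-down zone tied to the measured distance, rather than to a fixed cage boundary, is what avoids the overly conservative workspace restriction the abstract mentions.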

    Development of an Autonomous Indoor Phenotyping Robot

    In order to fully understand the interaction between phenotype and genotype × environment to improve crop performance, a large amount of phenotypic data is needed. Studying plants of a given strain under multiple environments can greatly help to reveal their interactions. To collect the labor-intensive data required for experiments in this area, an indoor rover has been developed that can accurately and autonomously move between and inside growth chambers. The system uses mecanum wheels, magnetic tape guidance, a Universal Robots UR10 robot manipulator, and a Microsoft Kinect v2 3D sensor to position various sensors in this constrained environment. Integration of the motor controllers, robot arm, and Kinect sensor was achieved in a customized C++ program. Detecting and segmenting plants in a multi-plant environment is a challenging task, which can be aided by integrating depth data into these algorithms. Image-processing functions were implemented to filter the depth image to minimize noise and remove undesired surfaces, reducing the memory requirement and allowing the plant to be reconstructed at a higher resolution in real time. Three-dimensional meshes representing plants inside the chamber were reconstructed using the Kinect SDK's KinectFusion. After transforming user-selected points from camera coordinates to robot-arm coordinates, the robot arm is used in conjunction with the rover to probe desired leaves, simulating the future use of sensors such as a fluorimeter and a Raman spectrometer. This paper shows the system architecture and some preliminary results, as tested using a life-sized growth chamber mock-up. A comparison between using raw camera-coordinate data and using KinectFusion data is presented. The results suggest that the KinectFusion pose estimation is fairly accurate, decreasing accuracy by only a few millimeters at distances of roughly 0.8 meters.
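The depth-filtering step mentioned above, suppressing noise and undesired surfaces before reconstruction, can be sketched as a simple working-volume clamp. The thresholds and function name are illustrative, not the authors' actual pipeline:

```python
import numpy as np

def filter_depth(depth, z_min=0.4, z_max=1.2):
    """Clamp a depth image to the working volume: pixels outside
    [z_min, z_max] metres (sensor noise, chamber walls, surfaces
    behind the plant) are zeroed out so they are excluded from the
    3D reconstruction."""
    out = depth.copy()
    out[(out < z_min) | (out > z_max)] = 0.0
    return out
```

Discarding out-of-volume pixels early is also what reduces the memory requirement the abstract mentions, since fewer voxels need to be fused.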

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless, and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope, and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
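As an illustration of how gaze selection of the kind described is commonly triggered (the abstract does not specify the selection mechanism, so this is a generic dwell-time scheme with invented names and thresholds):

```python
def dwell_select(gaze_samples, roi, dwell_frames=30):
    """Trigger a selection when the gaze stays inside a region of
    interest for a given number of consecutive samples, a common
    dwell-time scheme for touchless gaze interfaces.
    gaze_samples: iterable of (x, y); roi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    run = 0
    for x, y in gaze_samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            run += 1                      # fixation continues inside the ROI
            if run >= dwell_frames:
                return True               # dwell threshold reached: select
        else:
            run = 0                       # gaze left the ROI: reset the count
    return False
```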

    Experimental Procedure for the Metrological Characterization of Time-of-Flight Cameras for Human Body 3D Measurements

    Time-of-flight cameras are widely adopted in a variety of indoor applications ranging from industrial object measurement to human activity recognition. However, the available products may differ in terms of the quality of the acquired point cloud, and the datasheets provided by the manufacturers may not be enough to guide researchers in choosing the right device for their application. Hence, this work details an experimental procedure to assess the error sources of time-of-flight cameras that should be considered when designing an application involving time-of-flight technology, such as the bias correction and the influence of temperature on point cloud stability. This is a first step towards a standardization of the metrological characterization procedure that could ensure the robustness and comparability of results among tests and different devices. The procedure was conducted on the Kinect Azure, Basler Blaze 101, and Basler ToF 640 cameras. Moreover, we compared the devices in the task of 3D reconstruction, following a procedure involving the measurement of both an object and a human upper-body-shaped mannequin. The experiment highlighted that, despite the results of the previously conducted metrological characterization, some devices showed evident difficulties in reconstructing the target objects. Thus, we proved that performing a rigorous evaluation procedure similar to the one proposed in this paper is always necessary when choosing the right device.
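The bias correction mentioned above is typically estimated from repeated captures of a flat target at a known distance; a minimal sketch of the per-pixel statistics such a characterization yields (function name illustrative, not the paper's protocol):

```python
import numpy as np

def depth_bias(frames, true_distance):
    """Estimate the per-pixel systematic bias of a ToF camera from
    repeated captures of a flat target at a known distance: bias is
    the temporal mean measurement minus ground truth, and the
    per-pixel standard deviation quantifies temporal noise.
    frames: (T, H, W) stack of depth images in metres."""
    frames = np.asarray(frames, dtype=float)
    bias = frames.mean(axis=0) - true_distance   # systematic error map
    noise = frames.std(axis=0)                   # temporal noise map
    return bias, noise
```

Repeating this measurement as the camera warms up is one way to expose the temperature influence on point cloud stability that the abstract highlights.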