
    Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room

    Scene supervision is a major tool for making medical robots safer and more intuitive. This paper shows an approach to using 3D cameras efficiently within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate.
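
    As a minimal illustrative sketch (not the paper's method) of one ingredient of such scene supervision, the following computes the minimum separation between points classified as a person and a set of robot keypoints, which a supervisor could use to trigger a slowdown. All names and the 0.5 m threshold are assumptions.

        # Hypothetical safety check: smallest distance between a person point
        # cloud and robot keypoints, given as N x 3 and M x 3 arrays in metres.
        import numpy as np

        def min_separation(person_points, robot_points):
            # Pairwise distances via broadcasting: (N, 1, 3) - (1, M, 3) -> (N, M, 3)
            diff = person_points[:, None, :] - robot_points[None, :, :]
            return np.sqrt((diff ** 2).sum(axis=2)).min()

        persons = np.random.rand(500, 3) * 3.0    # stand-in for segmented person points
        robot = np.random.rand(20, 3)             # stand-in for robot link keypoints
        if min_separation(persons, robot) < 0.5:  # assumed safety threshold (metres)
            print("reduce robot speed")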

    Efficient Implementation of Parallel Path Planning Algorithms on GPUs

    In robot systems, several computationally intensive tasks can be found, with path planning being one of them. Especially in dynamically changing environments, it is difficult to meet real-time constraints with a serial processing approach. For those systems using standard computers, a promising option is to employ a GPGPU as a coprocessor in order to offload those tasks which can be efficiently parallelized. We implemented selected parallel path planning algorithms on NVIDIA's CUDA platform and were able to accelerate all of these algorithms efficiently compared to a multi-core implementation. We present the results and more detailed information about the implementation of these algorithms.
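
    As a rough sketch of the kind of data parallelism such planners exploit, the following relaxes a wavefront cost field on a 2D occupancy grid with one GPU thread per cell, written with numba's CUDA support purely as an illustration; the paper's algorithms and platform details are not reproduced.

        # Wavefront (brushfire) relaxation: each launch lets every free cell
        # take the cheapest neighbour cost + 1; repeated launches propagate
        # costs outward from the goal until nothing changes.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def expand_wavefront(cost, occupied, changed):
            x, y = cuda.grid(2)
            nx, ny = cost.shape
            if x >= nx or y >= ny or occupied[x, y] == 1:
                return
            best = cost[x, y]
            if x > 0 and cost[x - 1, y] + 1 < best:
                best = cost[x - 1, y] + 1
            if x < nx - 1 and cost[x + 1, y] + 1 < best:
                best = cost[x + 1, y] + 1
            if y > 0 and cost[x, y - 1] + 1 < best:
                best = cost[x, y - 1] + 1
            if y < ny - 1 and cost[x, y + 1] + 1 < best:
                best = cost[x, y + 1] + 1
            if best < cost[x, y]:
                cost[x, y] = best
                changed[0] = 1  # benign race: all writers store the same value

        cost = np.full((256, 256), 1e9, dtype=np.float32)
        cost[10, 10] = 0.0                              # goal cell
        d_cost = cuda.to_device(cost)
        d_occ = cuda.to_device(np.zeros((256, 256), dtype=np.uint8))
        for _ in range(512):                            # upper bound on sweeps
            d_changed = cuda.to_device(np.zeros(1, dtype=np.uint8))
            expand_wavefront[(16, 16), (16, 16)](d_cost, d_occ, d_changed)
            if d_changed.copy_to_host()[0] == 0:        # converged
                break

    Following the resulting cost field downhill from any start cell then yields a shortest 4-connected path to the goal.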

    Event-Driven Technologies for Reactive Motion Planning: Neuromorphic Stereo Vision and Robot Path Planning and Their Application on Parallel Hardware

    Robotics is increasingly becoming a key factor in technological progress. Despite impressive advances over recent decades, mammalian brains still outperform even the most powerful machines in vision and motion planning. Industrial robots are very fast and precise, but their planning algorithms are not capable enough for highly dynamic environments such as those required for human-robot collaboration (HRC). Without fast and adaptive motion planning, safe HRC cannot be guaranteed. Neuromorphic technologies, including visual sensors and hardware chips, operate asynchronously and thus process spatio-temporal information very efficiently. Event-based visual sensors in particular already outperform conventional, synchronous cameras in many applications. Event-based methods therefore hold great potential for enabling faster and more energy-efficient motion control algorithms in HRC. This thesis presents an approach to flexible, reactive motion control of a robot arm, in which exteroception is achieved through event-based stereo vision and path planning is implemented in a neural representation of the configuration space. The multi-view 3D reconstruction is evaluated through a qualitative analysis in simulation and transferred to a stereo system of event-based cameras. A demonstrator with an industrial robot is used to evaluate the reactive, collision-free online planning; it also serves for a comparative study against sampling-based planners. This is complemented by a benchmark of parallel hardware solutions, with robotic path planning chosen as the test scenario. The results show that the proposed neural solutions are an effective way of realizing robot control for dynamic scenarios. This work lays a foundation for neural solutions in adaptive manufacturing processes, including collaboration with humans, without sacrificing speed or safety. It thus paves the way for integrating brain-inspired hardware and algorithms into industrial robotics and HRC.
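
    To make the event-based sensing concrete, here is a small sketch of the principle an event camera implements in hardware: a pixel fires an event whenever its log-intensity changes by more than a contrast threshold. Simulating this from two conventional frames is an illustration only, not part of the thesis.

        # Generate (x, y, t, polarity) events from two frames: a pixel emits an
        # event when |log I_next - log I_prev| exceeds the contrast threshold C.
        import numpy as np

        C = 0.2  # assumed contrast threshold

        def events_between(frame_prev, frame_next, t):
            d = np.log(frame_next + 1e-6) - np.log(frame_prev + 1e-6)
            ys, xs = np.nonzero(np.abs(d) >= C)
            return [(x, y, t, 1 if d[y, x] > 0 else -1) for x, y in zip(xs, ys)]

        prev = np.random.rand(480, 640)                  # stand-in camera frames
        nxt = prev.copy(); nxt[200:220, 300:340] *= 1.5  # a brightening patch
        print(len(events_between(prev, nxt, t=0.01)), "events")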

    Autonomous Quadrotor Navigation by Detecting Vanishing Points in Indoor Environments

    Toward the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses various perception and control problems in autonomous aerial robotics. The objective of this thesis is to motivate the use of perspective cues in single images for the planning and control of quadrotors in indoor environments. In addition to providing empirical evidence for the abundance of such cues in indoor environments, the usefulness of these perspective cues is demonstrated by designing a control algorithm for navigating a quadrotor in indoor corridors. An Extended Kalman Filter (EKF), implemented on top of the vision algorithm, serves to improve the robustness of the algorithm to changing illumination. In this thesis, vanishing points are the perspective cues used to control and navigate a quadrotor in an indoor corridor. Indoor corridors are an abundant source of parallel lines. As a consequence of perspective projection, parallel lines in the real world that are not parallel to the image plane of the camera intersect at a point in the image. This point is called the vanishing point of the image. The vanishing point is sensitive to the lateral motion of the camera, and hence of the quadrotor. By tracking the position of the vanishing point in every image frame, the quadrotor can navigate along the center of the corridor. Experiments are conducted using the Augmented Reality (AR) Drone 2.0. The drone is equipped with the following components: (1) a 720p forward-facing camera for vanishing point detection, (2) a 240p downward-facing camera, (3) an Inertial Measurement Unit (IMU) for attitude control, (4) an ultrasonic sensor for estimating altitude, and (5) an on-board 1 GHz processor for processing low-level commands. The reliability of the vision algorithm is demonstrated by flying the drone in indoor corridors.
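
    A minimal sketch of the geometric core described above: the vanishing point is the least-squares intersection of the detected corridor lines (each written as ax + by + c = 0), and its horizontal offset from the image centre can drive a simple proportional steering command. The line values, image width and gain are illustrative assumptions, not the thesis implementation.

        import numpy as np

        def vanishing_point(lines):
            # Solve A @ [x, y]^T ~= -c in the least-squares sense,
            # where each row of `lines` is (a, b, c) for ax + by + c = 0.
            A, c = lines[:, :2], -lines[:, 2]
            vp, *_ = np.linalg.lstsq(A, c, rcond=None)
            return vp

        lines = np.array([[0.5, -1.0, 120.0],     # dummy left corridor edge
                          [-0.5, -1.0, 520.0]])   # dummy right corridor edge
        vp = vanishing_point(lines)               # here: (400, 320)
        lateral_error = vp[0] - 640 / 2           # offset from centre of a 640 px image
        roll_cmd = -0.002 * lateral_error         # assumed proportional gain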

    High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments

    This thesis presents the development of an aerial robotic testbed based on the Robot Operating System (ROS). The purpose of this high-performance testbed is to develop a system capable of performing robust navigation tasks using vision tools such as a stereo camera. While ensuring the computation of robot odometry, the system is also capable of sensing the environment using the same stereo camera. Hence, all the navigation tasks are performed using a stereo camera and an inertial measurement unit (IMU) as the main sensor suite. ROS is used as a framework for software integration due to its capabilities to provide efficient communication and sensor interfaces. Moreover, it allows us to use C++, which is efficient in performance, especially on embedded platforms. Combining ROS and C++ provides the necessary computational efficiency and tools to handle fast, real-time image processing and planning, which are the vital parts of navigation and obstacle avoidance at this scale. The main application of this work revolves around proposing a real-time and efficient way to demonstrate vision-based navigation in UAVs. The proposed approach is developed for a quadrotor UAV which is capable of performing defensive maneuvers when obstacles are in its way, while constantly moving towards a user-defined final destination. Stereo depth computation adds a third axis to the two-dimensional image coordinate frame; this can be referred to as the depth image space or depth image coordinate frame. The idea of planning in this frame of reference is utilized along with certain precomputed action primitives. The formulation of these action primitives leads to a hybrid control law for feasible trajectory generation. Further, a proof of stability of this system is also presented. The proposed approach keeps in view the fact that, while performing fast maneuvers and obstacle avoidance simultaneously, many of the standard optimization approaches might not work in real time on board due to time and resource limitations. This leads to a need for the development of real-time techniques for vision-based autonomous navigation.
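
    The following sketch illustrates the depth-image-space idea from the abstract under assumed camera intrinsics: each waypoint of a precomputed action primitive is projected into pixel coordinates, and the primitive is rejected if the measured depth at that pixel is closer than the waypoint, i.e. an obstacle blocks the path. It is a simplification, not the testbed's planner.

        import numpy as np

        FX = FY = 420.0          # assumed focal lengths (pixels)
        CX, CY = 320.0, 240.0    # assumed principal point

        def primitive_is_free(depth_img, waypoints, margin=0.3):
            # waypoints: N x 3 camera-frame points (x, y, z), z pointing forward
            for x, y, z in waypoints:
                u = int(FX * x / z + CX)
                v = int(FY * y / z + CY)
                if not (0 <= u < depth_img.shape[1] and 0 <= v < depth_img.shape[0]):
                    continue                      # waypoint leaves the field of view
                if depth_img[v, u] < z + margin:  # scene surface closer than waypoint
                    return False
            return True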

    Physical Interaction of Autonomous Robots in Complex Environments

    Recent breakthroughs in the fields of computer vision and robotics are firmly changing people's perception of robots. The idea of robots that substitute humans is now turning into robots that collaborate with them. Service robotics considers robots as personal assistants: it safely places robots in domestic environments in order to facilitate humans' daily life. Industrial robotics is now reconsidering its basic idea of the robot as a worker. Currently, the primary method to guarantee personnel safety in industrial environments is the installation of physical barriers around the working area of robots. The development of new technologies and new algorithms in the sensor field and in robotics has led to a new generation of lightweight and collaborative robots. Therefore, industrial robotics has leveraged the intrinsic properties of this kind of robot to generate a robot co-worker that is able to safely coexist, collaborate and interact inside its workspace with both personnel and objects. This Ph.D. dissertation focuses on the generation of a pipeline for fast object pose estimation and distance computation of moving objects, in both structured and unstructured environments, using RGB-D images. This pipeline outputs the command actions which let the robot complete its main task and fulfil the safe human-robot coexistence behaviour at once. The proposed pipeline is divided into an object segmentation part, a 6 D.o.F. object pose estimation part and a real-time collision avoidance part for safe human-robot coexistence. Firstly, the segmentation module finds candidate object clusters in RGB-D images of cluttered scenes using a graph-based image segmentation technique. This technique generates a cluster of pixels for each object found in the image. The candidate object clusters are then fed as input to the 6 D.o.F. object pose estimation module. The latter is in charge of estimating both the translation and the orientation in 3D space of each candidate object cluster. The object pose is then employed by the robotic arm to compute a suitable grasping policy. The last module generates a force vector field of the environment surrounding the robot, the objects and the humans. This force vector field drives the robot toward its goal while any potential collision with objects and/or humans is safely avoided. This work has been carried out at Politecnico di Torino, in collaboration with Telecom Italia S.p.A.
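
    As a hedged sketch of the pipeline's final stage, the classic artificial potential field formulation (Khatib-style) below combines an attractive pull toward the goal with repulsive pushes from obstacle points extracted from the RGB-D data; the resulting vector is used as a velocity command. Gains and the influence radius are assumptions, not the dissertation's tuning.

        import numpy as np

        def command_velocity(ee_pos, goal, obstacle_points,
                             k_att=1.0, k_rep=0.05, rho0=0.4):
            force = k_att * (goal - ee_pos)            # attractive term
            for p in obstacle_points:
                d = np.linalg.norm(ee_pos - p)
                if 1e-6 < d < rho0:                    # inside influence radius rho0
                    # Gradient of the classic repulsive potential.
                    force += k_rep * (1.0/d - 1.0/rho0) / d**2 * (ee_pos - p) / d
            return force                               # interpreted as a velocity command

        v = command_velocity(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                             [np.array([0.5, 0.1, 0.0])])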

    Industrial Robot Collision Handling in Harsh Environments

    The focus in this thesis is on robot collision handling systems, mainly collision detection and collision avoidance for industrial robots operating in harsh environments (e.g. potentially explosive atmospheres found in the oil and gas sector). Collision detection should prevent the robot from colliding and therefore avoid a potential accident. Collision avoidance builds on the concept of collision detection and aims at enabling the robot to find a collision-free path circumventing the obstacle and leading to the goal position. The work has been done in collaboration with the ABB Process Automation Division with a focus on applications in oil and gas. One of the challenges in this work has been to contribute to safer use of industrial robots in potentially explosive environments. One of the main ideas is that a robot should be able to work together with a human as a robotic co-worker on, for instance, an oil rig. The robot should then perform heavy lifting and precision tasks, while the operator controls the steps of the operation, typically through a hand-held interface. In such situations, when the human works alongside the robot in potentially explosive environments, it is important that the robot has a way of handling collisions. The work in this thesis presents solutions for collision detection in papers A, B and C, and thereafter solutions for collision avoidance in papers D and E. Paper A approaches the problem of collision detection by comparing an expert system and a hidden Markov model (HMM) approach. An industrial robot equipped with a laser scanner is used to gather environment data on an arbitrary set of points in the work cell. The two methods are used to detect obstacles within the work cell and show different strengths: the expert system has an advantage in algorithm performance, while the HMM method's strength is the ease with which it learns models of the environment. Paper B builds upon paper A by incorporating a CAD model of the environment. The CAD model allows for a very fast setup of the expert system, where no manual map creation is needed. The HMM can be trained based on the CAD model, which addresses the previous dependency on real sensor data for training purposes. Paper C compares two different world-model representation techniques, namely octrees and point clouds, using both a graphics processing unit (GPU) and a central processing unit (CPU). The GPU showed its strength for uncompressed point clouds and high-resolution point cloud models. However, if the resolution gets low enough, the CPU starts to outperform the GPU. This shows that parallel problems containing large data sets are suitable for GPU processing, but smaller parallel problems are still handled better by the CPU. In paper D, real-time collision avoidance is studied for a lightweight industrial robot using a development platform controller. A Microsoft Kinect sensor is used for capturing 3D depth data of the environment. The environment data is used together with an artificial potential fields method for generating virtual forces used for obstacle avoidance. The forces are projected onto the end-effector, preventing collision with the environment while moving towards the goal. Forces are also projected onto the elbow of the 7-degree-of-freedom robot, which allows for nullspace movement. The algorithms for manipulating the sensor data and calculating the virtual forces were developed for the GPU; this resulted in fast algorithms and is the enabling factor for real-time collision avoidance.
    Finally, paper E builds on the work in paper D by providing a framework for using the algorithms on a standard industrial controller and robot with minimal modifications. Further, algorithms were specifically developed for the robot controller to handle reactive movement. In addition, a full collision avoidance system that is very simple to implement in an end-user application is presented. The work described in this thesis presents solutions for collision detection and collision avoidance for safer use of robots. The work is also a step towards making businesses more competitive by enabling easy integration of collision handling for industrial robots.
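
    A compact sketch of the force-projection idea from paper D, under assumed placeholder Jacobians: an end-effector force maps to joint torques through the transposed Jacobian, while an elbow force is filtered through the nullspace projector of the end-effector Jacobian so that it produces only self-motion of the redundant arm. This is the textbook construction, not the papers' controller code.

        import numpy as np

        def joint_torques(J_ee, f_ee, J_elbow, f_elbow):
            tau_task = J_ee.T @ f_ee                   # primary task at the end-effector
            # Nullspace projector of the end-effector Jacobian (pseudo-inverse based).
            N = np.eye(J_ee.shape[1]) - np.linalg.pinv(J_ee) @ J_ee
            tau_null = N @ (J_elbow.T @ f_elbow)       # secondary task at the elbow
            return tau_task + tau_null

        J_ee = np.random.rand(6, 7)     # placeholder 6 x 7 Jacobian for a 7-DoF arm
        J_elbow = np.random.rand(3, 7)  # placeholder positional Jacobian at the elbow
        tau = joint_torques(J_ee, np.zeros(6), J_elbow, np.array([0.0, 0.0, 1.0]))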

    Massively parallelizing the RRT and the RRT*

    In recent years, the growth of the computational power available in the Central Processing Units (CPUs) of consumer computers has tapered significantly. At the same time, growth in the computational power available in Graphics Processing Units (GPUs) has remained strong. Algorithms that can be implemented on GPUs today are not limited to graphics processing, but include scientific computation and beyond. This paper is concerned with massively parallel implementations of incremental sampling-based robot motion planning algorithms, namely the widely used Rapidly-exploring Random Tree (RRT) algorithm and its asymptotically optimal counterpart called RRT*. We demonstrate an example implementation of the RRT and RRT* motion-planning algorithms for a high-dimensional robotic manipulator that takes advantage of an NVIDIA CUDA-enabled GPU. We focus on parallelizing the collision-checking procedure, which is generally recognized as the most computationally expensive component of sampling-based motion planning algorithms. Our experimental results indicate significant speedups when compared to CPU implementations, leading to practical algorithms for optimal motion planning in high-dimensional configuration spaces.
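
    A hedged sketch of the parallelization the paper focuses on: the states interpolated along a candidate RRT edge are collision-checked in parallel, one state per GPU thread, here against spherical obstacles as a stand-in for the manipulator geometry and using numba's CUDA support rather than the paper's code.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def check_states(states, obstacles, hit):
            # states: M x 3 points along the edge; obstacles: K x 4 rows (x, y, z, radius)
            i = cuda.grid(1)
            if i >= states.shape[0]:
                return
            for j in range(obstacles.shape[0]):
                dx = states[i, 0] - obstacles[j, 0]
                dy = states[i, 1] - obstacles[j, 1]
                dz = states[i, 2] - obstacles[j, 2]
                if dx*dx + dy*dy + dz*dz < obstacles[j, 3] ** 2:
                    hit[0] = 1            # any colliding state rejects the whole edge

        start, goal = np.zeros(3), np.ones(3)
        states = np.linspace(start, goal, 128).astype(np.float32)   # edge discretization
        obstacles = np.array([[0.5, 0.5, 0.5, 0.1]], dtype=np.float32)
        d_hit = cuda.to_device(np.zeros(1, dtype=np.uint8))
        check_states[1, 128](cuda.to_device(states), cuda.to_device(obstacles), d_hit)
        edge_is_free = d_hit.copy_to_host()[0] == 0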