10 research outputs found

    Technical Evaluation of the Carolo-Cup 2014 - A Competition for Self-Driving Miniature Cars

    The Carolo-Cup, held for the eighth time this year, is an international student competition focusing on autonomous driving scenarios implemented on 1:10-scale car models. Three practical sub-competitions have to be completed, and together they represent a complex, interdisciplinary challenge: students must cope with all the core topics usually addressed by robotic applications, such as mechanical development, electronic design, and programming. In this paper we introduce the competition challenges in detail and evaluate the results of all 13 teams that participated in the 2014 competition. For this purpose, we analyze the technical as well as non-technical configurations of each student group and derive best practices, lessons learned, and criteria that are preconditions for successful participation. Due to the comprehensive orientation of the Carolo-Cup, this knowledge can be applied to comparable projects and related competitions as well.

    A Novel Method for Extrinsic Calibration of Multiple RGB-D Cameras Using Descriptor-Based Patterns

    This letter presents a novel method to estimate the relative poses between RGB-D cameras with minimally overlapping fields of view in a panoramic RGB-D camera system. This calibration problem is relevant to applications such as indoor 3D mapping and robot navigation, which can benefit from a 360° field of view built from RGB-D cameras. The proposed approach relies on descriptor-based patterns to provide well-matched 2D keypoints even when the overlap between cameras is minimal. By combining the matched 2D keypoints with their corresponding depth values, a set of matched 3D keypoints is constructed to calibrate the multiple RGB-D cameras. Experiments validated the accuracy and efficiency of the proposed calibration approach, both superior to those of existing methods (800 ms vs. 5 s; rotation error of 0.56° vs. 1.6°; translation error of 1.80 cm vs. 2.5 cm). Comment: 6 pages, 7 figures, under review by IEEE Robotics and Automation Letters & ICR
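The core estimation step such a method needs — recovering a rigid transform from matched 3D keypoints — is the classic Kabsch/Umeyama least-squares solution. The sketch below is a minimal NumPy illustration of that step, not the authors' pipeline; the function name and formulation are our own.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares R, t such that R @ src_i + t ~ dst_i (Kabsch/Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance of matches
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given N matched keypoint pairs as Nx3 arrays, the returned pose maps one camera's points into the other's frame; the reported rotation and translation errors would be measured against ground truth on exactly such an estimate.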

    No Clamp Robotic Assembly with Use of Point Cloud Data from Low-Cost Triangulation Scanner

    The paper presents clamp-less assembly as an important concept in modern assembly. Fixtures such as clamps represent a significant group of industrial equipment in manufacturing plants, and their number can be effectively reduced. The article presents the concept of using an industrial robot equipped with a triangulation scanner in the assembly process in order to minimize the number of clamps that hold the units in a particular position in space. It also shows how the system searches for objects in the point cloud using the multi-step processing algorithm proposed in this work, then picks them up, transports them, and positions them in the correct assembly locations with an industrial robot manipulator. The positioning accuracy of the parts was examined, as was the impact of the number of iterations of the model-searching algorithm on the accuracy of the determined object poses. The tests show that the presented system is suitable for the assembly of various items such as plastic packaging and for the palletizing of products. Systems of this kind form the basis for modern, fully flexible assembly systems.
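The paper's own multi-step search algorithm is not reproduced here, but the general pattern it evaluates — iteratively aligning a model to scene points, where more iterations refine the estimated pose — is well illustrated by a minimal point-to-point ICP loop. A sketch assuming NumPy and SciPy; `icp` and its parameters are ours, not the paper's:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(model, scene, iters=20):
    """Minimal point-to-point ICP: pose of `model` (Nx3) within `scene` (Mx3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(scene)                        # nearest-neighbour index
    for _ in range(iters):
        moved = model @ R.T + t
        _, idx = tree.query(moved)               # closest scene point per model point
        src_c, dst_c = moved.mean(0), scene[idx].mean(0)
        H = (moved - src_c).T @ (scene[idx] - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                      # incremental rotation
        dt = dst_c - dR @ src_c
        R, t = dR @ R, dR @ t + dt               # compose with running pose
    return R, t
```

The iteration count plays the same role as in the paper's accuracy experiments: each pass re-matches points and refines the pose, trading computation time for positioning accuracy.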

    Performance Evaluation of Various 2-D Laser Scanners for Mobile Robot Map Building and Localization

    A study was carried out to investigate the performance of various 2-D laser scanners, which influences map-building quality and localization performance for a mobile robot. Laser scanners are increasingly used in automation and robotics, and are widely employed as sensing devices for map building and localization in mobile robot navigation. Although laser scanners are commercially available, there is very little published information comparing their performance for mobile robot map building and localization. Hence, this work compares four laser scanners: the Hokuyo URG04LX-UG01, Hokuyo UTM30LX, SICK TIM551, and Pepperl Fuchs ODM30M. The results, verified against reference experimental data, indicate that the angular resolution and sensing range of a laser scanner are the key factors affecting map-building quality and position estimation for localization. From the experiments, a laser scanner with 0.25° angular resolution is sufficient to build a map of adequate quality for good localization performance. A sensing range of 30 m also yields better localization performance, especially in large environments.
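To see why angular resolution matters, consider how a scan becomes map points: each beam contributes one Cartesian point, so the gap between neighbouring points grows linearly with range and with the angular step. A small illustrative sketch in NumPy (function and parameter names are ours):

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert one 2-D laser scan to Cartesian points in the sensor frame."""
    r = np.asarray(ranges, float)
    angles = angle_min + angle_increment * np.arange(len(r))
    ok = np.isfinite(r) & (r > 0)                # drop invalid returns
    return np.c_[r[ok] * np.cos(angles[ok]), r[ok] * np.sin(angles[ok])]

# At 0.25 deg resolution the gap between neighbouring points at 3 m range is
# roughly 3.0 * radians(0.25) ~ 0.013 m, fine enough for indoor map features.
```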

    A review of sensor technology and sensor fusion methods for map-based localization of service robot

    Service robots are currently gaining traction, particularly in the hospitality, geriatric care, and healthcare industries. The navigation of service robots requires high adaptability, flexibility, and reliability. Map-based navigation is therefore well suited to service robots because of the ease of updating changes in the environment and the flexibility of determining a new optimal path. For map-based navigation to be robust, an accurate and precise localization method is necessary. The localization problem can be defined as recognizing the robot's own position in a given environment, and it is a crucial step in any navigation process. Major difficulties of localization include dynamic changes in the real world, uncertainty, and limited sensor information. This paper presents a comparative review of sensor technologies and sensor fusion methods suitable for map-based localization, focusing on service robot applications.
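Among the fusion methods a review like this typically covers, the Kalman filter is the workhorse: it weighs each sensor by its uncertainty. A deliberately scalar sketch of the predict/update cycle, as our own simplification rather than anything taken from the paper:

```python
def kf_predict(x, P, u, Q):
    """Motion step: apply odometry increment u; process noise Q grows P."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Measurement step: fuse a position fix z with measurement variance R."""
    K = P / (P + R)                     # Kalman gain: trust z more when P >> R
    return x + K * (z - x), (1 - K) * P

# Example: odometry says we moved 1.0 m; a map-based fix then reads x = 1.1 m.
x, P = kf_predict(0.0, 0.01, 1.0, 0.05)
x, P = kf_update(x, P, 1.1, 0.02)       # estimate lands between 1.0 and 1.1
```

The same predict/update structure generalizes to full 2-D/3-D poses with matrix covariances, which is the form most map-based localization systems use.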

    Fast scene analysis using vision and artificial intelligence for object prehension by an assistive robot

    Vision-assisted robotic aid is a rapidly expanding field, particularly in solutions developed for people affected by age-related loss of mobility and for people with musculoskeletal disorders. This thesis presents the solutions developed during a research master's at the Mechanical Engineering Department of École Polytechnique de Montréal. In this context, the Kinect V2 allowed rapid surface acquisition of scenes, leading the project to focus on object detection. The most robust detection methods currently available require substantial computing time, preventing fully automated object prehension by robots within a time acceptable for assisting users in their everyday activities. The objective of this study is therefore to develop a fast scene-analysis system using vision and artificial intelligence for object prehension by an assistive robot.
The developed system must answer all of the following questions faster than existing methods do: 1. How many objects are there, and where are they located? 2. Which points on the objects are effective prehension targets, and what is the preferred approach path for the robot? 3. What are the objects in the scene, as identified by a neural network trained on data from an active camera? With a dataset of 180 scenes containing one object each, the solution was developed in three stages: 1. Object detection, involving the transformation of raw scenes into data matrices and 3D segmentation of the scenes to find the objects by means of a novel "top-down probing" algorithm, followed by the elimination of undesirable points based on their gradients. 2. Supervised learning on the objects produced by the detection stage. 3. Scene analysis of the objects, including identification of grasping targets using a simple decision tree and selection of the robotic arm's approach path, followed by a neural network that combines two features, surface geometry and RGB color, to achieve 83% object-recognition performance in a known environment. This study shows that fast scene analysis using vision and artificial intelligence for object prehension by an assistive robot cooperating with a user can be performed promptly: the system takes on average 0.6 seconds to analyze an object in a scene.
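The gradient-based point elimination in stage 1 can be pictured as filtering a depth image by local slope: points on steep transitions (object borders, noise spikes) get large gradients and are dropped. A minimal sketch assuming a depth map in metres; the threshold value and function name are our own illustration, not the thesis's implementation:

```python
import numpy as np

def gradient_mask(depth, thresh=0.02):
    """Keep pixels whose local depth slope stays below `thresh` (m/pixel)."""
    gy, gx = np.gradient(depth)        # finite-difference gradients per axis
    return np.hypot(gx, gy) < thresh   # True where the surface is smooth

# Usage: keep only smooth-surface points of a 424x512 Kinect V2 depth map.
depth = np.random.rand(424, 512).astype(np.float32)
smooth = depth[gradient_mask(depth)]
```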

    Autonomous Navigation of Mobile Robot Using Modular Architecture for Unstructured Environment

    This article proposes a solution for the autonomous navigation of a mobile robot based on a distributed control architecture. In this architecture, each stage of the algorithm is divided into separate software modules that interface with each other to obtain an effective global solution. The work starts with the selection of sensors suited to the task; for the present work, a stereo vision module and a laser range finder are used. These sensors are integrated with the robot controller via Ethernet/USB, and their feedback is used to control and navigate the robot. Using this architecture, an algorithm has been developed and implemented to intelligently avoid dynamic obstacles and optimally re-plan the path to the target location. The algorithm has been successfully tested on a Summit_XL mobile robot. The thesis describing the present research work is divided into eight chapters. The subject of the topic, its contextual relevance, and related matters, including the objectives of the work, are presented in Chapter 1. Reviews of several diverse streams of literature on different issues of the topic, such as autonomous navigation using various combinations of sensor networks, SLAM, and obstacle detection and avoidance, are presented in Chapter 2. In Chapter 3, the selected methodologies are explained. Chapter 4 presents a detailed description of the sensors, mobile platform, and software tools used to implement the developed methodology. In Chapter 5, a detailed view of the experimental setup is provided. Procedures and parametric evaluations are given in Chapter 6. Successful indoor test results are described in Chapter 7. Finally, Chapter 8 presents the conclusions and the future scope of the research work.
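To make the modular idea concrete, here is a deliberately small sketch of one way such modules could share a common interface, with a toy obstacle-avoidance module driven by a laser scan. All names (`Module`, `Twist`, `ObstacleAvoider`) and thresholds are hypothetical illustrations, not the article's actual software:

```python
from dataclasses import dataclass
from typing import Protocol, Sequence

@dataclass
class Twist:
    linear: float   # forward speed, m/s
    angular: float  # turn rate, rad/s

class Module(Protocol):
    """Common interface: every module maps the latest scan to a command."""
    def step(self, scan: Sequence[float]) -> Twist: ...

class ObstacleAvoider:
    """Toy module: stop forward motion and turn away from the nearest return."""
    def step(self, scan: Sequence[float]) -> Twist:
        nearest = min(range(len(scan)), key=scan.__getitem__)
        if scan[nearest] < 0.5:                  # obstacle within 0.5 m
            # Beams assumed ordered right-to-left: turn away from the obstacle.
            return Twist(0.0, 0.5 if nearest < len(scan) // 2 else -0.5)
        return Twist(0.3, 0.0)                   # path clear: cruise forward
```

Because every module exposes the same `step` interface, a planner, localizer, and avoider can be swapped or chained without changing the rest of the system, which is the point of the distributed architecture the article describes.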

    Are laser scanners replaceable by Kinect sensors in robotic applications?
