
    Evaluation of Modern Laser Based Indoor SLAM Algorithms

    One of the key issues that prevents the creation of a truly autonomous mobile robot is the simultaneous localization and mapping (SLAM) problem. A solution is expected to estimate the robot's pose and to build a map of an unknown environment simultaneously. Although different algorithms attempt to solve the problem, no universal one has been proposed yet [1]. Since a laser rangefinder is a widespread sensor on mobile platforms, it was decided to evaluate current 2D laser-scan-based SLAM algorithms in real-world indoor environments. The following algorithms were considered: Google Cartographer [2], GMapping [3], and tinySLAM [4]. According to the evaluation, Cartographer and GMapping are more accurate than tinySLAM, and Cartographer is the most robust of the three.
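    Accuracy comparisons of this kind are typically based on trajectory error metrics. Below is a minimal sketch of absolute trajectory error (ATE) over time-aligned 2D poses, assuming ground truth is available; the function name and data are illustrative, not taken from the paper:

    ```python
    import math

    def absolute_trajectory_error(estimated, ground_truth):
        """RMSE of translational error between time-aligned 2D poses (x, y)."""
        assert len(estimated) == len(ground_truth)
        squared = [
            (ex - gx) ** 2 + (ey - gy) ** 2
            for (ex, ey), (gx, gy) in zip(estimated, ground_truth)
        ]
        return math.sqrt(sum(squared) / len(squared))

    # Example: the estimate drifts 0.1 m in x at every pose.
    gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
    est = [(0.1, 0.0), (1.1, 0.0), (2.1, 0.0)]
    print(absolute_trajectory_error(est, gt))  # → 0.1 (up to float rounding)
    ```

    A lower ATE means the estimated trajectory stays closer to ground truth; robustness is usually judged separately, e.g. by whether the algorithm ever diverges.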

    A sensor fusion layer to cope with reduced visibility in SLAM

    Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. Despite the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in such environments because they usually rely on optical sensors providing dense range data (laser range finders, stereo vision, LIDARs, RGB-D cameras, etc.) whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced visibility by proposing a sensor fusion layer which takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to be resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely sonar arrays which, while usually available on robotic platforms, raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy-logic-based method, are presented and discussed, corresponding to different stages of the research. Experimental validation of both methods on two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for coping with reduced visibility in the SLAM process, significantly decreasing its impact on the resulting maps and localization.
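    The heuristic fusion idea can be illustrated per beam: prefer the LRF reading, but fall back to sonar when the laser return looks invalid (e.g. saturated at max range in smoke). A minimal sketch, in which the thresholds, ranges, and function name are assumptions rather than the article's actual rules:

    ```python
    LRF_MAX_RANGE = 5.6    # metres; assumed laser limit, not from the article
    SONAR_MAX_RANGE = 3.0  # metres; assumed sonar limit

    def fuse_range(lrf_m, sonar_m):
        """Pick the more trustworthy of two co-located range readings.

        In smoke the laser tends to return max-range (no echo) or spurious
        short hits, while sonar still measures the true obstacle distance.
        """
        lrf_valid = 0.05 < lrf_m < LRF_MAX_RANGE * 0.98
        if lrf_valid:
            return lrf_m       # clear air: laser is denser and more accurate
        if 0.0 < sonar_m < SONAR_MAX_RANGE:
            return sonar_m     # reduced visibility: trust the sonar instead
        return float('inf')    # no reliable reading in this direction

    print(fuse_range(2.3, 2.4))  # → 2.3 (laser valid, laser wins)
    print(fuse_range(5.6, 2.4))  # → 2.4 (laser saturated, sonar wins)
    ```

    The fuzzy-logic variant replaces the hard validity threshold with membership functions, blending the two readings instead of switching between them.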


    Design and implementation of a domestic disinfection robot based on 2D lidar

    In the battle against Covid-19, demand for disinfection robots in China and other countries has increased rapidly. Manual disinfection is time-consuming, laborious, and carries safety hazards. For large public areas, the deployment of human resources and the effectiveness of disinfection face significant challenges, so using robots for disinfection becomes an ideal choice. At present, most disinfection robots on the market use ultraviolet light, disinfectant, or both. They mostly serve hospitals, airports, hotels, shopping malls, office buildings, and other places with high daily foot traffic. These robots often come with built-in automatic navigation and intelligent recognition, ensuring day-to-day operation, but they are usually expensive and need regular maintenance. Sweeping robots and window-cleaning robots have been put into massive use, but domestic disinfection robots have not gained much attention, even though the health and safety of a family are also critical in epidemic prevention. This thesis proposes and implements a low-cost, 2D-lidar-based domestic disinfection robot that provides dry-fog disinfection, ultraviolet disinfection, and air cleaning. The thesis is mainly engaged in the following work. The design and implementation of the control board of the robot chassis are elaborated. The control board uses an STM32F103ZET6 as the MCU. Infrared sensors prevent the robot from falling and allow it to follow walls. An ultrasonic sensor installed at the front of the chassis detects and avoids obstacles in the robot's path. Photoelectric switches record potential collisions during the early phase of mapping. For air purification, the robot adopts a centrifugal fan and a HEPA filter, while a ceramic atomizer breaks up the disinfectant to produce the dry fog.
    A UV germicidal lamp installed at the bottom of the chassis disinfects the ground, and an air pollution sensor estimates air quality. Motors drive the chassis. The lidar transmits its data to the navigation board directly through wires and the edge-board contact on the control board. The control board also manages the atmosphere LEDs, horn, push-buttons, battery, LCD, and temperature-humidity sensor; it exchanges data with the navigation board, executes its commands, and manages all kinds of peripheral devices, making it the administrative unit of the disinfection robot. Moreover, the robot is designed to reduce cost while ensuring quality. The control board's embedded software is implemented and analyzed in the thesis. The communication protocol linking the control board and the navigation board is implemented in software: standard commands, specific commands, error handling, and the data packet format are detailed and processed, and the software effectively drives and manages the peripheral devices. SLAMWARE CORE is used as the navigation board to complete the system design. System tests covering disinfection, mapping, navigation, and anti-falling were performed to polish and adjust the structure and functionality of the robot. A Raspberry Pi is also used with the control board to explore 2D Simultaneous Localization and Mapping (SLAM) algorithms, such as Hector, Karto, and Cartographer, in the Robot Operating System (ROS) for the robot's further development. The thesis is written from the perspective of engineering practice and proposes a feasible design for a domestic disinfection robot; hardware, embedded software, and system tests are all covered.
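    A board-to-board protocol of the kind described usually frames each command as header, command byte, payload length, payload, and checksum. The sketch below illustrates such framing; the byte layout, constants, and function names are assumptions for illustration, not the thesis's or SLAMWARE's actual protocol:

    ```python
    HEADER = 0xAA  # assumed start-of-frame marker

    def encode_packet(cmd: int, payload: bytes) -> bytes:
        """Frame layout: [header][cmd][len][payload...][checksum]."""
        body = bytes([cmd, len(payload)]) + payload
        checksum = sum(body) & 0xFF          # 8-bit additive checksum
        return bytes([HEADER]) + body + bytes([checksum])

    def decode_packet(frame: bytes):
        """Return (cmd, payload), or raise ValueError on a corrupt frame."""
        if len(frame) < 4 or frame[0] != HEADER:
            raise ValueError("bad header")
        cmd, length = frame[1], frame[2]
        payload = frame[3:3 + length]
        if len(payload) != length or (sum(frame[1:-1]) & 0xFF) != frame[-1]:
            raise ValueError("bad length or checksum")
        return cmd, payload

    pkt = encode_packet(0x01, b'\x10\x20')
    print(decode_packet(pkt))
    ```

    The checksum lets the receiving board reject frames corrupted on the wire, which is the role the abstract's "error handling" plays in the real protocol.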

    'Online', long-term and large-scale simultaneous mapping, localization and planning for a mobile robot

    To navigate unknown, unstructured places, a robot must be able to map the environment in order to localize itself within it. This problem is known as Simultaneous Localization and Mapping (SLAM). Once the map of the environment has been created, tasks that require moving from one known place to another can be planned. The computational load of SLAM depends on the size of the map. A robot has limited onboard computing power for processing information 'online', i.e., on board with a data processing time shorter than the data acquisition time or the maximum allowed map update time. Navigating while performing SLAM is therefore limited by the size of the environment to be mapped. To address this, the objective is to develop a SPLAM (Simultaneous Planning, Localization and Mapping) algorithm that enables navigation regardless of the size of the environment. To manage the computational load of this algorithm efficiently, the robot's memory is divided into a working memory and a long-term memory. When the 'online' processing constraint is reached, the locations seen least often and not useful for navigation are transferred from working memory to long-term memory. Locations transferred to long-term memory are no longer used for navigation; however, they can be retrieved from long-term memory back into working memory when the robot approaches a neighbouring location still in working memory. The robot can thus incrementally recall a previously forgotten part of the environment in order to localize itself there for trajectory following.
    The algorithm, named RTAB-Map, was tested on the AZIMUT-3 robot in a first mapping experiment over five independent sessions, to evaluate the system's ability to merge several maps 'online'. A second experiment, with the same robot used over eleven sessions totalling 8 hours of travel, evaluated the robot's ability to navigate autonomously while performing SLAM and continuously planning trajectories over a long period while respecting the 'online' processing constraint. Finally, RTAB-Map is compared with other SLAM systems on four popular datasets covering autonomous driving (KITTI), handheld RGB-D scanning (TUM RGB-D), drone flight (EuRoC) and indoor navigation with a PR2 robot (MIT Stata Center). The results show that RTAB-Map can be used over long periods of autonomous navigation while respecting the 'online' processing constraint, with map quality comparable to state-of-the-art visual and laser-rangefinder SLAM approaches. The result is open-source software deployed in a multitude of applications, from low-cost indoor mobile robots to autonomous cars, drones, and 3D modelling of house interiors.
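    The working-memory/long-term-memory scheme described above can be sketched as a simple transfer-and-retrieval policy. This is a deliberately simplified illustration; RTAB-Map's real weighting and retrieval logic is richer than the dictionaries and functions assumed here:

    ```python
    def transfer_to_ltm(wm, ltm, time_budget_exceeded, protected):
        """Move the lowest-weight location out of WM when over budget.

        `wm` and `ltm` map location id -> weight (how often the location was
        observed); `protected` holds ids needed for the current path, which
        must stay in working memory.
        """
        if not time_budget_exceeded:
            return
        candidates = [lid for lid in wm if lid not in protected]
        if candidates:
            victim = min(candidates, key=lambda lid: wm[lid])
            ltm[victim] = wm.pop(victim)

    def retrieve_neighbours(wm, ltm, current, neighbours_of):
        """Bring back LTM locations adjacent to the robot's current location."""
        for lid in neighbours_of(current):
            if lid in ltm:
                wm[lid] = ltm.pop(lid)

    # Example: over budget, location 2 (seen once) is moved out, then recalled
    # when the robot returns near its neighbour, location 1.
    wm, ltm = {1: 5, 2: 1, 3: 3}, {}
    transfer_to_ltm(wm, ltm, time_budget_exceeded=True, protected={3})
    retrieve_neighbours(wm, ltm, current=1, neighbours_of=lambda lid: [2])
    ```

    The key property this preserves is bounded WM size: the SLAM update only runs over WM, so its cost stays constant however large the full map grows.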

    Deep Learning-Based Robotic Perception for Adaptive Facility Disinfection

    Hospitals, schools, airports, and other environments built for mass gatherings can become hot spots for microbial pathogen colonization, transmission, and exposure, greatly accelerating the spread of infectious diseases across communities, cities, nations, and the world. Outbreaks of infectious diseases impose huge burdens on our society. Mitigating the spread of infectious pathogens within mass-gathering facilities requires routine cleaning and disinfection, which are primarily performed by cleaning staff under current practice. However, manual disinfection is limited in terms of both effectiveness and efficiency, as it is labor-intensive, time-consuming, and health-undermining. While existing studies have developed a variety of robotic systems for disinfecting contaminated surfaces, those systems are not adequate for intelligent, precise, and environmentally adaptive disinfection. They are also difficult to deploy in mass-gathering infrastructure facilities, given the high volume of occupants. Therefore, there is a critical need to develop an adaptive robot system capable of complete and efficient indoor disinfection. The overarching goal of this research is to develop an artificial intelligence (AI)-enabled robotic system that adapts to ambient environments and social contexts for precise and efficient disinfection. This would maintain environmental hygiene and health, reduce unnecessary labor costs for cleaning, and mitigate opportunity costs incurred from infections. To these ends, this dissertation first develops a multi-classifier decision fusion method, which integrates scene graph and visual information, in order to recognize patterns in human activity in infrastructure facilities. Next, a deep-learning-based method is proposed for detecting and classifying indoor objects, and a new mechanism is developed to map detected objects in 3D maps. 
    A novel framework is then developed to detect and segment object affordances and project them into a 3D semantic map for precise disinfection. Subsequently, a novel deep-learning network, which integrates multi-scale and multi-level features, and an encoder network are developed to recognize the materials of surfaces requiring disinfection. Finally, a novel computational method is developed to link the recognized object surface information to robot disinfection actions with optimal disinfection parameters.
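    Multi-classifier decision fusion of the kind described above is often implemented as a weighted combination of per-classifier class probabilities. A minimal sketch, where the class labels, weights, and probabilities are invented for the example rather than taken from the dissertation:

    ```python
    def fuse_decisions(prob_dists, weights):
        """Weighted late fusion: combine per-classifier class probabilities."""
        classes = prob_dists[0].keys()
        fused = {
            c: sum(w * p[c] for p, w in zip(prob_dists, weights))
            for c in classes
        }
        total = sum(fused.values())
        return {c: v / total for c, v in fused.items()}  # renormalise

    # The scene-graph and visual classifiers disagree; fusion resolves it.
    scene_graph = {"cleaning": 0.7, "dining": 0.3}
    visual      = {"cleaning": 0.4, "dining": 0.6}
    fused = fuse_decisions([scene_graph, visual], weights=[0.6, 0.4])
    print(max(fused, key=fused.get))  # → cleaning
    ```

    Weighting lets the system trust whichever modality is more reliable for a given activity class, which is the motivation for fusing scene-graph and visual information rather than using either alone.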

    Real-time monocular SLAM

    One of the fundamental tasks that a mobile robot must be able to execute is autonomous navigation of its working environment, which requires a model of the environment (a map) and a method to estimate the robot's location within it. However, there are numerous situations in which no representation of the environment is available a priori, for example in search and rescue, planetary exploration, ocean exploration and underground mining. In such circumstances both problems, localization and mapping, must be solved simultaneously: estimating the location requires a map, and in turn, building a map requires establishing a location relative to a model. The simultaneous solution of these two problems is known in robotics as SLAM (Simultaneous Localization and Mapping). Since its formulation more than thirty years ago, the robotics research community has invested great effort in solving SLAM. Nowadays it is considered a fundamental component of robotic systems, allowing them to perform more complex tasks and therefore granting them greater levels of autonomy. Although two-dimensional SLAM for small-scale indoor environments is considered a solved problem with consistent results, once it is extended to three-dimensional estimation and reconstruction or to large-scale environments, new research challenges emerge immediately. For the particular case of three-dimensional SLAM with visual sensors alone, some of the new problems that must be solved are: greater complexity and computational cost due to the large volume of data to be processed, errors due to the low resolution of the sensors, lighting changes in the environment, texture-poor surfaces, and blurry images caused by rapid camera movements.
    This thesis carries out a systematic and rigorous study of SLAM, from its formulation and solution methods to the evaluation of some of the most recent open-source SLAM algorithms. In particular, the problem of real-time monocular SLAM is addressed, and experiments are conducted in indoor environments with a robotic system specially designed for this purpose. The main contributions of this work are:
    - A systematic and exhaustive study of SLAM, from its formulation to the most representative solution methods (Chapter 2).
    - The formulation of a method for evaluating SLAM algorithms based on two metrics (MeC and MoC) that consider the quality of the maps produced (Chapter 3).
    - The design and construction of a robotic system fully compatible with ROS (Robot Operating System) for the experimental validation conducted in this thesis and for research and development of applications (Chapter 3).
    - A rigorous study of the visual SLAM algorithms that make up the current state of the art and, in particular, the methods related to SfM (Structure from Motion) and real-time monocular SLAM (Chapters 4 and 5).

    Urban environment perception and navigation using robotic vision: design and implementation applied to an autonomous vehicle

    Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. The development of autonomous vehicles capable of driving on urban roads can provide important benefits: reducing accidents, increasing comfort of life, and saving costs. Intelligent vehicles often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Camera sensors have been receiving particular attention because they are cheap, easy to deploy and provide rich data. Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex, objects such as trees, bicycles and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusions. Thus, the perception process by nature needs to be able to deal with uncertainty in the knowledge of the world around the car.
    While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on a decision-making process for autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring previous knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel machine-learning-based method is proposed to extract the semantic context from a pair of stereo images, which is merged into an evidential occupancy grid that models the uncertainties of an unknown urban environment using Dempster-Shafer theory. For decision-making in path planning, the virtual-tentacle approach is applied to generate possible paths starting from the ego-referenced car, and on this basis two new strategies are proposed: first, a strategy to select the correct path to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a closed-loop control based on visual odometry and the virtual tentacle, modeled for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real conditions using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors.
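    The evidential grid mentioned above rests on Dempster's rule of combination, here over the frame {Free, Occupied}. A minimal per-cell sketch; the mass values and the two-source scenario are illustrative, not the thesis's actual numbers:

    ```python
    def combine_masses(m1, m2):
        """Dempster's rule for masses over 'F' (free), 'O' (occupied) and
        the ignorance set 'FO' (unknown).

        Conflict (one source says free, the other occupied) is measured by
        k and renormalised away.
        """
        k = m1['F'] * m2['O'] + m1['O'] * m2['F']   # conflicting mass
        if k >= 1.0:
            raise ValueError("total conflict: sources are incompatible")
        norm = 1.0 - k
        return {
            'F':  (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / norm,
            'O':  (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / norm,
            'FO': (m1['FO'] * m2['FO']) / norm,
        }

    # A confident stereo-range hit fused with a weaker semantic cue
    # for the same grid cell.
    stereo   = {'F': 0.1, 'O': 0.8, 'FO': 0.1}
    semantic = {'F': 0.3, 'O': 0.4, 'FO': 0.3}
    cell = combine_masses(stereo, semantic)
    print(cell['O'] > cell['F'])  # → True
    ```

    Unlike a Bayesian occupancy grid, the explicit 'FO' mass lets the grid distinguish "no evidence yet" from "conflicting evidence", which is what makes the evidential formulation attractive for partially observed urban scenes.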