
    Radar-on-Lidar: metric radar localization on prior lidar maps

    Radar and lidar are two different range sensors, each with pros and cons for various perception tasks on mobile robots and autonomous vehicles. In this paper, a Monte Carlo system is used to localize a robot carrying a rotating radar sensor on 2D lidar maps. We first train a conditional generative adversarial network to transfer raw radar data to lidar-like data, obtaining reliable radar points from the generator. An efficient radar odometry is then included in the Monte Carlo system. Combining the initial guess from odometry, a measurement model is proposed to match the radar data against the prior lidar maps for final 2D positioning. We demonstrate the effectiveness of the proposed localization framework on a public multi-session dataset. The experimental results show that our system achieves high accuracy for long-term localization in outdoor scenes.
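    Once the generator has cleaned the radar scan, the pipeline reduces to a standard particle filter. Below is a minimal sketch of that loop, assuming (not taken from the paper) that the prior lidar map has been preprocessed into a likelihood field dist_field of distances to the nearest occupied cell; all names and noise values are illustrative.

        # Minimal Monte Carlo localization sketch: predict with radar odometry,
        # then weight particles by how well radar points land on the lidar map.
        import numpy as np

        def predict(particles, odom, noise=(0.05, 0.05, 0.02)):
            """Propagate particles (N x 3 array of x, y, theta) by the odometry guess."""
            n = len(particles)
            particles[:, 0] += odom[0] + np.random.normal(0, noise[0], n)
            particles[:, 1] += odom[1] + np.random.normal(0, noise[1], n)
            particles[:, 2] += odom[2] + np.random.normal(0, noise[2], n)
            return particles

        def weight(particles, scan_xy, dist_field, res, origin, sigma=0.2):
            """Score each pose by the distance of transformed radar points to occupancy."""
            w = np.ones(len(particles))
            for i, (x, y, th) in enumerate(particles):
                c, s = np.cos(th), np.sin(th)
                pts = scan_xy @ np.array([[c, s], [-s, c]]) + (x, y)  # rotate, translate
                ij = ((pts - origin) / res).astype(int)
                valid = (ij[:, 0] >= 0) & (ij[:, 0] < dist_field.shape[1]) & \
                        (ij[:, 1] >= 0) & (ij[:, 1] < dist_field.shape[0])
                d = np.full(len(pts), dist_field.max())      # off-map points score worst
                d[valid] = dist_field[ij[valid, 1], ij[valid, 0]]
                w[i] = np.exp(-0.5 * np.mean(d ** 2) / sigma ** 2)
            return w / w.sum()

        def resample(particles, w):
            """Low-variance (systematic) resampling."""
            n = len(particles)
            u = (np.arange(n) + np.random.rand()) / n
            idx = np.minimum(np.searchsorted(np.cumsum(w), u), n - 1)
            return particles[idx].copy()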

    Fusing sonars and LRF data to perform SLAM in reduced visibility scenarios

    Simultaneous Localization and Mapping (SLAM) approaches have evolved considerably in recent years. However, many situations remain difficult to handle, such as smoky, dusty, or foggy environments, where the range sensors commonly used for SLAM are highly disturbed by noise induced in the measurement process by particles of smoke, dust, or steam. This work presents a sensor fusion method for range sensing in SLAM under reduced visibility conditions. The proposed method exploits the complementary characteristics of a Laser Range Finder (LRF) and an array of sonars in order to map smoky environments. The method was validated through experiments in a smoky indoor scenario, and the results showed that it adequately copes with the induced disturbances, decreasing the impact of smoke particles on the mapping task.
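    The complementarity exploited here is that laser scattering on airborne particles typically shows up as spuriously short LRF returns, while sonar pulses pass through the smoke. A minimal sketch of a fusion rule built on that observation (the paper's exact rule is not given in the abstract; names and thresholds are illustrative):

        # Per angular sector, fall back to the sonar range when the laser return
        # is implausibly short, the typical signature of hits on smoke or dust.
        import numpy as np

        def fuse_sector(lrf_ranges, sonar_range, short_ratio=0.7):
            """lrf_ranges: laser beams falling inside one sonar cone (meters).
            sonar_range: the sonar measurement for that cone (meters)."""
            lrf = np.median(lrf_ranges)          # robust summary of the laser beams
            if lrf < short_ratio * sonar_range:  # laser much shorter than sonar:
                return sonar_range               # likely smoke; trust the sonar
            return lrf                           # otherwise prefer the denser LRF

        # Example: laser sees ~0.8 m through smoke, sonar still reports 3.1 m.
        print(fuse_sector(np.array([0.78, 0.81, 0.80]), 3.1))  # -> 3.1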

    A sensor fusion layer to cope with reduced visibility in SLAM

    Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. In spite of the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in those environments because they usually rely on optical sensors providing dense range data, e.g. laser range finders, stereo vision, LIDARs, RGB-D cameras, etc., whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced visibility by proposing a sensor fusion layer that takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is then used with a state-of-the-art SLAM technique to remain resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely the sonar arrays usually available on robotic platforms, which raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy logic-based method, are presented and discussed, corresponding to different stages of the research. The experimental validation of both methods with two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for adequately coping with reduced visibility in the SLAM process, significantly decreasing its impact on the mapping and localization results.
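    As a rough illustration of how the fuzzy logic-based method could differ from the hard-threshold heuristic, the sketch below replaces the threshold with a piecewise-linear "laser credibility" membership and blends the two readings accordingly; the membership shape and breakpoints are assumptions, not taken from the article.

        # Fuzzy-style blend: laser credibility drops as the laser reading falls
        # below the sonar reading; the fused range is a credibility-weighted mix.
        def laser_credibility(lrf, sonar, lo=0.5, hi=0.9):
            """Piecewise-linear membership: 0 when lrf/sonar <= lo, 1 when >= hi."""
            if sonar <= 0:
                return 1.0
            r = lrf / sonar
            if r <= lo:
                return 0.0
            if r >= hi:
                return 1.0
            return (r - lo) / (hi - lo)

        def fuse(lrf, sonar):
            mu = laser_credibility(lrf, sonar)
            return mu * lrf + (1.0 - mu) * sonar

        print(fuse(0.8, 3.0))  # smoke-like case: credibility 0, output 3.0
        print(fuse(2.9, 3.0))  # clear air: credibility 1, output 2.9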

    Simultaneous Localization and Mapping Technologies

    The SLAM (Simultaneous Localization And Mapping) problem consists of mapping an unknown environment by means of a device moving within it, while simultaneously localizing that device. This thesis analyzes the SLAM problem and the differences that distinguish it from the mapping and localization problems treated separately. It then reviews the main algorithms used today for its solution, namely extended Kalman filters and particle filters. The different implementation technologies are then analyzed, including SONAR, LASER, vision, and RADAR systems; the latter, at the state of the art, employ millimeter-wave (mmW) and ultra-wideband (UWB) signals, but also well-established radio technologies such as Wi-Fi. Finally, simulations of vision-based and LASER-based technologies are carried out with the aid of two open-source MATLAB packages. The package designed for LASER systems was then modified to simulate a SLAM technology based on Wi-Fi signals. The use of low-cost, widely deployed technologies such as Wi-Fi opens up the possibility, in the near future, of performing low-cost indoor localization with a simple smartphone, exploiting the existing infrastructure. Looking further ahead, the advent of millimeter-wave technology (5G) will enable even higher performance.
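    For reference, the extended Kalman filter analyzed in the thesis follows a predict/correct cycle. A compact sketch with an illustrative unicycle motion model and a range-bearing measurement to a known landmark (the models and noise values are textbook choices, not from the thesis):

        # EKF predict/update skeleton for a planar pose (x, y, theta).
        import numpy as np

        def ekf_predict(mu, P, v, w, dt, Q):
            """Unicycle motion model with linear/angular velocity inputs."""
            x, y, th = mu
            mu = np.array([x + v*dt*np.cos(th), y + v*dt*np.sin(th), th + w*dt])
            F = np.array([[1, 0, -v*dt*np.sin(th)],    # Jacobian of the motion model
                          [0, 1,  v*dt*np.cos(th)],
                          [0, 0,  1]])
            return mu, F @ P @ F.T + Q

        def ekf_update(mu, P, z, lm, R):
            """Correct with a range-bearing measurement z to landmark lm."""
            dx, dy = lm[0] - mu[0], lm[1] - mu[1]
            q = dx*dx + dy*dy
            zhat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
            H = np.array([[-dx/np.sqrt(q), -dy/np.sqrt(q), 0],
                          [dy/q, -dx/q, -1]])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
            innov = z - zhat
            innov[1] = (innov[1] + np.pi) % (2*np.pi) - np.pi  # wrap bearing
            return mu + K @ innov, (np.eye(3) - K @ H) @ P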

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, the review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on online targetless calibration and systematic multi-modal sensor calibration.
    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0
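    At the core of most target-based methods surveyed in such reviews is the recovery of a rigid transform from matched target points seen by both sensors. A minimal sketch using the standard Kabsch/Procrustes solution (illustrative of the general technique, not a specific method from the review):

        # Rigid extrinsic calibration from matched 3D target points.
        import numpy as np

        def extrinsic_from_correspondences(pts_a, pts_b):
            """Return R, t such that pts_b ~= R @ pts_a + t (both N x 3 arrays)."""
            ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
            H = (pts_a - ca).T @ (pts_b - cb)            # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
            R = Vt.T @ np.diag([1, 1, d]) @ U.T
            t = cb - R @ ca
            return R, t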

    SLAM research for port AGV based on 2D LIDAR

    With the growth of international trade, the transshipment of goods at international container ports has become very busy. The AGV (Automated Guided Vehicle) is used as a new generation of automated horizontal container transport equipment. An AGV is an automated unmanned vehicle that can work 24 hours a day, increasing productivity and reducing labor costs compared to container trucks. The ability to obtain information about the surrounding environment is a prerequisite for an AGV to complete tasks automatically in the port area. At present, AGV positioning and navigation based on RFID tags suffers from excessive cost. This dissertation investigates applying light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) technology to port AGVs. In this master's thesis, a mobile test platform based on a laser range finder is developed to scan 360-degree environmental information (distance and angle) centered on the LIDAR and upload the information to a real-time database to generate maps of the surrounding environment; an obstacle avoidance strategy was developed based on the acquired information. The effectiveness of the platform was verified by experiments in multiple scenarios. Based on the first platform, a second experimental platform with an encoder and an IMU sensor was developed. On this platform, SLAM is enabled by the GMapping algorithm together with the encoder and IMU measurements. Based on the resulting SLAM map of the environment, the path planning and obstacle avoidance functions of the platform were realized.
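    GMapping itself is a Rao-Blackwellized particle filter; the sketch below only illustrates the underlying occupancy-grid update for a single LIDAR beam at a known pose, with illustrative log-odds increments and the assumption that both beam endpoints fall inside the grid:

        # Log-odds occupancy-grid update for one LIDAR beam.
        import numpy as np

        L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative values)

        def raytrace(i0, j0, i1, j1):
            """Integer cells on the segment from (i0, j0) to (i1, j1) (Bresenham)."""
            di, dj = abs(i1 - i0), abs(j1 - j0)
            si, sj = (1 if i1 > i0 else -1), (1 if j1 > j0 else -1)
            err, cells = di - dj, []
            while (i0, j0) != (i1, j1):
                cells.append((i0, j0))
                e2 = 2 * err
                if e2 > -dj:
                    err -= dj
                    i0 += si
                if e2 < di:
                    err += di
                    j0 += sj
            return cells

        def integrate_beam(grid, pose, rng, bearing, res=0.05):
            """Mark cells crossed by the beam as free, the endpoint as occupied."""
            x, y, th = pose
            ex, ey = x + rng * np.cos(th + bearing), y + rng * np.sin(th + bearing)
            i0, j0, i1, j1 = int(x / res), int(y / res), int(ex / res), int(ey / res)
            for i, j in raytrace(i0, j0, i1, j1):
                grid[j, i] += L_FREE
            grid[j1, i1] += L_OCC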

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital for keeping track of the current state of the art and of the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the autonomous navigation field evolves quickly, so survey papers must be written regularly to keep the research community aware of the current status of the field. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give appropriate treatment to the role of deep learning in autonomous navigation, which this paper also covers. Future work and research gaps are discussed as well.

    2D mapping using omni-directional mobile robot equipped with LiDAR

    A map of a room in a robot's environment is needed because it facilitates localization, automatic navigation, and object searching. In addition, when a room is difficult to reach, maps can provide information helpful to humans. In this study, an omni-directional mobile robot equipped with a LiDAR sensor was developed for 2D mapping of a room. The YDLiDAR X4 sensor is used as an indoor scanner. A Raspberry Pi 3 B single-board computer (SBC) accesses the LiDAR data and sends it wirelessly to a computer for processing into a map. The computer and SBC are integrated via the Robot Operating System (ROS). The robot can explore the room under manual control or automatic navigation. The Hector SLAM algorithm determines the position of the robot based on scan matching of the LiDAR data. The LiDAR data are also used to detect the obstacles encountered by the robot, which are represented in an occupancy grid map. The experimental results show that the robot is able to follow a wall using PID control and can move automatically to construct maps of the actual room with an error rate of 4.59%.
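    A minimal sketch of the kind of PID wall-following controller reported in the experiments, assuming it regulates the side-wall distance extracted from the LiDAR scan; the gains and setpoint are illustrative, not from the study:

        # Discrete PID loop commanding the yaw rate of the omni-directional base.
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.i, self.prev = 0.0, 0.0

            def step(self, error):
                self.i += error * self.dt                  # integral term
                d = (error - self.prev) / self.dt          # derivative term
                self.prev = error
                return self.kp * error + self.ki * self.i + self.kd * d

        pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.1)
        setpoint = 0.5                       # desired wall distance in meters
        for measured in (0.80, 0.68, 0.57, 0.51):
            yaw_rate = pid.step(setpoint - measured)
            print(f"distance {measured:.2f} m -> yaw rate {yaw_rate:+.2f} rad/s")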