234 research outputs found

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques, since it can support more reliable and robust localization, planning and control to meet key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM for autonomous driving. Quantitative quality analysis methods for evaluating the characteristics and performance of SLAM systems, and for monitoring the risk in SLAM estimation, are also reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with four to five centimetre accuracy can be achieved based on this pre-generated map and online Lidar scan matching tightly fused with an inertial system.
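
    The abstract above describes online Lidar scan matching against a pre-built point cloud map, initialised by GNSS/INS. The following minimal Python sketch illustrates one common form of that step, a point-to-point ICP refinement of a pose prior against a map cloud; the 2D simplification, brute-force data association and function names are assumptions for illustration, not the procedure used in the paper.

        import numpy as np

        def icp_refine_2d(scan, map_pts, init_pose, iters=20):
            """Refine a planar pose prior (x, y, yaw), e.g. from GNSS/INS, by aligning
            an online Lidar scan to a pre-generated map cloud.
            scan: (N, 2) points in the vehicle frame; map_pts: (M, 2) map points."""
            x, y, yaw = init_pose
            for _ in range(iters):
                c, s = np.cos(yaw), np.sin(yaw)
                moved = scan @ np.array([[c, s], [-s, c]]) + np.array([x, y])
                # nearest-neighbour association (brute force, for clarity only)
                d = np.linalg.norm(moved[:, None, :] - map_pts[None, :, :], axis=2)
                nn = map_pts[d.argmin(axis=1)]
                # closed-form rigid alignment of the matched point sets (no scale)
                mu_a, mu_b = moved.mean(axis=0), nn.mean(axis=0)
                U, _, Vt = np.linalg.svd((moved - mu_a).T @ (nn - mu_b))
                dR = Vt.T @ U.T
                if np.linalg.det(dR) < 0:          # guard against a reflection
                    Vt[-1] *= -1
                    dR = Vt.T @ U.T
                dt = mu_b - dR @ mu_a
                # compose the incremental correction with the current pose
                yaw += np.arctan2(dR[1, 0], dR[0, 0])
                x, y = dR @ np.array([x, y]) + dt
            return np.array([x, y, yaw])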

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise model of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degenerate to accuracy levels lower than their design specification. This observation necessitates the derivation of methods to integrate multi-sensor data considering sensor conflict, performance degradation and potential failure during operation. This dissertation contributes to the data fusion literature the derivation of a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy over the sensors within the multi-agent 3D mapping systems, allowing them to survive and counter failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed/non-functional sensors and avoiding catastrophic fusion, is able to minimize uncertainty during autonomous operation by adaptively deciding to fuse or choose believable sensors. We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, along with the opportunistic sensing intelligence, provides significant improvements towards accurate, autonomous, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
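
    The abstract refers to adaptively deciding whether to fuse all sensors or to select only the believable ones. The Python sketch below is a highly simplified illustration of that idea (a robust agreement gate followed by inverse-variance fusion); it is an assumed toy stand-in, not the information-complexity framework derived in the dissertation.

        import numpy as np

        def fuse_believable(readings, variances, gate=3.0):
            """Keep a sensor only if it agrees with the median of its peers to within
            `gate` standard deviations, then combine the survivors by inverse-variance
            weighting. Returns the fused value, its variance and the selection mask."""
            readings = np.asarray(readings, float)
            variances = np.asarray(variances, float)
            ref = np.median(readings)                       # robust consensus value
            keep = np.abs(readings - ref) <= gate * np.sqrt(variances)
            if not keep.any():                              # all sensors in conflict
                keep[np.argmin(np.abs(readings - ref))] = True
            w = 1.0 / variances[keep]
            fused = np.sum(w * readings[keep]) / np.sum(w)
            return fused, 1.0 / np.sum(w), keep

        # example: the third sensor has intermittently degenerated
        print(fuse_believable([10.2, 9.9, 14.7], [0.1, 0.2, 0.1]))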

    Robots in Agriculture: State of Art and Practical Experiences

    The presence of robots in agriculture has grown significantly in recent years, overcoming some of the challenges and complications of this field. This chapter aims to provide a complete and recent state of the art on the application of robots in agriculture. The work addresses this topic from two perspectives. On the one hand, it covers the disciplines that lead the automation of agriculture, such as precision agriculture and greenhouse farming, and collects the proposals for automating tasks like planting and harvesting, environmental monitoring, and crop inspection and treatment. On the other hand, it compiles and analyses the robots that are proposed to accomplish these tasks: e.g. manipulators, ground vehicles and aerial robots. Additionally, the chapter reports in more detail some practical experiences with the application of robot teams to crop inspection and treatment in outdoor agriculture, as well as to environmental monitoring in greenhouse farming.

    Localization and Mapping for Autonomous Driving: Fault Detection and Reliability Analysis

    Autonomous driving has advanced rapidly during the past decades and has expanded its application to multiple fields, both indoor and outdoor. One of the significant issues associated with a highly automated vehicle (HAV) is how to increase the safety level. A key requirement for ensuring the safety of automated driving is the ability to perform reliable localization and navigation, with which intelligent vehicle/robot systems can make reliable decisions about the driving path or react to sudden events occurring along the path. A map with rich environment information is essential to support an autonomous driving system in meeting these high requirements. Therefore, multi-sensor-based localization and mapping methods are studied in this Thesis. Although some studies have been conducted in this area, a full quality control scheme to guarantee reliability and to detect outliers in localization and mapping systems is still lacking, and the quality of the integrated system has not been sufficiently evaluated. In this research, an extended Kalman filter and smoother based quality control (EKF/KS QC) scheme is investigated and has been successfully applied to different localization and mapping scenarios. An EKF/KS QC toolbox is developed in MATLAB, which can be easily embedded and applied in different localization and mapping scenarios. The major contributions of this research are: a) The equivalence between least squares and smoothing is discussed, and an extended Kalman filter-smoother quality control method is developed according to this equivalence, which can not only detect and identify system model outliers, but can also be used to analyse, control and improve the system quality. Relevant mathematical models of this quality control method have been developed to deal with issues such as singular measurement covariance matrices and numerical instability of smoothing. b) Quality control analysis is conducted for different positioning systems, including Global Navigation Satellite System (GNSS) multi-constellation integration for both Real Time Kinematic (RTK) and Post Processing Kinematic (PPK), and the integration of GNSS and Inertial Navigation System (INS). The results indicate that the PPK method can provide more reliable positioning results than RTK. With the proposed quality control method, the influence of a detected outlier can be mitigated either by directly correcting the input measurement with the estimated outlier value, or by adapting the final estimation results with the estimated influence of the outlier. c) Mathematical modelling and quality control aspects of online simultaneous localization and mapping (SLAM) are examined. A smoother-based offline SLAM method is investigated with quality control. Both outdoor and indoor datasets have been tested with these SLAM methods. Geometry analysis of the SLAM system has been carried out based on the quality control results. System reliability analysis is essential for the SLAM designer, as it can be conducted at an early stage without real-world measurements. d) A least squares based localization method is proposed that treats the High-Definition (HD) map as a sensor source. This map-based sensor information is integrated with other perception sensors, which significantly improves localization efficiency and accuracy. 
Geometry analysis is undertaken with the quality measures to analyse the influence of the geometry upon the estimation solution and the system quality, which can serve as hints for the future design of the localization system. e) A GNSS/INS aided LiDAR mapping and localization procedure is developed. A high-density map is generated offline; LiDAR-based localization can then be undertaken online with this pre-generated map. Quality control is conducted for this system. The results demonstrate that LiDAR-based localization within the map can effectively improve accuracy and reliability compared to the GNSS/INS-only system, especially during periods when the GNSS signal is lost.
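
    Contribution a) above centres on filter/smoother based quality control, detecting and identifying outliers before they corrupt the estimate. The sketch below shows the standard innovation (chi-square) test that such schemes typically build on; it is written in Python/NumPy rather than the thesis' MATLAB toolbox, and the identification, adaptation and smoother stages of the EKF/KS QC scheme are not shown.

        import numpy as np
        from scipy.stats import chi2

        def kf_update_with_innovation_test(x, P, z, H, R, alpha=0.01):
            """One Kalman measurement update guarded by a chi-square test on the
            normalized innovation squared. If the test fails, the measurement is
            flagged as a suspected outlier and the update is skipped."""
            v = z - H @ x                                  # innovation
            S = H @ P @ H.T + R                            # innovation covariance
            nis = float(v @ np.linalg.solve(S, v))         # normalized innovation squared
            if nis > chi2.ppf(1.0 - alpha, df=len(z)):     # detection step
                return x, P, False
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            x = x + K @ v
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P, True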

    Visual Place Recognition: A Tutorial

    Localization is an essential capability for mobile robots. A rapidly growing field of research in this area is Visual Place Recognition (VPR), which is the ability to recognize previously seen places in the world based solely on images. The present work is the first tutorial paper on visual place recognition. It unifies the terminology of VPR and complements prior research in two important directions: 1) It provides a systematic introduction for newcomers to the field, covering topics such as the formulation of the VPR problem, a general-purpose algorithmic pipeline, an evaluation methodology for VPR approaches, and the major challenges for VPR and how they may be addressed. 2) As a contribution for researchers acquainted with the VPR problem, it examines the intricacies of different VPR problem types regarding input, data processing, and output. The tutorial also discusses the subtleties behind the evaluation of VPR algorithms, e.g., the evaluation of a VPR system that has to find all matching database images per query, as opposed to just a single match. Practical code examples in Python illustrate to prospective practitioners and researchers how VPR is implemented and evaluated. Comment: IEEE Robotics & Automation Magazine (RAM).
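
    The tutorial's pipeline, retrieving the best-matching database image for each query and sweeping a decision threshold to obtain a precision-recall curve, can be sketched in a few lines of NumPy. The snippet below is an assumed minimal illustration in the same spirit, not the tutorial's published Python examples; how the image descriptors are computed is left open.

        import numpy as np

        def best_match(query_desc, db_desc):
            """Single-best-match VPR retrieval: cosine similarity between
            L2-normalised global descriptors, one row per image."""
            q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
            d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
            S = q @ d.T                                   # query-by-database similarity
            best = S.argmax(axis=1)
            return best, S[np.arange(len(q)), best]

        def precision_recall(similarities, is_correct, thresholds):
            """Sweep a decision threshold over best-match similarities and report
            (threshold, precision, recall) triples for the evaluation curve."""
            out = []
            for t in thresholds:
                accepted = similarities >= t
                tp = np.sum(accepted & is_correct)
                fp = np.sum(accepted & ~is_correct)
                fn = np.sum(~accepted & is_correct)
                precision = tp / (tp + fp) if tp + fp else 1.0
                recall = tp / (tp + fn) if tp + fn else 0.0
                out.append((t, precision, recall))
            return out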

    The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection

    Where am I? This is one of the most critical questions that any intelligent system should answer in order to decide whether it is navigating in a previously visited area. This problem has long been acknowledged for its challenging nature in simultaneous localization and mapping (SLAM), wherein the robot needs to correctly associate the incoming sensory data with its database to allow consistent map generation. The significant advances in computer vision achieved over the last 20 years, the increased computational power, and the growing demand for long-term exploration have contributed to efficiently performing such a complex task with inexpensive perception sensors. In this article, visual loop closure detection, which formulates a solution based solely on appearance input data, is surveyed. We start by briefly introducing place recognition and SLAM concepts in robotics. Then, we describe the structure of a loop closure detection system, covering an extensive collection of topics, including feature extraction, environment representation, the decision-making step, and the evaluation process. We conclude by discussing open and new research challenges, particularly concerning robustness in dynamic environments, computational complexity, and scalability in long-term operations. The article aims to serve as a tutorial and a position paper for newcomers to visual loop closure detection. Comment: 25 pages, 15 figures.
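
    The system structure surveyed here (feature extraction, environment representation, decision making) is commonly instantiated as a bag-of-visual-words comparison against earlier images. The Python sketch below shows such an instantiation under assumed inputs (a pre-trained visual vocabulary and pre-extracted local descriptors); the threshold, temporal gap and function names are illustrative choices, not the article's.

        import numpy as np

        def bow_histogram(descriptors, vocabulary):
            """Environment representation: quantise local feature descriptors against
            a visual vocabulary (k cluster centres) into a normalised BoW histogram."""
            d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
            words = d.argmin(axis=1)
            hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
            return hist / (np.linalg.norm(hist) + 1e-12)

        def detect_loop(query_hist, past_hists, min_gap=30, threshold=0.8):
            """Decision step: compare the current image against earlier ones, skipping
            the most recent `min_gap` frames to avoid trivial matches, and declare a
            loop closure when the best cosine score exceeds the threshold."""
            if len(past_hists) <= min_gap:
                return None
            candidates = np.stack(past_hists[:-min_gap])
            scores = candidates @ query_hist
            best = int(scores.argmax())
            return (best, float(scores[best])) if scores[best] >= threshold else None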

    Improving perception and locomotion capabilities of mobile robots in urban search and rescue missions

    Deployment of mobile robots in search and rescue missions is a way to make the job of human rescuers safer and more efficient. Such missions, however, require robots to be resilient to the harsh conditions of natural disasters or human-inflicted accidents. They have to operate on unstable rough terrain, in confined spaces or in sensory-deprived environments filled with smoke or dust. Localization, a common task in mobile robotics which involves determining position and orientation with respect to a given coordinate frame, faces these conditions as well. In this thesis, we describe the development of a localization system for a tracked mobile robot intended for search and rescue missions, such as earthquake or industrial accident response. We present a proprioceptive 6-degrees-of-freedom localization system, which arose from an experimental comparison of several possible sensor fusion architectures. The system was then modified to incorporate exteroceptive velocity measurements, which significantly improve accuracy by reducing localization drift. Special attention was given to potential sensor outages and failures, to track slippage that inevitably occurs with this type of robot, to the computational demands of the system, and to the different sampling rates at which sensory data arrive. Additionally, we addressed the problem of kinematic models for tracked odometry on rough terrain containing vertical obstacles. 
Thanks to the research projects the robot was designed for, we had access to training facilities used by the fire brigades of Italy, Germany and the Netherlands. The accuracy and robustness of the proposed localization systems were therefore tested in conditions closely resembling those seen in earthquake aftermath and industrial accidents. The datasets used to test our algorithms are publicly available and are one of the contributions of this thesis. This thesis is presented as a compilation of three published papers and one paper currently under review.
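
    The key idea of correcting slippage-prone track odometry with an exteroceptive velocity measurement can be illustrated with a toy planar dead-reckoning loop. The Python sketch below is a deliberately simplified stand-in (scalar inverse-variance fusion, made-up noise values and names), not the 6-DOF fusion architecture developed in the thesis.

        import numpy as np

        def integrate_pose(track_v, extero_v, extero_valid, yaw_rate, dt,
                           var_track=0.05, var_extero=0.005):
            """Toy planar dead reckoning: forward velocity from track odometry
            (biased by slippage) is fused with an exteroceptive velocity (e.g. from
            scan matching) whenever the latter is available; otherwise odometry is
            used alone. Returns the integrated (x, y, yaw) trajectory."""
            x = y = yaw = 0.0
            poses = []
            for vt, ve, ok, wz in zip(track_v, extero_v, extero_valid, yaw_rate):
                if ok:                                   # inverse-variance fusion
                    w_t, w_e = 1.0 / var_track, 1.0 / var_extero
                    v = (w_t * vt + w_e * ve) / (w_t + w_e)
                else:                                    # exteroceptive outage
                    v = vt
                yaw += wz * dt
                x += v * np.cos(yaw) * dt
                y += v * np.sin(yaw) * dt
                poses.append((x, y, yaw))
            return poses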