14 research outputs found

    Real-Time Simultaneous Localization and Mapping with LiDAR intensity

    We propose a novel real-time LiDAR intensity image-based simultaneous localization and mapping method, which addresses the geometry degeneracy problem in unstructured environments. Traditional LiDAR-based front-end odometry mostly relies on geometric features such as points, lines, and planes. A lack of these features in the environment can lead to the failure of the entire odometry system. To avoid this problem, we extract feature points from the LiDAR-generated point cloud that match features identified in LiDAR intensity images. We then use the extracted feature points to perform scan registration and estimate the robot's ego-motion. For the back-end, we jointly optimize the distance between corresponding feature points and the point-to-plane distance for planes identified in the map. In addition, we use the features extracted from intensity images to detect loop closure candidates from previous scans and perform pose graph optimization. Our experiments show that our method runs in real time with high accuracy and works well under illumination changes and in low-texture, unstructured environments.
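    The back-end described above combines two residual types: distances between matched intensity-image feature points and point-to-plane distances against planes in the map. Below is a minimal NumPy sketch of such a joint cost; the array layouts and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def joint_cost(R, t, feat_src, feat_dst, plane_pts, plane_normals, plane_ds,
               w_feat=1.0, w_plane=1.0):
    """Joint residual: feature-point distances plus point-to-plane distances.

    R, t          : candidate rotation (3x3) and translation (3,)
    feat_src/dst  : (N,3) matched feature points from intensity images
    plane_pts     : (M,3) points associated with map planes
    plane_normals : (M,3) unit normals of those planes
    plane_ds      : (M,)  plane offsets, so a point p lies on the plane when n.p + d = 0
    """
    # Point-to-point residuals between matched intensity features
    feat_res = (feat_src @ R.T + t) - feat_dst                     # (N,3)
    cost_feat = np.sum(feat_res ** 2)

    # Point-to-plane residuals against planes identified in the map
    proj = plane_pts @ R.T + t                                     # (M,3)
    plane_res = np.einsum('ij,ij->i', proj, plane_normals) + plane_ds
    cost_plane = np.sum(plane_res ** 2)

    return w_feat * cost_feat + w_plane * cost_plane
```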

    LIDAR-INERTIAL LOCALIZATION WITH GROUND CONSTRAINT IN A POINT CLOUD MAP

    Real-time localization is a crucial task in various applications, such as autonomous vehicles (AVs), robotics, and smart cities. This study proposes a framework for map-aided LiDAR-inertial localization, with the objective of accurately estimating the trajectory within a point cloud map. The proposed framework addresses the localization problem through factor graph optimization (FGO), enabling the fusion of sensor measurements with designed absolute and relative constraints. Specifically, the framework estimates light detection and ranging (LiDAR) odometry by leveraging an inertial measurement unit (IMU) and registering corresponding feature points. To eliminate accumulated error, this paper employs a ground-plane distance and a map-matching error to constrain the positioning error along the trajectory. Finally, local odometry and constraints are integrated using FGO, including LiDAR odometry, IMU pre-integration, ground constraints, map-matching constraints, and loop closure. The framework was evaluated on an open-source dataset, UrbanNav, with an overall localization accuracy of 2.29 m (root mean square error, RMSE).
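    To illustrate how relative odometry, absolute map-matching, and ground constraints can be fused in a single optimization, here is a toy least-squares sketch over positions only. The data, weights, and position-only state are illustrative assumptions; the paper optimizes full poses in an FGO that also includes IMU pre-integration and loop closure.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy problem: 5 poses (positions only) along a corridor.
K = 5
odom_rel  = np.tile([1.0, 0.0, 0.02], (K - 1, 1))                # noisy forward motion with height drift
map_fixes = [(0, np.zeros(3)), (4, np.array([4.0, 0.1, 0.0]))]   # absolute map-matching results
ground_z  = 0.0                                                   # assumed local ground height

def residuals(x, w_odom=1.0, w_map=0.5, w_ground=0.2):
    poses = x.reshape(K, 3)
    res = [w_odom * ((poses[1:] - poses[:-1]) - odom_rel).ravel()]  # relative (odometry) factors
    for i, xyz in map_fixes:                                        # absolute map-matching factors
        res.append(w_map * (poses[i] - xyz))
    res.append(w_ground * (poses[:, 2] - ground_z))                 # ground-plane factors
    return np.concatenate(res)

# Dead-reckoned initial guess, then joint optimization of all constraints.
x0 = np.cumsum(np.vstack([np.zeros(3), odom_rel]), axis=0).ravel()
sol = least_squares(residuals, x0)
print(sol.x.reshape(K, 3))
```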

    LiPMatch: LiDAR point cloud plane based loop-closure

    This letter presents a point cloud based loop-closure method to correct long-term drift in Light Detection and Ranging based Simultaneous Localization and Mapping systems. In the method, we formulate each keyframe as a fully connected graph whose nodes represent planes. To detect loop closures, the proposed method employs geometric restrictions to define a similarity metric that matches the current keyframe against those in the map. After similarity assessment, the candidate keyframes which comply with the geometric restrictions are further checked successively against the normal constraints of the planes and validated by an improved Iterative Closest Point method. The latter also provides an estimate of the relative pose transformation between the current keyframe and the matched keyframe in the global reference frame. Experimental results demonstrate that the proposed method achieves fast and reliable loop closure. To benefit the community by serving as a benchmark for loop closure, the entire system is made open source on GitHub.
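    One way to realize a similarity metric built from geometric restrictions between plane pairs is sketched below: each keyframe's planes are summarized by pairwise normal angles and centroid distances, and two keyframes are scored by how many pairs agree. The descriptor and thresholds are illustrative assumptions, not the LiPMatch implementation.

```python
import numpy as np

def pairwise_descriptors(normals, centroids):
    """For every plane pair: angle between normals and centroid distance."""
    pairs, desc = [], []
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            cos_a = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
            pairs.append((i, j))
            desc.append((np.arccos(cos_a), np.linalg.norm(centroids[i] - centroids[j])))
    return pairs, np.array(desc)

def keyframe_similarity(kf_a, kf_b, angle_tol=0.1, dist_tol=0.3):
    """Fraction of plane pairs in kf_a whose angle/distance also occur in kf_b.

    kf_a, kf_b: dicts with 'normals' (N,3 unit vectors) and 'centroids' (N,3).
    Tolerances are illustrative, not the values used in LiPMatch.
    """
    _, da = pairwise_descriptors(kf_a['normals'], kf_a['centroids'])
    _, db = pairwise_descriptors(kf_b['normals'], kf_b['centroids'])
    if len(da) == 0 or len(db) == 0:
        return 0.0
    matches = 0
    for a in da:
        diff = np.abs(db - a)
        if np.any((diff[:, 0] < angle_tol) & (diff[:, 1] < dist_tol)):
            matches += 1
    return matches / len(da)
```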

    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, where it assists Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge, there is no dedicated systematic review in this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, which offers detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results of various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website: https://github.com/ShiPC-AI/LPR-Survey
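    Among the evaluation metrics commonly reported for place recognition, a standard one is Recall@1: the fraction of queries whose top-retrieved database entry is a true revisit. A minimal sketch with hypothetical descriptor and position arrays follows; the survey's exact evaluation protocol may differ.

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, dist_thresh=5.0):
    """Fraction of queries whose nearest database descriptor lies within
    dist_thresh metres of the query's true position (a common LPR metric)."""
    hits = 0
    for q, p in zip(query_desc, query_pos):
        nearest = np.argmin(np.linalg.norm(db_desc - q, axis=1))  # descriptor-space match
        if np.linalg.norm(db_pos[nearest] - p) <= dist_thresh:    # geometric verification
            hits += 1
    return hits / len(query_desc)
```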

    Integration of a Minimalistic Set of Sensors for Mapping and Localization of Agricultural Robots

    Robots have recently become ubiquitous in many aspects of daily life. For in-home applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solve the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale, feature-sparse areas, in comparison to urban scenarios, where good features such as pavements, buildings, road lanes, and traffic signs are abundant. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this limits the robot's activities to areas with accessible GNSS signals and fails indoors. In such cases, different types of exteroceptive sensors, such as (RGB, depth, thermal) cameras, laser scanners, and Light Detection and Ranging (LiDAR), and proprioceptive sensors, such as an Inertial Measurement Unit (IMU) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability. For agricultural robots, robustness for long-term operation is as important as cost-effectiveness for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and an IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods designed for urban scenarios to cope with agricultural environments, where the presence of slopes, vegetation, and trees causes traditional approaches to fail. Our mapping method substantially reduces the memory footprint for map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features, which eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimalistic requirements of autonomous agricultural robots, we also show the ability to autonomously traverse between rows in a difficult, zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability that uses only a camera, without explicitly performing mapping or localization.
Finally, our mapping and localization methods are generic and platform-agnostic and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments. All approaches have been published or submitted as peer-reviewed conference papers and journal articles.

    Online, long-term, large-scale simultaneous mapping, localization and planning for a mobile robot

    To navigate in unknown and unstructured places, a robot must be able to map the environment in order to localize itself within it. This problem is known as Simultaneous Localization and Mapping (SLAM). Once the map of the environment has been created, tasks requiring travel from one known location to another can then be planned. The computational load of SLAM depends on the size of the map. A robot has limited onboard computing power to process information online, that is, on board the robot with a data-processing time shorter than the data-acquisition time or the maximum allowed map-update time. Navigating while performing SLAM is therefore limited by the size of the environment to be mapped. To address this problem, the objective is to develop a SPLAM (Simultaneous Planning, Localization and Mapping) algorithm that enables navigation regardless of the size of the environment. To manage the computational load of this algorithm efficiently, the robot's memory is divided into a working memory and a long-term memory. When the online processing constraint is reached, the locations seen least often and not useful for navigation are transferred from working memory to long-term memory. Locations transferred to long-term memory are no longer used for navigation. However, these transferred locations can be retrieved from long-term memory back into working memory when the robot approaches a neighboring location still in working memory. The robot can thus incrementally recall a previously forgotten part of the environment in order to localize itself there for trajectory following. The algorithm, named RTAB-Map, was tested on the AZIMUT-3 robot in a first mapping experiment over five independent sessions, in order to evaluate the system's ability to merge several maps online. The second experiment, with the same robot used over eleven sessions totaling 8 hours of travel, evaluated the robot's ability to navigate autonomously while performing SLAM and continuously planning trajectories over a long period while respecting the online processing constraint. Finally, RTAB-Map is compared to other SLAM systems on four popular datasets for autonomous driving (KITTI), handheld RGB-D scanning (TUM RGB-D), drones (EuRoC), and indoor navigation with a PR2 robot (MIT Stata Center). The results show that RTAB-Map can be used for long periods of autonomous navigation while respecting the online processing constraint, with map quality comparable to state-of-the-art visual and laser rangefinder SLAM approaches. The result is open-source software deployed in a multitude of applications, ranging from inexpensive indoor mobile robots to autonomous cars, drones, and 3D modeling of house interiors.
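    A minimal sketch of the working-memory / long-term-memory policy described above is shown below; the weighting rule, neighbor handling, and timing model are simplified assumptions for illustration, not RTAB-Map's actual implementation.

```python
class MemoryManager:
    """Toy working-memory (WM) / long-term-memory (LTM) manager."""

    def __init__(self, time_budget_s):
        self.time_budget_s = time_budget_s
        self.working = {}    # location_id -> {'weight': visit count, 'neighbors': set of ids}
        self.long_term = {}

    def transfer_if_needed(self, last_update_time_s, current_id):
        """When the online constraint is violated, move the least-often-seen
        locations (excluding the current one and its neighbors) to LTM."""
        while last_update_time_s > self.time_budget_s and len(self.working) > 1:
            protected = {current_id} | self.working.get(current_id, {}).get('neighbors', set())
            candidates = [i for i in self.working if i not in protected]
            if not candidates:
                break
            victim = min(candidates, key=lambda i: self.working[i]['weight'])
            self.long_term[victim] = self.working.pop(victim)
            last_update_time_s *= 0.8   # assume update time shrinks as WM shrinks

    def retrieve_neighbors(self, current_id):
        """Bring back LTM locations adjacent to a location still in WM, so the
        robot can re-localize in a previously 'forgotten' part of the map."""
        for nb in list(self.working.get(current_id, {}).get('neighbors', set())):
            if nb in self.long_term:
                self.working[nb] = self.long_term.pop(nb)
```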