28 research outputs found

    Living in a Material World: Learning Material Properties from Full-Waveform Flash Lidar Data for Semantic Segmentation

    Advances in lidar technology have made the collection of 3D point clouds fast and easy. While most lidar sensors return per-point intensity (or reflectance) values along with range measurements, flash lidar sensors are able to provide information about the shape of the return pulse. The shape of the return waveform is affected by many factors, including the distance that the light pulse travels and the angle of incidence with a surface. Importantly, the shape of the return waveform also depends on the material properties of the reflecting surface. In this paper, we investigate whether the material type or class can be determined from the full-waveform response. First, as a proof of concept, we demonstrate that the extra information about material class, if known accurately, can improve performance on scene understanding tasks such as semantic segmentation. Next, we learn two different full-waveform material classifiers: a random forest classifier and a temporal convolutional neural network (TCN) classifier. We find that, in some cases, material types can be distinguished, and that the TCN generally performs better across a wider range of materials. However, factors such as angle of incidence, material colour, and material similarity may hinder overall performance. Comment: In Proceedings of the Conference on Robots and Vision (CRV'23), Montreal, Canada, Jun. 6-8, 2023.
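
    As an illustration of the kind of model the abstract mentions, the sketch below is a minimal temporal convolutional classifier over raw waveform samples, written in PyTorch. The waveform length, number of material classes, and layer sizes are assumptions made for illustration, not values taken from the paper.

        # Minimal sketch of a temporal convolutional network (TCN) that maps a
        # digitised full-waveform lidar return to a material class. The waveform
        # length (256), class count (8) and channel widths are illustrative only.
        import torch
        import torch.nn as nn

        class WaveformTCN(nn.Module):
            def __init__(self, num_classes: int = 8):
                super().__init__()
                self.features = nn.Sequential(
                    # Dilated 1D convolutions grow the receptive field over the pulse shape.
                    nn.Conv1d(1, 32, kernel_size=5, padding=2, dilation=1),
                    nn.ReLU(),
                    nn.Conv1d(32, 32, kernel_size=5, padding=4, dilation=2),
                    nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5, padding=8, dilation=4),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),  # pool over time -> one feature vector per return
                )
                self.classifier = nn.Linear(64, num_classes)

            def forward(self, waveform: torch.Tensor) -> torch.Tensor:
                # waveform: (batch, 1, num_samples) digitised return pulse
                x = self.features(waveform).squeeze(-1)
                return self.classifier(x)  # unnormalised class scores

        if __name__ == "__main__":
            model = WaveformTCN(num_classes=8)
            fake_batch = torch.randn(4, 1, 256)  # four synthetic returns, 256 samples each
            print(model(fake_batch).shape)       # torch.Size([4, 8])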

    Synchronization and calibration between a 3D Lidar and an inertial measurement unit for precise localization of an autonomous vehicle

    Laser remote sensing (Lidar) is a technology increasingly used in the perception layers of autonomous vehicles. Because the vehicle moves during the measurement, Lidar data must be referenced in a fixed frame, which is usually done with an inertial measurement unit (IMU). However, these sensors are not natively designed to work together, so they must be carefully synchronized and calibrated. This article presents a method for characterizing the timing offsets between a 3D Lidar and an inertial measurement unit. It also explains how to apply the usual pose-estimation methods between an IMU and a Lidar when such sensors are used in real conditions.
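
    As a rough illustration of one common way to characterize such a timing offset (not necessarily the method of this article), the rotation rate measured by the IMU can be cross-correlated with the rotation rate derived from lidar odometry; the lag that maximizes the correlation is an estimate of the clock offset. The sampling rate, signal names, and synthetic data below are assumptions.

        # Sketch: estimate the time offset between an IMU and a lidar by cross-correlating
        # the IMU yaw rate with the yaw rate derived from lidar odometry poses.
        # Sampling rate, signal names and the synthetic data are illustrative assumptions.
        import numpy as np

        def estimate_time_offset(imu_rate: np.ndarray, lidar_rate: np.ndarray, dt: float) -> float:
            """Estimate by how many seconds the lidar signal lags the IMU signal.

            Both signals must be resampled onto the same uniform time grid with step dt.
            A positive result means the lidar measurements are delayed w.r.t. the IMU.
            """
            imu = imu_rate - imu_rate.mean()
            lidar = lidar_rate - lidar_rate.mean()
            corr = np.correlate(lidar, imu, mode="full")
            lag_samples = np.argmax(corr) - (imu.size - 1)
            return lag_samples * dt

        if __name__ == "__main__":
            dt = 0.01                             # 100 Hz common grid
            t = np.arange(0.0, 20.0, dt)
            true_offset = 0.13                    # lidar data delayed by 130 ms w.r.t. the IMU
            imu_yaw_rate = np.sin(0.5 * t) + 0.3 * np.sin(1.9 * t) + 0.02 * np.random.randn(t.size)
            lidar_yaw_rate = (np.sin(0.5 * (t - true_offset)) + 0.3 * np.sin(1.9 * (t - true_offset))
                              + 0.05 * np.random.randn(t.size))
            print(f"estimated offset: {estimate_time_offset(imu_yaw_rate, lidar_yaw_rate, dt):.3f} s")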

    Contribution to robust embedded localization for autonomous navigation

    There has been rapid development in autonomous mobile robotics over the past few years, mainly in autonomous vehicles and service robotics. Whatever the field of application, the localization task plays a key role in the intelligence of mobile robots. This thesis therefore explores new challenges in embedding this aspect of artificial intelligence, through two topics.

    First, we tackled the ARGOS Challenge. We proposed a 6-degrees-of-freedom localization method based on multi-layer LiDAR data, and the results show that the localization is accurate enough to support autonomous control of the robot. The targeted applications include oil and gas platform patrolling; such environments are more challenging than the regular environments found in the literature. The proposed method takes into account CPU and memory consumption as well as embeddability, and the likelihood field concept was extended to 3D. The method was benchmarked in a lab and on an industrial site, with single-layer and multi-layer LiDARs and three different robots. The robots are localized in their environment with an average error of 2.5 cm while using 16% of a CPU core.

    Secondly, we focused on topological localization, since the approach used in the ARGOS Challenge requires a small initial search area. Our second approach can find the location of a road vehicle over several square kilometers. The method relies only on sensors widely available in any modern car (ABS and ESP), and the map is a topological representation of OpenStreetMap data. Based on this information, we achieve an average error of 4 m, which is close to GPS-based accuracy.

    Finally, our work relies on a specific experimental methodology. Developing new localization algorithms requires fine tuning, and the testing, reliability, and certification of autonomous systems remain real challenges for both the scientific community and industry. The proposed methodology helps address these challenges by combining simulation, small-scale testing, and full-scale testing.
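
    For readers unfamiliar with the likelihood-field idea that the thesis extends to 3D, the sketch below scores a candidate pose by looking up, for each lidar point transformed into the map frame, a Gaussian of the precomputed distance to the nearest obstacle. The grid resolution, map contents, and scoring constants are illustrative assumptions, not values from the thesis.

        # Sketch of 3D likelihood-field scoring: a voxel grid stores the distance to the
        # nearest occupied cell; a candidate pose is scored by summing a Gaussian of that
        # distance for every transformed lidar point. Resolution and sigma are assumptions.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        RESOLUTION = 0.05  # metres per voxel

        def build_likelihood_field(occupancy: np.ndarray, sigma: float = 0.1) -> np.ndarray:
            """occupancy: boolean 3D voxel grid (True = obstacle)."""
            # Distance (in metres) from every free voxel to the nearest occupied voxel.
            dist = distance_transform_edt(~occupancy) * RESOLUTION
            return np.exp(-0.5 * (dist / sigma) ** 2)

        def score_pose(field: np.ndarray, points: np.ndarray, pose: np.ndarray) -> float:
            """points: (N, 3) lidar points in the sensor frame; pose: 4x4 homogeneous transform."""
            pts_map = (pose[:3, :3] @ points.T).T + pose[:3, 3]
            idx = np.round(pts_map / RESOLUTION).astype(int)
            inside = np.all((idx >= 0) & (idx < field.shape), axis=1)  # keep in-grid points only
            idx = idx[inside]
            return float(field[idx[:, 0], idx[:, 1], idx[:, 2]].sum())

        if __name__ == "__main__":
            occupancy = np.zeros((100, 100, 40), dtype=bool)
            occupancy[50, :, :] = True                   # a wall in the x = 2.5 m plane
            field = build_likelihood_field(occupancy)
            # Synthetic scan of that wall, expressed in a sensor frame placed at the origin.
            scan = np.column_stack([np.full(50, 2.5),
                                    np.linspace(0.5, 4.5, 50),
                                    np.full(50, 1.0)])
            identity = np.eye(4)
            shifted = np.eye(4)
            shifted[0, 3] = 0.3                          # pose error of 30 cm along x
            print("correct pose :", score_pose(field, scan, identity))
            print("shifted pose :", score_pose(field, scan, shifted))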

    Millimeter Wave FMCW RADARs for Perception, Recognition and Localization in Automotive Applications: A Survey

    MmWave (millimeter-wave) Frequency Modulated Continuous Wave (FMCW) RADARs are sensors based on frequency-modulated electromagnetic waves that see their environment in 3D at long range. The recent introduction of millimeter-wave RADARs operating from 60 GHz to 300 GHz has broadened their potential applications thanks to improved accuracy in angle, range, and velocity. MmWave FMCW RADARs have better resolution and accuracy than narrowband and ultra-wideband (UWB) RADARs. Compared with cameras and LiDARs, they offer several strong advantages, such as long-range perception and robustness to lighting and weather conditions, while being cheaper. However, their noisy and lower-density outputs, even compared to other RADAR technologies, and their ability to measure targets' velocities require algorithms tailored specifically to them. The working principles of mmWave FMCW RADARs are presented, together with the different ways of representing their data and the corresponding applications. This paper describes algorithms and applications adapted or developed for these sensors in automotive applications. Finally, current challenges and directions for future work are presented.
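
    To make the working principle concrete, the short sketch below simulates the beat signal obtained by mixing a transmitted FMCW chirp with the echo from a single target and recovers the range from the dominant beat frequency, using R = c * f_beat * T / (2 * B). The chirp parameters are illustrative assumptions, not values from the survey.

        # Sketch of FMCW ranging: simulate the beat signal produced by mixing the
        # transmitted chirp with the echo of a single target, then recover the range
        # from the dominant beat frequency. Chirp parameters are illustrative only.
        import numpy as np

        C = 3e8               # speed of light (m/s)
        B = 1e9               # chirp bandwidth: 1 GHz sweep
        T_CHIRP = 50e-6       # chirp duration: 50 microseconds
        FS = 20e6             # ADC sampling rate of the beat signal
        TARGET_RANGE = 30.0   # metres (ground truth for the simulation)

        slope = B / T_CHIRP                  # sweep rate in Hz per second
        tau = 2 * TARGET_RANGE / C           # round-trip delay of the echo
        t = np.arange(0, T_CHIRP, 1 / FS)

        # After mixing, the beat signal is (approximately) a single tone at slope * tau.
        beat = np.cos(2 * np.pi * slope * tau * t)

        spectrum = np.abs(np.fft.rfft(beat * np.hanning(t.size)))
        freqs = np.fft.rfftfreq(t.size, d=1 / FS)
        f_beat = freqs[np.argmax(spectrum)]

        estimated_range = C * f_beat * T_CHIRP / (2 * B)
        print(f"beat frequency: {f_beat / 1e6:.2f} MHz, estimated range: {estimated_range:.2f} m")
        print(f"range resolution c/(2B): {C / (2 * B):.3f} m")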

    Vehicle Positioning in Road Networks without GPS


    IMU/LIDAR based positioning of a gangway for maintenance operations on wind farms
