Synchronisation et calibrage entre un Lidar 3D et une centrale inertielle pour la localisation précise d'un véhicule autonome (Synchronization and calibration between a 3D Lidar and an inertial measurement unit for the accurate localization of an autonomous vehicle)
Laser remote sensing (Lidar) is a technology increasingly used, especially in the perception and localization layers of autonomous vehicles. As the vehicle moves during measurement, Lidar data must be referenced in a fixed frame, which is usually done with an inertial measurement unit (IMU). However, these sensors are not designed to work together natively, so they must be carefully synchronized and geometrically calibrated. This article presents a method for characterizing the timing offsets between a 3D Lidar and an IMU. It also explains how to implement the usual methods from the literature for pose estimation between an IMU and a Lidar on a vehicle operated in real conditions.
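The core idea of the abstract — correct each Lidar timestamp by a characterized clock offset, interpolate the IMU pose at that time, and reference the point in a fixed frame — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names are invented, the position is interpolated linearly, the extrinsic calibration is a fixed rigid transform, and IMU orientation interpolation is omitted for brevity.

```python
import numpy as np

def lidar_point_to_fixed_frame(p_lidar, t_point, dt_offset,
                               imu_t, imu_pos, R_ext, t_ext):
    """Reference one Lidar point in a fixed frame (illustrative sketch).

    p_lidar      : (3,) point in the Lidar frame
    t_point      : Lidar timestamp of the point
    dt_offset    : characterized timing offset between Lidar and IMU clocks
    imu_t        : (N,) IMU timestamps; imu_pos : (N, 3) IMU positions
    R_ext, t_ext : extrinsic Lidar-to-IMU calibration (rotation, translation)
    """
    # Correct the Lidar timestamp into the IMU time base.
    t = t_point + dt_offset
    # Linearly interpolate the IMU position at the corrected time
    # (a real implementation would also interpolate orientation, e.g. SLERP).
    pos = np.array([np.interp(t, imu_t, imu_pos[:, i]) for i in range(3)])
    # Apply the extrinsic calibration, then the interpolated vehicle pose.
    return R_ext @ np.asarray(p_lidar) + t_ext + pos

# Example with synthetic data: vehicle moves 1 m along x over 1 s.
imu_t = np.array([0.0, 1.0])
imu_pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
p = lidar_point_to_fixed_frame([1.0, 2.0, 3.0], 0.5, 0.0,
                               imu_t, imu_pos, np.eye(3), np.zeros(3))
# p is approximately [1.5, 2.0, 3.0]
```

The sketch makes the two distinct problems of the article visible in code: `dt_offset` is the synchronization result, while `R_ext`/`t_ext` are the geometric calibration result.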
Particle filter meets hybrid octrees: an octree-based ground vehicle localization approach without learning
Road and Railway Smart Mobility: A High-Definition Ground Truth Hybrid Dataset
A robust visual understanding of complex urban environments using passive optical sensors is a demanding and essential task for autonomous navigation. The problem is heavily determined by the quality of the available dataset and the number of instances it includes. Regardless of benchmark results, a perception model is only reliable and capable of sound decision making if its dataset covers the exact domain of the end-use case. To increase the number of instances in datasets used for the training and validation of Autonomous Vehicles (AV), Advanced Driver Assistance Systems (ADAS), and autonomous driving, and to fill the void left by the absence of any dataset for railway smart mobility, we introduce our multimodal hybrid dataset, ESRORAD. ESRORAD comprises 34 videos, 2.7 k virtual images, and 100 k real images of both road and railway scenes collected in two Normandy towns, Rouen and Le Havre. All images are annotated with 3D bounding boxes covering at least three classes: persons, cars, and bicycles. Crucially, our dataset is the first of its kind, built to be the best in terms of volume, annotation richness, and scene diversity. Our accompanying study provides an in-depth analysis of the dataset's characteristics, as well as a performance evaluation of various state-of-the-art models trained on other popular datasets, namely KITTI and nuScenes. Examples of image annotations and prediction results from our lightweight 3D object detection algorithms are included in the ESRORAD dataset. Finally, the dataset is available online; the repository consists of 52 datasets with their respective annotations.
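To make concrete what a 3D bounding-box annotation of the kind described above looks like in code, here is a minimal sketch. The `Box3D` fields and helper functions are hypothetical illustrations and do not reflect ESRORAD's actual file format or schema.

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    """Hypothetical 3D bounding-box annotation (not ESRORAD's real schema)."""
    label: str                          # e.g. "person", "car", "bicycle"
    center: tuple[float, float, float]  # (x, y, z) in metres
    size: tuple[float, float, float]    # (length, width, height) in metres
    yaw: float                          # heading angle in radians

def filter_by_label(boxes, label):
    """Keep only the boxes of a given class."""
    return [b for b in boxes if b.label == label]

def volume(box):
    """Volume of a box in cubic metres."""
    l, w, h = box.size
    return l * w * h

boxes = [
    Box3D("car", (10.0, 2.0, 0.8), (4.2, 1.8, 1.5), 0.0),
    Box3D("person", (5.0, -1.0, 0.9), (0.6, 0.6, 1.8), 0.0),
]
cars = filter_by_label(boxes, "car")
# len(cars) == 1
```

Per-class queries like this are the basic operation behind the class-balance and annotation-density analyses that dataset studies typically report.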
VIKINGS: An Autonomous Inspection Robot for the ARGOS Challenge
The VIKINGS Autonomous Inspection Robot: Competing in the ARGOS Challenge
This paper presents the overall architecture of the VIKINGS robot, one of the five contenders in the ARGOS challenge and the winner of two of its competitions. VIKINGS is an autonomous or remote-operated robot for the inspection of oil and gas sites, able to assess various petrochemical risks using embedded sensors and processing. As described in this article, our robot can autonomously monitor all the elements of a petrochemical process on a multi-storey oil platform (reading gauges, checking the state of valves, verifying the proper functioning of pumps) while facing many hazards (leaks, obstacles, or holes in its path). The aim of this article is to present the major components of our robot's architecture and the algorithms we developed for certain functions (localization, gauge reading, etc.). We also present the methodology we adopted, which allowed us to succeed in this challenge.
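The gauge-reading function mentioned in the abstract ultimately reduces to mapping a detected needle angle to a value on the dial. A minimal sketch of that final step follows; the angle conventions and the assumption of a linear dial scale are illustrative, and the paper's actual vision pipeline (needle detection, dial localization) is not shown.

```python
def gauge_value(needle_deg, dial_min_deg, dial_max_deg, value_min, value_max):
    """Map a detected needle angle to a gauge reading by linear interpolation.

    Assumes the dial scale is linear between its end stops; the angle
    conventions (where 0 deg points, rotation direction) are hypothetical.
    """
    frac = (needle_deg - dial_min_deg) / (dial_max_deg - dial_min_deg)
    return value_min + frac * (value_max - value_min)

# Needle halfway around a 270-degree dial graduated from 0 to 10 bar:
# gauge_value(135.0, 0.0, 270.0, 0.0, 10.0) returns 5.0
```

Real dials are often nonlinear or have uneven graduations, in which case a piecewise mapping calibrated per gauge model would replace the single linear segment.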