LiDAR based relative pose and covariance estimation for communicating vehicles exchanging a polygonal model of their shape
Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection
In this paper, we improve single-vehicle LiDAR-based 3D object detection models by extending their capacity to process point cloud sequences instead of individual point clouds. To this end, we extend our previous work on rectifying the shadow effect in concatenated point clouds to boost the accuracy of multi-frame detection models; the extension incorporates an HD map and distills an Oracle model. Next, we further increase single-vehicle perception performance through multi-agent collaboration via Vehicle-to-Everything (V2X) communication. We devise a simple yet effective collaboration method that achieves a better bandwidth-performance tradeoff than prior work while minimizing both the changes made to single-vehicle detection models and the assumptions on inter-agent synchronization. Experiments on the V2X-Sim dataset show that our collaboration method reaches 98% of the performance of early collaboration while consuming only as much bandwidth as late collaboration, which is 0.03% of that of early collaboration. The code will be released at https://github.com/quan-dao/practical-collab-perception.
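To make the bandwidth comparison above concrete, the following back-of-the-envelope sketch contrasts the per-frame payloads of early collaboration (raw point clouds) and late collaboration (detected boxes). The point count, box count, and field layouts are illustrative assumptions, not figures from the paper.

```python
# Rough per-frame V2X payload comparison: early collaboration (raw LiDAR
# sweep) vs. late collaboration (detected 3D boxes). All sizes below are
# illustrative assumptions, not numbers reported in the paper.

BYTES_PER_FLOAT = 4

def early_collab_bytes(num_points=120_000, floats_per_point=4):
    """Raw LiDAR sweep: (x, y, z, intensity) per point."""
    return num_points * floats_per_point * BYTES_PER_FLOAT

def late_collab_bytes(num_boxes=50, floats_per_box=8):
    """Detected boxes: (x, y, z, l, w, h, yaw, score) per box."""
    return num_boxes * floats_per_box * BYTES_PER_FLOAT

early = early_collab_bytes()
late = late_collab_bytes()
print(f"early: {early / 1e6:.2f} MB, late: {late / 1e3:.2f} kB, "
      f"ratio: {100 * late / early:.3f}% of early")
```

With these assumed sizes, exchanging boxes costs on the order of a tenth of a percent of exchanging raw sweeps, which is the regime the abstract refers to.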
Consistent decentralized cooperative localization for autonomous vehicles using LiDAR, GNSS, and HD maps
To navigate autonomously, a vehicle must be able to localize itself with respect to its driving environment and to the vehicles with which it interacts. This work presents a decentralized cooperative localization method based on the exchange of Local Dynamic Maps (LDM), which are cyber-physical representations of the driving environment containing poses and kinematic information about nearby vehicles. An LDM acts as an abstraction layer that makes the cooperation framework sensor-agnostic, and it can even improve the localization of a sensorless communicating vehicle. With this goal in mind, this work focuses on the consistency of LDM estimates: uncertainty must be properly modeled so that the estimation error can be statistically bounded for a given confidence level. To obtain a consistent system, we first introduce a decentralized fusion framework that can cope with LDMs whose errors have an unknown degree of correlation. Second, we present a consistent method for estimating the relative pose between vehicles, using a 2D LiDAR with a point-to-line metric within an iterative-closest-point approach, combined with communicated polygonal shape models. Finally, we add a bias estimator to reduce position errors when non-differential GNSS receivers are used, based on visual observations of features geo-referenced in a High-Definition (HD) map. Real experiments were conducted, and the consistency of our approach was demonstrated in a platooning scenario using two experimental vehicles. The full experimental dataset used in this work is publicly available.
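The abstract mentions a decentralized fusion that copes with estimates whose errors have an unknown degree of correlation. A standard tool for that problem is Covariance Intersection; the sketch below is a minimal, generic illustration of the idea and is not necessarily the exact fusion rule used in the paper.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega=None):
    """Fuse two estimates of the same state whose cross-correlation is
    unknown. If omega is not given, pick the weight that minimizes the
    trace of the fused covariance via a coarse line search over [0, 1]."""
    if omega is None:
        candidates = np.linspace(1e-3, 1 - 1e-3, 99)
        traces = [
            np.trace(np.linalg.inv(w * np.linalg.inv(P_a)
                                   + (1 - w) * np.linalg.inv(P_b)))
            for w in candidates
        ]
        omega = candidates[int(np.argmin(traces))]
    info = omega * np.linalg.inv(P_a) + (1 - omega) * np.linalg.inv(P_b)
    P = np.linalg.inv(info)
    x = P @ (omega * np.linalg.inv(P_a) @ x_a
             + (1 - omega) * np.linalg.inv(P_b) @ x_b)
    return x, P

# Example: two 2D position estimates of the same vehicle (illustrative values).
x1, P1 = np.array([2.0, 1.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([2.5, 0.8]), np.diag([3.0, 0.5])
x_fused, P_fused = covariance_intersection(x1, P1, x2, P2)
print(x_fused, np.diag(P_fused))
```

The fused covariance is guaranteed not to be overconfident regardless of the (unknown) correlation between the two inputs, which is why this family of rules is attractive for consistency-oriented decentralized fusion.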
Decentralized cooperative localization for autonomous vehicles using LiDAR, GNSS receivers, and an HD map
Distributed asynchronous cooperative localization with inaccurate GNSS positions
Pose and covariance matrix propagation issues in cooperative localization with LiDAR perception
This work describes a cooperative pose estimation solution in which several vehicles can perceive each other and share a geometric model of their shape via wireless communication. We describe two formulations of the cooperation. In the first, a vehicle estimates its global pose from that of a neighboring vehicle by localizing the neighbor in its own body frame. In the second, a vehicle uses its own pose and its perception to help localize another vehicle. An iterative minimization approach computes the relative pose between the two vehicles from a LiDAR-based perception method and a shared polygonal geometric model of the vehicles. This study shows how to obtain an observation of the pose of one vehicle given the perception and the pose communicated by another, without any filtering, so as to characterize the cooperative problem independently of any other sensor. The accuracy and consistency of the proposed approaches are evaluated on real data from on-road experiments, showing that this kind of strategy for cooperative pose estimation can be accurate. We also analyze the advantages and drawbacks of the two approaches on a simple case study.
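For readers unfamiliar with pose and covariance propagation, the following minimal SE(2) sketch shows how a global pose observation can be obtained by composing a communicated neighbor pose with a LiDAR-derived relative pose, with first-order covariance propagation through the composition Jacobians. Frame conventions, variable names, and the assumption that the two inputs are uncorrelated are illustrative and not taken from the paper.

```python
import numpy as np

def compose(pose_ab, pose_bc):
    """Compose 2D poses (x, y, theta): frame a -> b, then b -> c."""
    xa, ya, ta = pose_ab
    xb, yb, tb = pose_bc
    c, s = np.cos(ta), np.sin(ta)
    return np.array([xa + c * xb - s * yb,
                     ya + s * xb + c * yb,
                     ta + tb])

def compose_cov(pose_ab, pose_bc, P_ab, P_bc):
    """First-order covariance of the composed pose, assuming the two
    input poses are uncorrelated (a simplifying assumption)."""
    xb, yb, _ = pose_bc
    c, s = np.cos(pose_ab[2]), np.sin(pose_ab[2])
    # Jacobian w.r.t. the first pose (the communicated neighbor pose).
    J1 = np.array([[1.0, 0.0, -s * xb - c * yb],
                   [0.0, 1.0,  c * xb - s * yb],
                   [0.0, 0.0,  1.0]])
    # Jacobian w.r.t. the second pose (e.g. the relative pose from LiDAR ICP).
    J2 = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return J1 @ P_ab @ J1.T + J2 @ P_bc @ J2.T

# Illustrative values: neighbor's global pose and the observed relative pose.
pose_world_neighbor = np.array([10.0, 5.0, 0.3])
pose_neighbor_ego = np.array([-8.0, 0.5, 0.05])
P_world_neighbor = np.diag([0.5, 0.5, 0.01])
P_neighbor_ego = np.diag([0.05, 0.05, 0.002])

pose_world_ego = compose(pose_world_neighbor, pose_neighbor_ego)
P_world_ego = compose_cov(pose_world_neighbor, pose_neighbor_ego,
                          P_world_neighbor, P_neighbor_ego)
print(pose_world_ego, np.diag(P_world_ego))
```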
Aligning Bird-Eye View Representation of Point Cloud Sequences using Scene Flow
Low-resolution point clouds are challenging for object detection methods due to their sparsity. Densifying the current point cloud by concatenating it with its predecessors is a popular solution to this challenge. Such concatenation is possible thanks to the removal of ego-vehicle motion using odometry, a method called Ego Motion Compensation (EMC). Thanks to the added points, EMC significantly improves the performance of single-frame detectors. However, it suffers from the shadow effect: the points of dynamic objects are scattered along their trajectories. This results in a misalignment between feature maps and objects' locations, limiting the performance improvement to stationary and slow-moving objects. Scene flow allows aligning point clouds in 3D space, thus naturally resolving the misalignment in feature space. Observing that scene flow computation shares several components with 3D object detection pipelines, we develop a plug-in module that enables single-frame detectors to compute scene flow and rectify their Bird-Eye View representation. Experiments on the NuScenes dataset show that our module leads to a significant increase (up to 16%) in the Average Precision of large vehicles, which, interestingly, exhibit the most severe shadow effect.
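A minimal sketch of Ego Motion Compensation as described above: past sweeps are mapped into the current ego frame with the odometry-derived rigid transform and then concatenated. Function names and the data layout are assumptions for illustration; note that this compensation cannot correct the motion of dynamic objects, which is exactly the shadow effect the paper's scene-flow module addresses.

```python
import numpy as np

def ego_motion_compensate(past_points, T_current_from_past):
    """Map a past LiDAR sweep (N, 3) into the current ego frame using the
    4x4 rigid transform from the past ego frame to the current one
    (obtained from odometry). Dynamic objects are NOT corrected: their own
    motion leaves 'shadow' trails along their trajectories."""
    homo = np.hstack([past_points, np.ones((past_points.shape[0], 1))])
    return (T_current_from_past @ homo.T).T[:, :3]

def concatenate_sweeps(current_points, past_sweeps):
    """Densify the current sweep by stacking compensated past sweeps.
    `past_sweeps` is a list of (points, T_current_from_past) pairs."""
    parts = [current_points]
    parts += [ego_motion_compensate(pts, T) for pts, T in past_sweeps]
    return np.vstack(parts)
```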
Attention-based Proposals Refinement for 3D Object Detection
Recent advances in 3D object detection have come from developing the refinement stage of voxel-based Region Proposal Networks (RPN) to better strike the balance between accuracy and efficiency. A popular approach among state-of-the-art frameworks is to divide proposals, or Regions of Interest (ROI), into grids and extract features for each grid location before synthesizing them into ROI features. While achieving impressive performance, such an approach involves several hand-crafted components (e.g. grid sampling, set abstraction) that require expert knowledge to tune correctly. This paper proposes a data-driven approach to ROI feature computation named APRO3D-Net, which consists of a voxel-based RPN and a refinement stage built on Vector Attention. Unlike the original multi-head attention, Vector Attention assigns different weights to different channels within a point feature and can thus capture a more sophisticated relation between pooled points and the ROI. Our method achieves a competitive performance of 84.85 AP for the Car class at moderate difficulty on the KITTI validation set and 47.03 mAP (averaged over 10 classes) on NuScenes, while having the fewest parameters among closely related methods and attaining an inference speed of 15 FPS on an NVIDIA V100 GPU. The code is released at https://github.com/quan-dao/APRO3D-Net.
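The following PyTorch sketch illustrates the vector attention idea described above: the attention weight is a per-channel vector computed from query-key differences rather than a single scalar per point. Layer names, shapes, and the MLP used to produce the weights are illustrative assumptions and do not reproduce the APRO3D-Net implementation.

```python
import torch
import torch.nn as nn

class VectorAttentionPool(nn.Module):
    """Generic vector attention for refining one ROI feature from pooled
    point features. Each point receives a C-dimensional weight vector
    (one entry per channel) instead of a scalar attention score."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.weight_mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, roi_feat, point_feats):
        # roi_feat: (B, C) query per ROI; point_feats: (B, N, C) pooled points.
        q = self.to_q(roi_feat).unsqueeze(1)   # (B, 1, C)
        k = self.to_k(point_feats)             # (B, N, C)
        v = self.to_v(point_feats)             # (B, N, C)
        w = self.weight_mlp(q - k)             # (B, N, C) per-channel weights
        w = torch.softmax(w, dim=1)            # normalize over the N points
        return (w * v).sum(dim=1)              # (B, C) refined ROI feature

# Usage with illustrative shapes: 4 ROIs, 128 pooled points each, 64 channels.
pool = VectorAttentionPool(dim=64)
refined = pool(torch.randn(4, 64), torch.randn(4, 128, 64))
```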
System Architecture of a Driverless Electric Car in the Grand Cooperative Driving Challenge
This paper presents the complete system architecture of a connected driverless electric car designed to participate in the Grand Cooperative Driving Challenge 2016. One of the main goals of this challenge was to demonstrate the feasibility of multiple autonomous vehicles cooperating via wireless communications on public roads. Several complex cooperative scenarios were considered, including the merging of two lanes and cooperation at an intersection. We describe in some detail an implementation using the open-source PACPUS framework that successfully completed the different tasks of the challenge. Our description covers localization, mapping, perception, control, communication, and the human-machine interface. Some experimental results recorded in real time during the challenge are reported.