Where Should We Place LiDARs on the Autonomous Vehicle? - An Optimal Design Approach
Autonomous vehicle manufacturers recognize that LiDAR provides accurate 3D
views and precise distance measures under highly uncertain driving conditions.
Its practical implementation, however, remains costly. This paper investigates
the optimal LiDAR configuration problem for utility maximization. Using the
perception area and the non-detectable subspace, we formulate the design
procedure as a min-max optimization problem and propose a bio-inspired
measure -- volume to surface area ratio (VSR) -- as an easy-to-evaluate cost
function representing the notion of the size of the non-detectable subspaces of
a given configuration. We then adopt a cuboid-based approach to show that the
proposed VSR-based measure is a well-suited proxy for object detection rate. It
is found that the Artificial Bee Colony evolutionary algorithm makes the
cost-function computation tractable. Our experiments highlight the
effectiveness of the proposed VSR measure in identifying cost-effective
configurations and provide insightful analyses that can improve the design of
AV systems.

Comment: 7 pages including the references, accepted by the International
Conference on Robotics and Automation (ICRA), 201
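As a rough illustration of the VSR idea above, the volume-to-surface-area ratio of a cuboid non-detectable subspace is cheap to evaluate; the function name and example dimensions below are illustrative assumptions, not taken from the paper:

```python
def cuboid_vsr(length, width, height):
    """Volume-to-surface-area ratio of a length x width x height cuboid.

    A compact blind spot (closer to a cube) yields a higher VSR than a
    thin slab of the same volume, matching the intuition that compact
    non-detectable subspaces are worse for detection.
    """
    volume = length * width * height
    surface = 2.0 * (length * width + length * height + width * height)
    return volume / surface

# Two blind spots of equal 8 m^3 volume: a cube scores higher than a slab.
print(cuboid_vsr(2.0, 2.0, 2.0))   # 8/24 ~ 0.333
print(cuboid_vsr(8.0, 4.0, 0.25))  # 8/70 ~ 0.114
```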
Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
An extensive, precise and robust recognition and modeling of the environment
is a key factor for next generations of Advanced Driver Assistance Systems and
development of autonomous vehicles. In this paper, a real-time approach for the
perception of multiple lanes on highways is proposed. Lane markings detected by
camera systems and observations of other traffic participants provide the input
data for the algorithm. The information is accumulated and fused using
GraphSLAM and the result constitutes the basis for a multilane clothoid model.
To allow incorporation of additional information sources, input data is
processed in a generic format. Evaluation of the method is performed by
comparing real data, collected with an experimental vehicle on highways, to a
ground truth map. The results show that ego and adjacent lanes are robustly
detected with high quality up to a distance of 120 m. Compared to serial lane
detection, the method achieves an increased detection range for the ego lane
and continuous perception of neighboring lanes. It can potentially be utilized
for the longitudinal and lateral control of self-driving vehicles.
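Clothoid lane models of the kind mentioned above commonly approximate a lane marking's lateral offset by a third-order polynomial in longitudinal distance (offset, heading, curvature, curvature rate). This minimal sketch assumes that standard parameterization and is not taken from the paper:

```python
def clothoid_offset(x, y0, heading, c0, c1):
    """Third-order clothoid approximation of a lane marking's lateral
    offset at longitudinal distance x: y0 is the lateral offset at the
    vehicle, heading the relative yaw angle, c0 the curvature, and c1
    the curvature rate."""
    return y0 + heading * x + 0.5 * c0 * x ** 2 + (c1 * x ** 3) / 6.0

# Lateral offset of a gently curving marking 120 m ahead of the vehicle:
print(clothoid_offset(120.0, 1.8, 0.0, 1e-4, 0.0))  # 1.8 + 0.72 = 2.52
```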
Observability analysis and optimal sensor placement in stereo radar odometry
© 2016 IEEE. Localization is the key perceptual process closing the loop of autonomous navigation, allowing self-driving vehicles to operate in a deliberate way. To ensure robust localization, autonomous vehicles must implement redundant estimation processes, ideally independent in terms of the physics behind their sensing principles. This paper presents a stereo radar odometry system that can serve as such a redundant system, complementary to other odometry estimation processes and providing robustness for long-term operability. The presented work is novel with respect to previously published methods in that it contains: (i) a detailed formulation of the Doppler error and its associated uncertainty; (ii) an observability analysis that gives the minimal conditions to infer a 2D twist from radar readings; and (iii) a numerical analysis for optimal vehicle sensor placement. Experimental results that validate the theoretical insights are also detailed.
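The 2D-twist observability point above can be illustrated with a least-squares sketch: Doppler readings from a single radar constrain only that radar's own linear velocity, but with radars mounted at different lever arms the vehicle's angular rate also becomes observable. The measurement model and sign convention here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def twist_from_doppler(bearings, radial_vels, lever_arms):
    """Least-squares 2D twist (vx, vy, w) from radar Doppler readings of
    static targets. bearings: target bearing (rad) in the vehicle frame;
    radial_vels: measured Doppler velocities (negative when closing);
    lever_arms: (tx, ty) mounting offset of the radar that made each
    detection."""
    rows, rhs = [], []
    for theta, vr, (tx, ty) in zip(bearings, radial_vels, lever_arms):
        c, s = np.cos(theta), np.sin(theta)
        # The observing radar's velocity is v + w x t; a static target's
        # Doppler is minus its projection on the line of sight, so
        # -vr = vx*c + vy*s + w*(tx*s - ty*c).
        rows.append([c, s, tx * s - ty * c])
        rhs.append(-vr)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

With detections from only one radar the third column is fixed by a single lever arm and the angular rate cannot be separated from linear velocity; two distinct mounting positions make the stacked system full rank, mirroring the stereo setup.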
Satellite Navigation for the Age of Autonomy
Global Navigation Satellite Systems (GNSS) brought navigation to the masses.
Coupled with smartphones, the blue dot in the palm of our hands has forever
changed the way we interact with the world. Looking forward, cyber-physical
systems such as self-driving cars and aerial mobility are pushing the limits of
what localization technologies including GNSS can provide. This autonomous
revolution requires a solution that supports safety-critical operation,
centimeter positioning, and cyber-security for millions of users. To meet these
demands, we propose a navigation service from Low Earth Orbiting (LEO)
satellites that delivers precision in part through faster satellite motion,
higher-power signals for added robustness to interference, autonomous
constellation integrity monitoring, and encryption/authentication for
resistance to spoofing attacks. This paradigm is enabled by the 'New Space'
movement, where highly capable satellites and components are now built on
assembly lines and launch costs have decreased by more than tenfold. Such a
ubiquitous positioning service enables a consistent and secure standard where
trustworthy information can be validated and shared, extending the electronic
horizon from sensor line of sight to an entire city. This enables the
situational awareness needed for true safe operation to support autonomy at
scale.

Comment: 11 pages, 8 figures, 2020 IEEE/ION Position, Location and Navigation
Symposium (PLANS)
Investigating the Impact of Multi-LiDAR Placement on Object Detection for Autonomous Driving
The past few years have witnessed an increasing interest in improving the
perception performance of LiDARs on autonomous vehicles. While most of the
existing works focus on developing new deep learning algorithms or model
architectures, we study the problem from the physical design perspective, i.e.,
how different placements of multiple LiDARs influence the learning-based
perception. To this end, we introduce an easy-to-compute information-theoretic
surrogate metric to quickly and quantitatively evaluate LiDAR placement for 3D
detection of different types of objects. We also present a new data collection,
detection model training and evaluation framework in the realistic CARLA
simulator to evaluate disparate multi-LiDAR configurations. Using several
prevalent placements inspired by the designs of self-driving companies, we show
the correlation between our surrogate metric and object detection performance
of different representative algorithms on KITTI through extensive experiments,
validating the effectiveness of our LiDAR placement evaluation approach. Our
results show that sensor placement is non-negligible in 3D point cloud-based
object detection and can contribute up to a 10% discrepancy in average
precision in challenging 3D object detection settings. We
believe that this is one of the first studies to quantitatively investigate the
influence of LiDAR placement on perception performance. The code is available
at https://github.com/HanjiangHu/Multi-LiDAR-Placement-for-3D-Detection.

Comment: CVPR 2022 camera-ready version: 15 pages, 14 figures, 9 tables
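An information-theoretic surrogate of the kind described can be sketched as the total binary entropy of per-voxel observation probabilities over a region of interest; this is an illustrative stand-in for such a metric, not the paper's exact formulation:

```python
import math

def placement_uncertainty(voxel_probs):
    """Total binary entropy (bits) over a voxelized region of interest.

    voxel_probs[i] is the probability that voxel i is perceived under a
    candidate LiDAR placement; lower totals mean less residual
    uncertainty about the scene, so a lower score is better."""
    h = 0.0
    for p in voxel_probs:
        if 0.0 < p < 1.0:
            h -= p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p)
    return h

# Four maximally uncertain voxels carry 4 bits of residual uncertainty,
# while confidently observed or missed voxels contribute nothing:
print(placement_uncertainty([0.5, 0.5, 0.5, 0.5]))  # 4.0
print(placement_uncertainty([1.0, 1.0, 0.0]))       # 0.0
```

Ranking candidate placements by such a score, then checking the ranking against detector average precision, mirrors the correlation study the abstract describes.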