Observability analysis and optimal sensor placement in stereo radar odometry
Localization is the key perceptual process closing the loop of autonomous navigation, allowing self-driving vehicles to operate in a deliberate way. To ensure robust localization, autonomous vehicles have to implement redundant estimation processes, ideally independent in the underlying physics of their sensing principles. This paper presents a stereo radar odometry system that can be used as such a redundant system, complementary to other odometry estimation processes, providing robustness for long-term operability. The presented work is novel with respect to previously published methods in that it contains: (i) a detailed formulation of the Doppler error and its associated uncertainty; (ii) an observability analysis that gives the minimal conditions to infer a 2D twist from radar readings; and (iii) a numerical analysis for optimal vehicle sensor placement. Experimental results that validate the theoretical insights are also detailed.
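To make the observability result in (ii) concrete, the sketch below (our illustration, not the paper's implementation) recovers a planar twist (vx, vy, omega) by linear least squares from the Doppler returns of static targets. The function name, the data layout, and the assumption that azimuths are expressed in the vehicle frame are ours:

```python
import numpy as np

def twist_from_doppler(detections, sensor_offsets):
    """Least-squares 2D twist (vx, vy, omega) from radar Doppler returns.

    detections: iterable of (sensor_id, azimuth_rad, radial_velocity)
                for static targets, azimuths in the vehicle frame.
    sensor_offsets: dict sensor_id -> (tx, ty), the radar mounting
                    position in the vehicle frame.
    """
    A, b = [], []
    for sid, az, vr in detections:
        tx, ty = sensor_offsets[sid]
        c, s = np.cos(az), np.sin(az)
        # Static target: vr = -(v + omega x t) . [c, s]
        #              = -(c*vx + s*vy + omega*(s*tx - c*ty))
        A.append([c, s, s * tx - c * ty])
        b.append(-vr)
    twist, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return twist  # (vx, vy, omega)
```

Note that with a single radar mounted at the origin (tx = ty = 0) the third column of A vanishes and omega becomes unobservable; an offset second radar restores rank, which is why sensor placement matters.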
Static Background Removal in Vehicular Radar: Filtering in Azimuth-Elevation-Doppler Domain
A significant challenge in autonomous driving systems lies in image
understanding within complex environments, particularly dense traffic
scenarios. An effective solution to this challenge involves removing the
background or static objects from the scene, so as to enhance the detection of
moving targets as a key component of improving overall system performance. In
this paper, we present an efficient algorithm for background removal in
automotive radar applications, specifically utilizing a frequency-modulated
continuous wave (FMCW) radar. Our proposed algorithm follows a three-step
approach, encompassing radar signal preprocessing, three-dimensional (3D)
ego-motion estimation, and notch filter-based background removal in the
azimuth-elevation-Doppler domain. To begin, we model the received signal of the
FMCW multiple-input multiple-output (MIMO) radar and develop a signal
processing framework for extracting four-dimensional (4D) point clouds.
Subsequently, we introduce a robust 3D ego-motion estimation algorithm that
accurately estimates radar ego-motion speed, accounting for Doppler ambiguity,
by processing the point clouds. Additionally, our algorithm leverages the
relationship between Doppler velocity, azimuth angle, elevation angle, and
radar ego-motion speed to identify the spectrum belonging to background
clutter. Finally, we employ notch filters to effectively filter out the
background clutter. The performance of our algorithm is evaluated using both
simulated data and extensive experiments with real-world data. The results
demonstrate its effectiveness in efficiently removing background clutter and
enhancing perception within complex environments. By offering a fast and
computationally efficient solution, our approach effectively addresses
challenges posed by non-homogeneous environments and real-time processing
requirements.
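The paper implements the background removal as notch filters in the azimuth-elevation-Doppler spectrum; the sketch below (ours, assuming an [azimuth, elevation, range, doppler] point layout and a hypothetical threshold parameter) applies the same Doppler-geometry relationship at the 4D point-cloud level:

```python
import numpy as np

def remove_static_background(points, v_ego, threshold=0.25):
    """Drop detections whose Doppler matches the static-world prediction.

    points: (N, 4) array of [azimuth, elevation, range, doppler];
            angles in radians, Doppler in m/s.
    v_ego:  (3,) estimated radar ego-velocity in the sensor frame, m/s.
    threshold: notch half-width in m/s (tuning parameter).
    """
    az, el, doppler = points[:, 0], points[:, 1], points[:, 3]
    # Unit line-of-sight vector for every detection.
    d = np.stack([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)], axis=1)
    # A static scatterer's Doppler is the negated projection of the
    # ego-velocity onto the line of sight.
    predicted = -d @ v_ego
    moving = np.abs(doppler - predicted) > threshold
    return points[moving]
```

Points whose measured Doppler lies inside the notch around the static-world prediction are treated as background clutter and dropped.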
Doppler-only Single-scan 3D Vehicle Odometry
We present a novel 3D odometry method that recovers the full motion of a
vehicle only from a Doppler-capable range sensor. It leverages the radial
velocities measured from the scene, estimating the sensor's velocity from a
single scan. The vehicle's 3D motion, defined by its linear and angular
velocities, is calculated using the vehicle's kinematic model, which constrains
the velocity measured at the sensor frame to the velocity at the vehicle frame.
Experiments demonstrate the viability of our single-sensor method compared to
mounting an additional IMU. Our method provides the translation of the sensor,
which cannot be reliably determined from an IMU, as well as its rotation. Its
short-term accuracy and fast operation (~5 ms) make it a suitable candidate to
supply the initialization to more complex localization algorithms or mapping
pipelines. Not only does it reduce the error of the mapper, it does so at a
level of accuracy comparable to that of an IMU, all without the need to mount
and calibrate an extra sensor on the vehicle.
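For intuition, here is a minimal sketch (ours, not the authors' pipeline) of the single-scan step: for static scatterers the measured Doppler obeys vr = -d . v, so the sensor's linear velocity follows by least squares, with a crude re-fit loop standing in for proper outlier handling. The paper's full method additionally exploits the vehicle's kinematic model to recover the angular velocity, which this sketch omits:

```python
import numpy as np

def sensor_velocity_from_scan(directions, radial_velocities, iters=3, k=2.0):
    """Single-scan sensor velocity from Doppler, with crude outlier rejection.

    directions: (N, 3) unit line-of-sight vectors to the detections.
    radial_velocities: (N,) measured Doppler velocities, m/s.
    Re-fits a few times, discarding residuals beyond k sigma
    (e.g. returns from moving objects).
    """
    d = np.asarray(directions)
    vr = np.asarray(radial_velocities)
    mask = np.ones(len(vr), dtype=bool)
    v = np.zeros(3)
    for _ in range(iters):
        # Static world: vr = -d . v  ->  linear least squares for v.
        v, *_ = np.linalg.lstsq(-d[mask], vr[mask], rcond=None)
        residuals = vr + d @ v
        sigma = residuals[mask].std() + 1e-9
        mask = np.abs(residuals) < k * sigma
    return v  # sensor linear velocity in its own frame
```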
Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges
Radar is a key component of the suite of perception sensors used for safe and
reliable navigation of autonomous vehicles. Its unique capabilities include
high-resolution velocity imaging, detection of agents in occlusion and over
long ranges, and robust performance in adverse weather conditions. However,
using radar data presents several challenges: it is characterized by low
resolution, sparsity, clutter, high uncertainty, and a lack of good datasets.
These challenges have limited radar deep learning research. As a result,
current radar models are often influenced by lidar and vision models, which are
focused on optical features that are relatively weak in radar data, thus
resulting in under-utilization of radar's capabilities and diminishing its
contribution to autonomous perception. This review seeks to encourage further
deep learning research on autonomous radar data by 1) identifying key research
themes, and 2) offering a comprehensive overview of current opportunities and
challenges in the field. Topics covered include early and late fusion,
occupancy flow estimation, uncertainty modeling, and multipath detection. The
paper also discusses radar fundamentals and data representation, presents a
curated list of recent radar datasets, and reviews state-of-the-art lidar and
vision models relevant for radar research. For a summary of the paper and more
results, visit the website: autonomous-radars.github.io
Extrinsic Calibration of 2D Millimetre-Wavelength Radar Pairs Using Ego-Velocity Estimates
Correct radar data fusion depends on knowledge of the spatial transform
between sensor pairs. Current methods for determining this transform operate by
aligning identifiable features in different radar scans, or by relying on
measurements from another, more accurate sensor. Feature-based alignment
requires the sensors to have overlapping fields of view or necessitates the
construction of an environment map. Several existing techniques require bespoke
retroreflective radar targets. These requirements limit both where and how
calibration can be performed. In this paper, we take a different approach:
instead of attempting to track targets or features, we rely on ego-velocity
estimates from each radar to perform calibration. Our method enables
calibration of a subset of the transform parameters, including the yaw and the
axis of translation between the radar pair, without the need for a shared field
of view or for specialized targets. In general, the yaw and the axis of
translation are the most important parameters for data fusion, the most likely
to vary over time, and the most difficult to calibrate manually. We formulate
calibration as a batch optimization problem, show that the radar-radar system
is identifiable, and specify the platform excitation requirements. Through
simulation studies and real-world experiments, we establish that our method is
more reliable and accurate than state-of-the-art methods. Finally, we
demonstrate that the full rigid body transform can be recovered if relatively
coarse information about the platform rotation rate is available.Comment: Accepted to the 2023 IEEE/ASME International Conference on Advanced
Intelligent Mechatronics (AIM 2023), Seattle, Washington, USA, June 27- July
1, 202
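As a rough illustration (our sketch, not the authors' batch formulation), assume planar motion, radar 1 at the platform origin, and per-scan ego-velocity estimates from both radars. The rigid-body relation R(yaw) * v2 = v1 + omega x t is linear in (cos yaw, sin yaw, tx, ty), so a least-squares solve recovers the extrinsics; when omega stays near zero the translation columns vanish, mirroring the identifiability discussion above:

```python
import numpy as np

def calibrate_radar_pair(v1, v2, omega):
    """Planar extrinsics of radar 2 w.r.t. radar 1 from ego-velocity pairs.

    v1, v2: (K, 2) per-scan ego-velocity estimates of each radar, each in
            its own sensor frame.
    omega:  (K,) platform yaw rate (even a coarse estimate suffices).
    Returns (yaw, t): the relative yaw and the position t = (tx, ty) of
    radar 2 in the radar-1 frame, from R(yaw) v2_k = v1_k + omega_k x t,
    which is linear in the unknowns (cos yaw, sin yaw, tx, ty).
    """
    K = len(omega)
    A = np.zeros((2 * K, 4))
    b = np.zeros(2 * K)
    for k in range(K):
        (v2x, v2y), w = v2[k], omega[k]
        # c*v2x - s*v2y + w*ty = v1x
        # s*v2x + c*v2y - w*tx = v1y
        A[2 * k] = [v2x, -v2y, 0.0, w]
        A[2 * k + 1] = [v2y, v2x, -w, 0.0]
        b[2 * k], b[2 * k + 1] = v1[k]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    yaw = np.arctan2(x[1], x[0])  # normalizes (c, s) implicitly
    return yaw, x[2:]  # translation is unobservable if omega is always zero
```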