Observability analysis and optimal sensor placement in stereo radar odometry
Localization is the key perceptual process closing the loop of autonomous navigation, allowing self-driving vehicles to operate in a deliberate way. To ensure robust localization, autonomous vehicles have to implement redundant estimation processes, ideally independent in terms of the physics behind their sensing principles. This paper presents a stereo radar odometry system, which can be used as such a redundant system, complementary to other odometry estimation processes, providing robustness for long-term operability. The presented work is novel with respect to previously published methods in that it contains: (i) a detailed formulation of the Doppler error and its associated uncertainty; (ii) an observability analysis that gives the minimal conditions to infer a 2D twist from radar readings; and (iii) a numerical analysis for optimal vehicle sensor placement. Experimental results are also detailed that validate the theoretical insights.
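The 2D twist inference mentioned in contribution (ii) can be sketched as a linear least-squares problem. The sketch below is illustrative, not the paper's implementation: it assumes a static scene, radars rigidly mounted at known positions on the vehicle, and azimuths expressed in the vehicle frame; the function name and measurement convention are our own.

```python
import numpy as np

def estimate_twist_2d(detections):
    """Estimate the planar twist (vx, vy, omega) from radar Doppler returns.

    detections: iterable of (px, py, theta, doppler) where (px, py) is the
    sensor's mount position on the vehicle, theta the azimuth of the return
    (vehicle frame), and doppler the measured radial velocity of a static
    target. The sensor's velocity is (vx - omega*py, vy + omega*px), so
    doppler = -cos(theta)*(vx - omega*py) - sin(theta)*(vy + omega*px),
    which is linear in the twist; stack all returns and solve least squares.
    """
    A, b = [], []
    for px, py, theta, d in detections:
        c, s = np.cos(theta), np.sin(theta)
        A.append([-c, -s, c * py - s * px])
        b.append(d)
    twist, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return twist  # (vx, vy, omega)
```

With returns from two radars at distinct mount positions (the "stereo" configuration), the stacked system becomes full rank, which is the intuition behind the observability conditions the paper derives formally.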
Static Background Removal in Vehicular Radar: Filtering in Azimuth-Elevation-Doppler Domain
A significant challenge in autonomous driving systems lies in image
understanding within complex environments, particularly dense traffic
scenarios. An effective solution to this challenge is to remove the
background, or static objects, from the scene, thereby enhancing the detection
of moving targets, a key component of overall system performance. In
this paper, we present an efficient algorithm for background removal in
automotive radar applications, specifically utilizing a frequency-modulated
continuous wave (FMCW) radar. Our proposed algorithm follows a three-step
approach, encompassing radar signal preprocessing, three-dimensional (3D)
ego-motion estimation, and notch filter-based background removal in the
azimuth-elevation-Doppler domain. To begin, we model the received signal of the
FMCW multiple-input multiple-output (MIMO) radar and develop a signal
processing framework for extracting four-dimensional (4D) point clouds.
Subsequently, we introduce a robust 3D ego-motion estimation algorithm that
accurately estimates radar ego-motion speed, accounting for Doppler ambiguity,
by processing the point clouds. Additionally, our algorithm leverages the
relationship between Doppler velocity, azimuth angle, elevation angle, and
radar ego-motion speed to identify the spectrum belonging to background
clutter. Finally, we employ notch filters to effectively filter out the
background clutter. The performance of our algorithm is evaluated using both
simulated data and extensive experiments with real-world data. The results
demonstrate its effectiveness in efficiently removing background clutter and
enhancing perception within complex environments. By offering a fast and
computationally efficient solution, our approach effectively addresses the
challenges posed by non-homogeneous environments and real-time processing
requirements.
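The clutter-identification step above rests on the stated relationship between Doppler velocity, azimuth, elevation, and ego-motion speed: a static scatterer seen from a radar translating along its boresight at speed v_ego produces a Doppler of -v_ego * cos(az) * cos(el). A minimal sketch of flagging background points with that relation (the function name, point layout, and tolerance are illustrative assumptions, and the actual paper filters the spectrum with notch filters rather than thresholding a point cloud):

```python
import numpy as np

def background_mask(points, v_ego, tol=0.3):
    """Flag likely static-background points in a 4D radar point cloud.

    points: (N, 3) array of (azimuth [rad], elevation [rad], doppler [m/s]).
    v_ego:  radar ego-motion speed along boresight [m/s].
    A static scatterer satisfies doppler = -v_ego * cos(az) * cos(el);
    points within `tol` m/s of that surface are marked as clutter.
    """
    az, el, dop = points[:, 0], points[:, 1], points[:, 2]
    expected = -v_ego * np.cos(az) * np.cos(el)
    return np.abs(dop - expected) < tol
```

In the paper's pipeline the same relation is used in the azimuth-elevation-Doppler spectrum, where notch filters zero out the clutter ridge instead of dropping individual detections.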
Doppler-only Single-scan 3D Vehicle Odometry
We present a novel 3D odometry method that recovers the full motion of a
vehicle only from a Doppler-capable range sensor. It leverages the radial
velocities measured from the scene, estimating the sensor's velocity from a
single scan. The vehicle's 3D motion, defined by its linear and angular
velocities, is calculated taking into consideration its kinematic model which
provides a constraint between the velocity measured at the sensor frame and the
vehicle frame.
Experiments carried out prove the viability of our single-sensor method
compared to mounting an additional IMU. Our method provides the translation of
the sensor, which cannot be reliably determined from an IMU, as well as its
rotation. Its short-term accuracy and fast operation (~5 ms) make it a suitable
candidate to supply the initialization to more complex localization algorithms
or mapping pipelines. Not only does it reduce the error of the mapper, it does
so at a level of accuracy comparable to an IMU, all without the need to mount
and calibrate an extra sensor on the vehicle.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
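The core of recovering the sensor's velocity from a single Doppler-capable scan can be sketched as follows. Assuming a static scene, each detection with unit bearing vector u_i and measured radial velocity d_i satisfies d_i = -u_i · v, which is linear in the sensor velocity v. This is only a minimal sketch (the function name is ours, and the paper additionally applies the vehicle's kinematic model and would need outlier rejection for real scans with movers):

```python
import numpy as np

def sensor_velocity_from_scan(dirs, dopplers):
    """Estimate the sensor's 3D velocity from one Doppler-capable scan.

    dirs:     (N, 3) unit bearing vectors to the detections (sensor frame).
    dopplers: (N,) measured radial velocities [m/s].
    For a static scene, dopplers = -dirs @ v, so v is the least-squares
    solution of that overdetermined linear system.
    """
    v, *_ = np.linalg.lstsq(-dirs, dopplers, rcond=None)
    return v
```

The recovered sensor velocity is then mapped to the vehicle's linear and angular velocity through the rigid-body constraint between the sensor and vehicle frames, which is where the kinematic model enters.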
ROAMER: Robust Offroad Autonomy using Multimodal State Estimation with Radar Velocity Integration
Reliable offroad autonomy requires low-latency, high-accuracy state estimates
of pose as well as velocity, which remain viable throughout environments with
sub-optimal operating conditions for the utilized perception modalities. As
state estimation remains a single point of failure in the majority of
aspiring autonomous systems, failing to address the environmental degradation
that the perception sensors may experience under the given operating
conditions can be a mission-critical shortcoming. In this work, a method for
integration of radar velocity information in a LiDAR-inertial odometry solution
is proposed, enabling consistent estimation performance even with degraded
LiDAR-inertial odometry. The proposed method utilizes the direct
velocity-measuring capabilities of a Frequency Modulated Continuous Wave
(FMCW) radar sensor to enhance the LiDAR-inertial smoother solution onboard the
vehicle through integration of the forward velocity measurement into the
graph-based smoother. This leads to increased robustness in the overall
estimation solution, even in the absence of LiDAR data. This method was
validated by hardware experiments conducted onboard an all-terrain vehicle
traveling at high speed (~12 m/s) in demanding offroad environments.
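Integrating the radar's forward-speed measurement into a graph-based smoother amounts to adding a one-dimensional factor per measurement. A hedged sketch of such a factor's residual (the function name, frame conventions, and noise value are illustrative assumptions, not ROAMER's actual factor definition):

```python
import numpy as np

def radar_velocity_residual(R_wb, v_w, v_fwd_meas, sigma=0.2):
    """Whitened residual of a forward-velocity factor.

    R_wb:       (3, 3) rotation from body frame to world frame (state).
    v_w:        (3,) estimated velocity in the world frame (state).
    v_fwd_meas: radar-measured forward speed [m/s].
    The world-frame velocity is rotated into the body frame; its x (forward)
    component is compared against the radar measurement and scaled by the
    assumed measurement noise sigma.
    """
    v_body = R_wb.T @ v_w
    return (v_body[0] - v_fwd_meas) / sigma
```

In a factor-graph smoother, this residual (and its Jacobians with respect to rotation and velocity) would be handed to the optimizer alongside the LiDAR and inertial factors, which is what keeps the velocity estimate observable when LiDAR degrades.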
Self-Supervised Velocity Estimation for Automotive Radar Object Detection Networks
This paper presents a method to learn the Cartesian velocity of objects using
an object detection network on automotive radar data. The proposed method is
self-supervised in terms of generating its own training signal for the
velocities. Labels are only required for single-frame, oriented bounding boxes
(OBBs). Labels for the Cartesian velocities or contiguous sequences, which are
expensive to obtain, are not required. The general idea is to pre-train an
object detection network without velocities using single-frame OBB labels, and
then exploit the network's OBB predictions on unlabelled data for velocity
training. In detail, the network's OBB predictions of the unlabelled frames are
updated to the timestamp of a labelled frame using the predicted velocities,
the distances between the updated OBBs of the unlabelled frame and the OBB
predictions of the labelled frame are used to generate a self-supervised
training signal for the velocities. The detection network architecture is
extended by a module to account for the temporal relation of multiple scans and
a module to represent the radars' radial velocity measurements explicitly. A
two-step approach of first training only OBB detection, followed by training
OBB detection and velocities is used. Further, a pre-training with
pseudo-labels generated from radar radial velocity measurements bootstraps the
self-supervised method of this paper. Experiments on the publicly available
nuScenes dataset show that the proposed method almost reaches the velocity
estimation performance of a fully supervised training, but does not require
expensive velocity labels. Furthermore, we outperform a baseline method which
uses only radial velocity measurements as labels.
Comment: Accepted for presentation at the 33rd IEEE Intelligent Vehicles
Symposium (IV 2022), June 5-9, 2022, in Aachen, Germany.
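The self-supervision signal described above can be sketched in a few lines: propagate each unlabelled-frame box by its predicted velocity to the labelled frame's timestamp, then penalise the distance to the matched box predicted there. This is a simplified illustration under our own assumptions (centre-point distance instead of a full OBB metric, matching already done, NumPy instead of a deep-learning framework):

```python
import numpy as np

def velocity_self_supervision_loss(centers_unlab, vels_pred, centers_lab, dt):
    """Self-supervised loss for Cartesian velocity predictions.

    centers_unlab: (N, 2) box centres detected in the unlabelled frame.
    vels_pred:     (N, 2) predicted Cartesian velocities for those boxes.
    centers_lab:   (N, 2) matched box centres predicted in the labelled frame.
    dt:            time offset between the two frames [s].
    Each unlabelled centre is advanced by vels_pred * dt; the mean Euclidean
    distance to the labelled-frame centres is the training signal.
    """
    propagated = centers_unlab + vels_pred * dt
    return np.mean(np.linalg.norm(propagated - centers_lab, axis=1))
```

The loss is zero exactly when the predicted velocities carry each box onto its match, which is why no explicit velocity labels are needed: the single-frame OBB predictions themselves supervise the velocity head.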