Radar-on-Lidar: metric radar localization on prior lidar maps
Radar and lidar are two different range sensors, each with pros and cons for
various perception tasks on mobile robots and autonomous vehicles. In this
paper, a Monte Carlo system is used to localize a robot carrying a rotating
radar sensor on prior 2D lidar maps. We first train a conditional generative
adversarial network to translate raw radar data into lidar-like data,
obtaining reliable radar points from the generator. An efficient radar
odometry is then incorporated into the Monte Carlo system. Combined with the
initial guess from odometry, a proposed measurement model matches the radar
data against the prior lidar maps for final 2D positioning. We demonstrate the
effectiveness of the proposed localization framework on a public multi-session
dataset. The experimental results show that our system achieves high accuracy
for long-term localization in outdoor scenes.
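To make the matching step concrete, the following is a minimal sketch of how a Monte Carlo measurement model might score particles by projecting radar points (after GAN translation to lidar-like form) into a likelihood field built from the prior lidar map. All names, the grid layout, and the likelihood-field construction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weigh_particles(particles, radar_points, likelihood_field, resolution, origin):
    """Score pose hypotheses against a prior 2D lidar map (illustrative sketch).

    particles: (N, 3) array of [x, y, yaw] pose hypotheses.
    radar_points: (M, 2) GAN-translated points in the sensor frame.
    likelihood_field: 2D grid; each cell holds p(hit), e.g. derived from the
    distance to the nearest obstacle in the lidar map.
    """
    weights = np.zeros(len(particles))
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        # Transform sensor-frame points into the map frame.
        pts = radar_points @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        # Convert metric coordinates to grid indices and clip to bounds.
        cells = np.clip(((pts - origin) / resolution).astype(int),
                        0, np.array(likelihood_field.shape) - 1)
        # Per-point likelihoods combine multiplicatively (independence
        # assumption); the epsilon avoids zero weights.
        weights[i] = np.prod(likelihood_field[cells[:, 0], cells[:, 1]] + 1e-9)
    return weights / weights.sum()
```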
Doppler-aware Odometry from FMCW Scanning Radar
This work explores Doppler information from a millimetre-wave (mm-W)
Frequency-Modulated Continuous-Wave (FMCW) scanning radar to make odometry
estimation more robust and accurate. Firstly, Doppler information is added to
the scan masking process to enhance correlative scan matching. Secondly, we
train a Neural Network (NN) for regressing forward velocity directly from a
single radar scan; we fuse this estimate with the correlative scan matching
estimate and show improved robustness to bad estimates caused by challenging
environment geometries, e.g. narrow tunnels. We test our method with a novel
custom dataset which is released with this work at
https://ori.ox.ac.uk/publications/datasets.
Comment: Accepted to ITSC 202
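One simple way to realize the described fusion is inverse-variance weighting of the two scalar forward-velocity estimates. The sketch below is an assumption about how such a fusion could work, not the paper's exact scheme; the variances and all names are hypothetical.

```python
# A minimal sketch, assuming each forward-velocity estimate comes with a
# (possibly heuristic) variance; the paper's actual fusion may differ.
def fuse_velocity(v_scan, var_scan, v_net, var_net):
    """Inverse-variance (maximum-likelihood) fusion of two scalar estimates."""
    w_scan, w_net = 1.0 / var_scan, 1.0 / var_net
    v_fused = (w_scan * v_scan + w_net * v_net) / (w_scan + w_net)
    var_fused = 1.0 / (w_scan + w_net)
    return v_fused, var_fused

# In a degenerate geometry such as a narrow tunnel, the scan matcher's
# variance grows, so the Doppler-informed network estimate dominates.
v, var = fuse_velocity(v_scan=4.8, var_scan=2.5, v_net=5.2, var_net=0.2)
```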
Keep off the Grass: Permissible Driving Routes from Radar with Weak Audio Supervision
Reliable outdoor deployment of mobile robots requires the robust
identification of permissible driving routes in a given environment. The
performance of LiDAR and vision-based perception systems deteriorates
significantly in the presence of certain environmental factors, e.g. rain,
fog, or darkness. Perception systems based on FMCW scanning radar maintain full
performance regardless of environmental conditions and with a longer range than
alternative sensors. Learning to segment a radar scan based on driveability in
a fully supervised manner is not feasible as labelling each radar scan on a
bin-by-bin basis is both difficult and time-consuming to do by hand. We
therefore weakly supervise the training of the radar-based classifier through
an audio-based classifier that is able to predict the terrain type underneath
the robot. By combining odometry, GPS and the terrain labels from the audio
classifier, we are able to construct a terrain labelled trajectory of the robot
in the environment which is then used to label the radar scans. Using a
curriculum learning procedure, we then train a radar segmentation network to
generalise beyond the initial labelling and to detect all permissible driving
routes in the environment.
Comment: accepted for publication at the IEEE Intelligent Transportation Systems Conference (ITSC) 202
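The labelling idea can be sketched as follows: poses from the terrain-labelled trajectory are projected into each radar scan's polar grid so that the traversed cells inherit the audio classifier's terrain label. The grid dimensions, bin size, and all names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def label_radar_scan(scan_pose, trajectory, n_azimuths=400, n_bins=3768,
                     bin_size_m=0.0432):
    """Project a terrain-labelled trajectory into one radar scan (sketch).

    scan_pose: (x, y, yaw) of the scan in the world frame.
    trajectory: iterable of ((x, y), terrain_label) world-frame samples.
    Returns an (n_azimuths, n_bins) label grid; -1 marks unlabelled cells.
    """
    labels = np.full((n_azimuths, n_bins), -1, dtype=np.int8)
    x0, y0, yaw = scan_pose
    for (x, y), terrain in trajectory:
        dx, dy = x - x0, y - y0
        rng = np.hypot(dx, dy)                          # range to the sample
        az = (np.arctan2(dy, dx) - yaw) % (2 * np.pi)   # sensor-frame azimuth
        a = int(az / (2 * np.pi) * n_azimuths) % n_azimuths
        b = int(rng / bin_size_m)
        if b < n_bins:
            labels[a, b] = terrain   # e.g. 1 = permissible, 0 = not
    return labels
```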
RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar
This paper presents an efficient annotation procedure and an application
thereof to end-to-end, rich semantic segmentation of the sensed environment
using FMCW scanning radar. We advocate radar over the traditional sensors used
for this task as it operates at longer ranges and is substantially more robust
to adverse weather and illumination conditions. We avoid laborious manual
labelling by exploiting the largest radar-focused urban autonomy dataset
collected to date, correlating radar scans with RGB cameras and LiDAR sensors,
for which semantic segmentation is an already consolidated procedure. The
training procedure leverages a state-of-the-art natural image segmentation
system which is publicly available and as such, in contrast to previous
approaches, allows for the production of copious labels for the radar stream by
incorporating four camera and two LiDAR streams. Additionally, the losses are
computed taking into account labels out to the radar sensor horizon by
accumulating LiDAR returns along a pose-chain ahead of and behind the current
vehicle position. Finally, we present the network with multi-channel radar scan inputs
in order to deal with ephemeral and dynamic scene objects.
Comment: submitted to IEEE Intelligent Vehicles Symposium (IV) 202
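The accumulation step can be pictured as below: labelled LiDAR returns gathered at poses ahead of and behind the current one are transformed into the current radar frame, densifying supervision out to the radar horizon. This is a hedged sketch with assumed names and a 2D SE(2) simplification, not the authors' pipeline.

```python
import numpy as np

def accumulate_labelled_points(pose_chain, T_radar_from_world):
    """Gather labelled LiDAR returns along a pose-chain into the radar frame.

    pose_chain: list of (T_world_from_pose (3x3 homogeneous SE(2) matrix),
                points (N, 2) in that pose's frame, labels (N,)).
    T_radar_from_world: 3x3 matrix mapping world coords into the radar frame.
    """
    all_pts, all_labels = [], []
    for T_world_from_pose, pts, labels in pose_chain:
        homo = np.hstack([pts, np.ones((len(pts), 1))])             # (N, 3)
        in_radar = (T_radar_from_world @ T_world_from_pose @ homo.T).T[:, :2]
        all_pts.append(in_radar)
        all_labels.append(labels)
    return np.vstack(all_pts), np.concatenate(all_labels)
```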
Sense-Assess-eXplain (SAX): Building Trust in Autonomous Vehicles in Challenging Real-World Driving Scenarios
This paper discusses ongoing work in demonstrating research in mobile
autonomy in challenging driving scenarios. In our approach, we address
fundamental technical issues to overcome critical barriers to assurance and
regulation for large-scale deployments of autonomous systems. To this end, we
present how we build robots that (1) can robustly sense and interpret their
environment using traditional as well as unconventional sensors; (2) can assess
their own capabilities; and (3), vitally for the purposes of assurance and trust,
can provide causal explanations of their interpretations and assessments. As it
is essential that robots are safe and trusted, we design, develop, and
demonstrate fundamental technologies in real-world applications to overcome
critical barriers which impede the current deployment of robots in economically
and socially important areas. Finally, we describe ongoing work in the
collection of an unusual, rare, and highly valuable dataset.
Comment: accepted for publication at the IEEE Intelligent Vehicles Symposium
(IV), Workshop on Ensuring and Validating Safety for Automated Vehicles
(EVSAV), 2020, project URL:
https://ori.ox.ac.uk/projects/sense-assess-explain-sa
RadarLCD: Learnable Radar-based Loop Closure Detection Pipeline
Loop Closure Detection (LCD) is an essential task in robotics and computer
vision, serving as a fundamental component for various applications across
diverse domains. These applications encompass object recognition, image
retrieval, and video analysis. LCD consists of identifying whether a robot has
returned to a previously visited location, referred to as a loop, and then
estimating the related roto-translation with respect to the analyzed location.
Despite the numerous advantages of radar sensors, such as their ability to
operate under diverse weather conditions and provide a longer sensing range
compared to other commonly used sensors (e.g., cameras or LiDARs), integrating
radar data remains an arduous task due to intrinsic noise and distortion. To
address this challenge, this research introduces RadarLCD, a novel supervised
deep learning pipeline specifically designed for Loop Closure Detection using
the Frequency-Modulated Continuous-Wave (FMCW) radar sensor. As a
learning-based LCD methodology explicitly designed for radar systems, RadarLCD
makes a significant contribution by leveraging the pre-trained HERO (Hybrid
Estimation Radar Odometry) model. Although HERO was originally developed for
radar odometry, its features are used to select keypoints crucial to the LCD
task. The methodology
undergoes evaluation across a variety of FMCW Radar dataset scenes, and it is
compared to state-of-the-art systems such as Scan Context for Place Recognition
and ICP for Loop Closure. The results demonstrate that RadarLCD surpasses the
alternatives in multiple aspects of Loop Closure Detection.
Comment: 7 pages, 2 figures
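For readers unfamiliar with the task, the retrieval core of a learned LCD pipeline can be sketched as follows: a query scan embedding is compared against a database of past embeddings, and a loop candidate is declared when cosine similarity clears a threshold. This illustrates the generic task, not RadarLCD's specific architecture; all names and the threshold are assumptions.

```python
import numpy as np

def detect_loop(query_emb, db_embs, threshold=0.85):
    """query_emb: (D,) scan descriptor; db_embs: (K, D) past descriptors.

    Returns (index, similarity) of the best match, or (None, similarity)
    when no past scan clears the threshold.
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity with each past scan
    best = int(np.argmax(sims))
    if sims[best] > threshold:
        return best, float(sims[best])  # candidate loop; then estimate the
                                        # relative pose, e.g. with ICP
    return None, float(sims[best])
```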