Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts
In this research, adaptive perception for driving automation is discussed, with the aim of enabling a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts, where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both are prone to fail due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work using a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
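The continuously adjusted ground-class model described above can be illustrated as a running Gaussian over recently confirmed ground-pixel features. This is a minimal sketch, not the authors' implementation; the RGB feature choice, window size, and Mahalanobis threshold are all assumptions for illustration:

```python
import numpy as np

class OnlineGroundModel:
    """Gaussian appearance model of the ground class, adjusted online."""

    def __init__(self, dim=3, window=5000):
        self.window = window                  # keep only the latest samples
        self.samples = np.empty((0, dim))

    def update(self, ground_features):
        """Append newly confirmed ground samples; drop the oldest."""
        self.samples = np.vstack([self.samples, ground_features])[-self.window:]

    def mahalanobis(self, features):
        """Distance of each feature vector from the current ground model."""
        mu = self.samples.mean(axis=0)
        cov = np.cov(self.samples.T) + 1e-6 * np.eye(self.samples.shape[1])
        diff = features - mu
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff))

    def is_ground(self, features, thresh=3.0):
        """Classify feature vectors (e.g. per-pixel RGB) as ground."""
        return self.mahalanobis(features) < thresh

# As the vehicle drives, range data (radar or stereo) confirms ground patches
# whose appearance keeps the model current under changing conditions.
rng = np.random.default_rng(0)
model = OnlineGroundModel()
model.update(rng.normal([0.4, 0.5, 0.3], 0.05, size=(500, 3)))
```

In a self-supervised setup of this kind, the range sensor supplies the training labels (which patches really were ground), while the appearance model generalizes the decision to the rest of the image.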
Learning-on-the-Drive: Self-supervised Adaptation of Visual Offroad Traversability Models
Autonomous off-road driving requires understanding traversability, which refers to the suitability of a given terrain to drive over. When off-road vehicles travel at high speed, they need to reason at long range for safe and deliberate navigation. Moreover, vehicles often operate in new environments and under different weather conditions. LiDAR provides accurate estimates that are robust to visual appearance; however, it is often too noisy beyond 30 m for fine-grained estimates due to sparse measurements. Conversely, vision-based models give dense predictions at farther distances but perform poorly at all ranges when out of the training distribution. To address these challenges, we present ALTER, an off-road perception module that adapts on-the-drive to combine the best of both sensors. Our visual model continuously learns from new near-range LiDAR measurements. This self-supervised approach enables accurate long-range traversability prediction in novel environments without hand-labeling. Results on two distinct real-world off-road environments show up to 52.5% improvement in traversability estimation over LiDAR-only estimates and 38.1% improvement over a non-adaptive visual baseline. Comment: 8 pages.
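The adapt-on-the-drive idea can be sketched as an online learning loop: near-range cells carry both visual features and a LiDAR-derived traversability label, and the visual model is updated on those pairs so it can score far-range cells where LiDAR is sparse. This is a hedged toy illustration, not the ALTER implementation; the logistic-regression model, feature dimensions, and simulated label rule are all stand-ins:

```python
import numpy as np

def sgd_step(w, feats, labels, lr=0.1):
    """One logistic-regression update from LiDAR-derived pseudo-labels."""
    probs = 1.0 / (1.0 + np.exp(-(feats @ w)))
    grad = feats.T @ (probs - labels) / len(labels)
    return w - lr * grad

# Simulated drive: each frame provides visual features for near-range cells
# together with LiDAR traversability labels for those same cells.
rng = np.random.default_rng(1)
w = np.zeros(4)
for _ in range(200):                               # frames on the drive
    feats = rng.normal(size=(64, 4))
    labels = (feats[:, 0] > 0).astype(float)       # stand-in LiDAR labels
    w = sgd_step(w, feats, labels)

# The adapted visual model now scores far-range cells where LiDAR is sparse.
far_feats = rng.normal(size=(8, 4))
far_scores = 1.0 / (1.0 + np.exp(-(far_feats @ w)))
```

The design point is that supervision is free: every frame the LiDAR labels the near field, so the visual model keeps tracking the current environment without any hand annotation.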
Visual road following using intrinsic images
We present a real-time vision-based road-following method for mobile robots in outdoor environments. The approach combines an image processing method, which retrieves illumination-invariant images, with an efficient path-following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types in adverse lighting conditions using monocular vision.
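One common formulation of an illumination-invariant ("intrinsic") image maps each RGB pixel to a single channel whose value is unchanged when the pixel is uniformly brightened or shaded. The sketch below shows that formulation as an assumption-laden illustration; the paper's exact transform may differ, and the `alpha` value is camera-dependent (0.48 here is purely illustrative):

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an HxWx3 float image (values in (0, 1]) to a one-channel
    illumination-invariant image via log-chromaticity."""
    eps = 1e-6  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.5 + np.log(g + eps)
            - alpha * np.log(b + eps)
            - (1.0 - alpha) * np.log(r + eps))

# A dark gray pixel and a bright gray pixel map to the same value,
# because uniform intensity scaling cancels in the log combination.
dark = illumination_invariant(np.full((1, 1, 3), 0.2))
bright = illumination_invariant(np.full((1, 1, 3), 0.8))
```

Scaling a pixel by a factor k adds log k to each log term, and the coefficients (1, -alpha, -(1 - alpha)) sum to zero, so the added terms cancel; shadows, which mainly scale intensity, largely disappear in the output.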
Learning Off-Road Terrain Traversability with Self-Supervisions Only
Estimating the traversability of terrain should be reliable and accurate in diverse conditions for autonomous driving in off-road environments. However, learning-based approaches often yield unreliable results when confronted with unfamiliar contexts, and it is challenging to obtain manual annotations frequently for new circumstances. In this paper, we introduce a method for learning traversability from images that uses only self-supervision and no manual labels, enabling it to easily learn traversability in new circumstances. To this end, we first generate self-supervised traversability labels from past driving trajectories by labeling regions traversed by the vehicle as highly traversable. Using the self-supervised labels, we then train a neural network that identifies terrain that is safe to traverse from an image using a one-class classification algorithm. Additionally, we address the limitations of self-supervised labels by incorporating methods for self-supervised learning of visual representations. To conduct a comprehensive evaluation, we collect data in a variety of driving environments and perceptual conditions and show that our method produces reliable estimations in various environments. In addition, the experimental results validate that our method outperforms other self-supervised traversability estimation methods and achieves performance comparable to supervised learning methods trained on manually labeled data. Comment: Accepted to IEEE Robotics and Automation Letters. Our video can be found at https://bit.ly/3YdKan
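The label-generation step above, turning past driving trajectories into image-space traversability labels, amounts to projecting the 3-D points under the vehicle's wheel tracks into the camera and marking those pixels traversable. The sketch below is not the authors' code; the intrinsics, footprint geometry, and image size are hypothetical:

```python
import numpy as np

def project_footprint(points_cam, K, h, w):
    """Project Nx3 footprint points (camera frame, z forward) into a
    binary HxW traversability label mask; points behind the camera and
    outside the image are dropped."""
    mask = np.zeros((h, w), dtype=bool)
    pts = points_cam[points_cam[:, 2] > 0.1]       # keep points in front
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
    uv = np.round(uv).astype(int)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mask[uv[ok, 1], uv[ok, 0]] = True              # traversed => traversable
    return mask

# Illustrative pinhole intrinsics and a straight-ahead track on flat ground,
# 1.5 m below the camera, sampled from 2 m to 20 m ahead.
K = np.array([[300.0, 0.0, 160.0],
              [0.0, 300.0, 120.0],
              [0.0, 0.0, 1.0]])
track = np.stack([np.zeros(50),
                  np.full(50, 1.5),
                  np.linspace(2.0, 20.0, 50)], axis=1)
labels = project_footprint(track, K, 240, 320)
```

Only positive (traversed) labels come out of this process, which is why the abstract pairs it with a one-class classifier: there are no explicitly labeled non-traversable pixels to train against.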
Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors and which predicts where the mobile robot can find buildings and potentially driveable ground.
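One simple way to combine a per-cell occupancy probability from the ground-level map with an independent estimate from aerial-image segmentation is log-odds fusion. This is an assumption for illustration only; the paper's actual segmentation-based fusion is more involved:

```python
import numpy as np

def fuse_logodds(p_ground_map, p_aerial):
    """Combine two independent per-cell occupancy probabilities by
    adding their log-odds and mapping back to a probability."""
    l = (np.log(p_ground_map / (1.0 - p_ground_map))
         + np.log(p_aerial / (1.0 - p_aerial)))
    return 1.0 / (1.0 + np.exp(-l))

# Three cells: both sources agree "wall", ground map is uninformed (0.5),
# and the two sources disagree.
fused = fuse_logodds(np.array([0.9, 0.5, 0.2]),
                     np.array([0.8, 0.8, 0.8]))
```

An uninformed source (probability 0.5) contributes zero log-odds and leaves the other source's estimate unchanged, which is what makes this form convenient for extending a map beyond the robot's own sensor range.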
- …