931 research outputs found
Monocular navigation for long-term autonomy
We present a reliable and robust monocular navigation system for an autonomous vehicle.
The proposed method is computationally efficient, requires only off-the-shelf equipment, and does not need any additional infrastructure such as radio beacons or GPS.
Unlike traditional localization algorithms, which use advanced mathematical methods to determine the vehicle's position, our method takes a more practical approach.
In our case, an image-feature-based monocular vision technique determines only the heading of the vehicle while the vehicle's odometry is used to estimate the distance traveled.
We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded.
The experiments demonstrate that the method can cope with variable illumination, lighting deficiency and both short- and long-term environment changes.
This makes the method especially suitable for deployment in scenarios that require long-term autonomous operation.
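To make the principle concrete, here is a minimal dead-reckoning sketch of the idea this abstract describes: vision corrects only the heading, while odometry supplies the traveled distance. The function and parameter names are ours, not the authors'.

```python
import math

def update_pose(x, y, heading, distance, visual_correction):
    """One dead-reckoning step (hypothetical interface, not the paper's code).

    distance          -- traveled distance reported by wheel odometry
    visual_correction -- heading adjustment (rad) obtained from image-feature
                         matching against the taught route
    """
    heading += visual_correction           # vision determines heading only
    x += distance * math.cos(heading)      # odometry supplies the distance
    y += distance * math.sin(heading)
    return x, y, heading
```

Because the visual correction repeatedly pulls the heading back toward the taught route, heading errors do not accumulate, which is the intuition behind the bounded-error claim.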
Efficient 2D-3D Matching for Multi-Camera Visual Localization
Visual localization, i.e., determining the position and orientation of a
vehicle with respect to a map, is a key problem in autonomous driving. We
present a multi-camera visual-inertial localization algorithm for large-scale
environments. To efficiently and effectively match features against a pre-built
global 3D map, we propose a prioritized feature matching scheme for
multi-camera systems. In contrast to existing works designed for monocular
cameras, we (1) tailor the prioritization function to the multi-camera setup
and (2) run feature matching and pose estimation in parallel. This
significantly accelerates the matching and pose estimation stages and allows us
to dynamically adapt the matching efforts based on the surrounding environment.
In addition, we show how pose priors can be integrated into the localization
system to increase efficiency and robustness. Finally, we extend our algorithm
by fusing the absolute pose estimates with motion estimates from a multi-camera
visual-inertial odometry (VIO) pipeline. This results in a system that provides
reliable and drift-free pose estimation. Extensive experiments show that our
localization runs fast and robustly under varying conditions, and that our
extended algorithm enables reliable real-time pose estimation.
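As an illustration of the prioritized matching idea (not the authors' implementation), the sketch below matches features in descending priority order and stops early once enough 2D-3D correspondences are found; `priority_fn` and `match_fn` are assumed callbacks standing in for the paper's tailored prioritization function and descriptor matcher.

```python
import heapq

def prioritized_matching(features, match_fn, priority_fn, enough=100):
    """Match 2D features against a 3D map in priority order, stopping
    once `enough` correspondences are found.

    features    -- 2D features from all cameras in the rig
    priority_fn -- scores how promising a feature is (assumed callback;
                   the paper tailors this to the multi-camera setup)
    match_fn    -- returns a 2D-3D correspondence or None (assumed callback)
    """
    # Higher priority should come out first, so negate for the min-heap.
    heap = [(-priority_fn(f), i, f) for i, f in enumerate(features)]
    heapq.heapify(heap)
    matches = []
    while heap and len(matches) < enough:
        _, _, feat = heapq.heappop(heap)
        m = match_fn(feat)
        if m is not None:
            matches.append(m)
    return matches
```

The early-exit condition is what lets such a system adapt its matching effort to the environment: feature-rich scenes terminate after few attempts, while sparse scenes examine more candidates.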
Evaluating Visual Odometry Methods for Autonomous Driving in Rain
The increasing demand for autonomous vehicles has created a need for robust
navigation systems that can also operate effectively in adverse weather
conditions. Visual odometry is a technique used in these navigation systems,
enabling the estimation of vehicle position and motion using input from onboard
cameras. However, visual odometry accuracy can be significantly impacted in
challenging weather conditions, such as heavy rain, snow, or fog. In this
paper, we evaluate a range of visual odometry methods, including our
DROID-SLAM-based heuristic approach. Specifically, these algorithms are tested on both
clear and rainy weather urban driving data to evaluate their robustness. We
compiled a dataset comprising a range of rainy weather conditions from
different cities: the Oxford RobotCar dataset from Oxford, the
4Seasons dataset from Munich, and an internal dataset collected in Singapore. We
evaluated different visual odometry algorithms for both monocular and stereo
camera setups using the Absolute Trajectory Error (ATE). Our evaluation
suggests that the Depth and Flow for Visual Odometry (DF-VO) algorithm with
monocular setup worked well for short-range distances (< 500 m), and our proposed
DROID-SLAM-based heuristic approach for the stereo setup performed relatively
well for long-term localization. Both algorithms performed consistently well
across all rain conditions.
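The Absolute Trajectory Error used in this evaluation is a standard metric; a minimal version is sketched below, assuming time-synchronized (N, 3) position arrays and the usual closed-form rigid alignment (Umeyama/Horn) before taking the RMSE.

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE Absolute Trajectory Error after rigid alignment.

    gt, est -- (N, 3) arrays of time-synchronized positions.
    Aligns `est` to `gt` with a closed-form rotation + translation
    (Kabsch/Umeyama), then returns the RMSE of the residuals.
    """
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                         # rotation aligning est to gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))
```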
RadarSLAM: Radar based Large-Scale SLAM in All Weathers
Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been
presented in the last decade using different sensor modalities. However, robust
SLAM in extreme weather conditions is still an open research problem. In this
paper, RadarSLAM, a full radar-based graph SLAM system, is proposed for
reliable localization and mapping in large-scale environments. It is composed
of pose tracking, local mapping, loop closure detection and pose graph
optimization, enhanced by novel feature matching and probabilistic point cloud
generation on radar images. Extensive experiments are conducted on a public
radar dataset and several self-collected radar sequences, demonstrating the
state-of-the-art reliability and localization accuracy in various adverse
weather conditions, such as dark night, dense fog, and heavy snowfall.
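For readers unfamiliar with graph SLAM, the sketch below shows the bookkeeping behind the pose tracking / loop closure / pose graph optimization pipeline the abstract lists. It is our schematic illustration, not RadarSLAM's code; the optimization itself would typically be delegated to a solver such as g2o or Ceres.

```python
class PoseGraph:
    """Minimal pose-graph bookkeeping (illustrative, not the authors' code)."""

    def __init__(self):
        self.nodes = {}    # node id -> 4x4 keyframe pose in the world frame
        self.edges = []    # (i, j, relative_pose, information_matrix)

    def add_odometry_edge(self, i, j, rel_pose, info):
        # Constraint between consecutive keyframes from pose tracking.
        self.edges.append((i, j, rel_pose, info))

    def add_loop_closure(self, i, j, rel_pose, info):
        # Same constraint type, but it ties distant nodes together and
        # lets the optimizer correct accumulated drift along the loop.
        self.edges.append((i, j, rel_pose, info))
```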
Featureless visual processing for SLAM in changing outdoor environments
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
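A rough sketch of the kind of featureless, low-resolution image comparison RatSLAM-style systems use is given below: mean absolute intensity difference, scanned over small horizontal shifts to tolerate viewpoint change. The names and the shift range are our assumptions, not values from the paper.

```python
import numpy as np

def template_difference(img_a, img_b, max_shift=4):
    """Compare two low-resolution grayscale images without features:
    mean absolute difference over small horizontal offsets.
    A small return value suggests the same place (illustrative sketch).
    """
    A = img_a.astype(np.float32)   # avoid uint8 wraparound in subtraction
    B = img_b.astype(np.float32)
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        a = A[:, max(0, s):A.shape[1] + min(0, s)]
        b = B[:, max(0, -s):B.shape[1] + min(0, -s)]
        best = min(best, float(np.mean(np.abs(a - b))))
    return best
```

Working on whole low-resolution images rather than sparse keypoints is what makes this approach tolerant of blur and poor exposure, where feature detectors fail.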
Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions
Visual localization enables autonomous vehicles to navigate in their
surroundings and augmented reality applications to link virtual to real worlds.
Practical visual localization approaches need to be robust to a wide variety of
viewing conditions, including day-night changes, as well as weather and seasonal
variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera
pose estimates. In this paper, we introduce the first benchmark datasets
specifically designed for analyzing the impact of such factors on visual
localization. Using carefully created ground truth poses for query images taken
under a wide variety of conditions, we evaluate the impact of various factors
on 6DOF camera pose estimation accuracy through extensive experiments with
state-of-the-art localization approaches. Based on our results, we draw
conclusions about the difficulty of different conditions, showing that
long-term localization is far from solved, and propose promising avenues for
future work, including sequence-based localization approaches and the need for
better local features. Our benchmark is available at visuallocalization.net.
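The two quantities such 6DOF benchmarks threshold, position error in meters and orientation error in degrees, can be computed as sketched below, assuming 3x3 rotation matrices and translation vectors for the ground truth and the estimate. The function name is ours; the benchmark's exact accuracy bins are not reproduced here.

```python
import numpy as np

def pose_errors(R_gt, t_gt, R_est, t_est):
    """Position error (m) and orientation error (deg) between a ground
    truth and an estimated camera pose (illustrative sketch)."""
    t_err = float(np.linalg.norm(t_gt - t_est))
    # Angle of the residual rotation R_gt^T R_est via the trace formula.
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    r_err = float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return t_err, r_err
```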
- …