Observability analysis and optimal sensor placement in stereo radar odometry
Localization is the key perceptual process closing the loop of autonomous navigation, allowing self-driving vehicles to operate in a deliberate way. To ensure robust localization, autonomous vehicles have to implement redundant estimation processes, ideally independent in the underlying sensing physics. This paper presents a stereo radar odometry system that can serve as such a redundant process, complementary to other odometry estimators, providing robustness for long-term operability. The work is novel with respect to previously published methods in that it contains: (i) a detailed formulation of the Doppler error and its associated uncertainty; (ii) an observability analysis that gives the minimal conditions to infer a 2D twist from radar readings; and (iii) a numerical analysis for optimal vehicle sensor placement. Experimental results that validate the theoretical insights are also detailed.
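The estimation step behind point (ii) is easy to illustrate: for a static target, each radar return constrains the projection of the sensor's velocity onto the ray toward the target, so stacking returns yields a linear least-squares problem in the 2D twist (vx, vy, ω). The sketch below is illustrative only, assuming static targets and known sensor mounting poses; the function and variable names are not the paper's notation.

```python
import numpy as np

def estimate_twist_2d(detections, sensors):
    """Least-squares 2D twist (vx, vy, omega) from radar Doppler returns.

    detections: iterable of (sensor_id, azimuth, doppler) for static targets.
    sensors: dict sensor_id -> (px, py, yaw), mounting pose in the body frame.
    """
    A, b = [], []
    for sid, az, vr in detections:
        px, py, yaw = sensors[sid]
        # Unit ray from sensor to target, expressed in the body frame.
        d = np.array([np.cos(yaw + az), np.sin(yaw + az)])
        # Sensor velocity induced by the body twist: v_s = (vx - w*py, vy + w*px).
        # For a static target the measured Doppler is -d . v_s, giving one
        # linear row in (vx, vy, w):
        A.append([d[0], d[1], d[1] * px - d[0] * py])
        b.append(-vr)
    twist, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return twist  # (vx, vy, omega)
```

At least three well-conditioned rows are needed for the stacked system to be full rank; the minimal conditions under which this holds are the kind of result the paper's observability analysis formalizes.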
Satellite Navigation for the Age of Autonomy
Global Navigation Satellite Systems (GNSS) brought navigation to the masses.
Coupled with smartphones, the blue dot in the palm of our hands has forever
changed the way we interact with the world. Looking forward, cyber-physical
systems such as self-driving cars and aerial mobility are pushing the limits of
what localization technologies including GNSS can provide. This autonomous
revolution requires a solution that supports safety-critical operation,
centimeter positioning, and cyber-security for millions of users. To meet these
demands, we propose a navigation service from Low Earth Orbiting (LEO)
satellites, which delivers precision in part through faster motion, higher-power
signals for added robustness to interference, constellation autonomous
integrity monitoring for integrity, and encryption/authentication for
resistance to spoofing attacks. This paradigm is enabled by the 'New Space'
movement, where highly capable satellites and components are now built on
assembly lines and launch costs have decreased by more than tenfold. Such a
ubiquitous positioning service enables a consistent and secure standard where
trustworthy information can be validated and shared, extending the electronic
horizon from sensor line of sight to an entire city. This enables the
situational awareness needed for truly safe operation to support autonomy at
scale.
Comment: 11 pages, 8 figures, 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS)
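The "higher power signals" argument follows directly from the link budget: a LEO satellite is roughly 35-40 times closer than a GNSS satellite in medium Earth orbit, and free-space path loss grows with distance squared. A back-of-the-envelope check, using illustrative round-number altitudes rather than figures from the paper:

```python
import math

def path_loss_db(distance_m, freq_hz=1.57542e9):
    """Free-space path loss: 20*log10(4*pi*d*f/c), here at the GPS L1 frequency."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Zenith distances as a rough proxy for range: ~550 km for a LEO
# constellation vs ~20,200 km for GPS in medium Earth orbit (illustrative).
leo, meo = 550e3, 20_200e3
print(f"LEO link-budget advantage: {path_loss_db(meo) - path_loss_db(leo):.1f} dB")
# ~31 dB less path loss; the difference depends only on the distance ratio.
```

That margin is where much of the claimed headroom for interference robustness comes from, before any increase in transmit power.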
Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions
Visual localization enables autonomous vehicles to navigate in their
surroundings and augmented reality applications to link virtual to real worlds.
Practical visual localization approaches need to be robust to a wide variety of
viewing conditions, including day-night changes, as well as weather and seasonal
variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera
pose estimates. In this paper, we introduce the first benchmark datasets
specifically designed for analyzing the impact of such factors on visual
localization. Using carefully created ground truth poses for query images taken
under a wide variety of conditions, we evaluate the impact of various factors
on 6DOF camera pose estimation accuracy through extensive experiments with
state-of-the-art localization approaches. Based on our results, we draw
conclusions about the difficulty of different conditions, showing that
long-term localization is far from solved, and propose promising avenues for
future work, including sequence-based localization approaches and the need for
better local features. Our benchmark is available at visuallocalization.net.
Comment: Accepted to CVPR 2018 as a spotlight
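For reference, the accuracy measure behind such 6DOF evaluations reduces to two numbers per query: the distance between estimated and ground-truth camera centers, and the angle of the relative rotation. A minimal sketch, assuming the common world-to-camera convention x_cam = R x_world + t:

```python
import numpy as np

def pose_error(R_est, t_est, R_gt, t_gt):
    """Translation (meters) and rotation (degrees) error between two camera poses."""
    # Camera center under the world-to-camera convention: c = -R^T t.
    c_est = -R_est.T @ t_est
    c_gt = -R_gt.T @ t_gt
    t_err = np.linalg.norm(c_est - c_gt)
    # Angle of the relative rotation R_est * R_gt^T, clipped for numerical safety.
    cos_a = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return t_err, r_err
```

Benchmarks of this kind then typically report the fraction of queries localized within coarse-to-fine thresholds on these two errors.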
Understanding the Limitations of CNN-based Absolute Camera Pose Regression
Visual localization is the task of accurate camera pose estimation in a known
scene. It is a key problem in computer vision and robotics, with applications
including self-driving cars, Structure-from-Motion, SLAM, and Mixed Reality.
Traditionally, the localization problem has been tackled using 3D geometry.
Recently, end-to-end approaches based on convolutional neural networks have
become popular. These methods learn to directly regress the camera pose from an
input image. However, they do not achieve the same level of pose accuracy as 3D
structure-based methods. To understand this behavior, we develop a theoretical
model for camera pose regression. We use our model to predict failure cases for
pose regression techniques and verify our predictions through experiments. We
furthermore use our model to show that pose regression is more closely related
to pose approximation via image retrieval than to accurate pose estimation via
3D structure. A key result is that current approaches do not consistently
outperform a handcrafted image retrieval baseline. This clearly shows that
additional research is needed before pose regression algorithms are ready to
compete with structure-based methods.
Comment: Initial version of a paper accepted to CVPR 2019
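The "handcrafted image retrieval baseline" the abstract refers to can be as simple as a nearest-neighbour lookup over global image descriptors: approximate the query pose by the pose of the most similar database image. A minimal sketch, where the L2-normalized descriptors and helper names are assumptions for illustration, not the paper's exact baseline:

```python
import numpy as np

def retrieval_pose(query_desc, db_descs, db_poses, k=1):
    """Retrieval baseline: return the pose of the most similar database image.

    query_desc: (D,) L2-normalized global descriptor of the query image.
    db_descs: (N, D) L2-normalized descriptors; db_poses: (N, ...) poses.
    """
    # Cosine similarity reduces to a dot product for normalized descriptors.
    sims = db_descs @ query_desc
    idx = np.argsort(-sims)[:k]
    # k=1 returns the top match's pose; k>1 would require pose interpolation,
    # which is roughly what pose regression learns to do implicitly.
    return db_poses[idx[0]]
```

If a learned regressor cannot consistently beat this lookup, it is effectively doing retrieval plus interpolation rather than geometric pose estimation, which is the paper's key observation.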
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are crucial exteroceptive sensors for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization,
obstacle detection, etc. need to be adapted to take full advantage of the
availability of multiple cameras rather than treat each camera individually. In
addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, as well as detect obstacles based on
real-time depth map extraction.
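A taste of why fisheye images need adapted pipelines: the pinhole back-projection breaks down as the incidence angle approaches 90 degrees, so fisheye processing uses models such as the equidistant projection r = f*theta instead. The sketch below is a generic equidistant unprojection, not the calibrated camera model used in the V-Charge pipeline:

```python
import numpy as np

def fisheye_unproject(u, v, fx, fy, cx, cy):
    """Back-project a pixel to a unit ray under the equidistant model r = f*theta."""
    # Normalized image coordinates; under this model their radial distance
    # equals the incidence angle theta (in radians).
    mx, my = (u - cx) / fx, (v - cy) / fy
    r = np.hypot(mx, my)
    if r < 1e-9:
        return np.array([0.0, 0.0, 1.0])  # pixel at the optical axis
    theta = r
    s = np.sin(theta) / r
    # Ray keeps the pixel's azimuth (mx/r, my/r) and tilts by theta off-axis.
    return np.array([mx * s, my * s, np.cos(theta)])
```

Unlike the pinhole model, this mapping stays well-behaved out to (and beyond) a 180-degree field of view, which is what makes full surround coverage with few cameras practical.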