17 research outputs found
Satellite Image Based Cross-view Localization for Autonomous Vehicle
Existing spatial localization techniques for autonomous vehicles mostly use a
pre-built 3D-HD map, often constructed using a survey-grade 3D mapping vehicle,
which is not only expensive but also laborious. This paper shows that by using
an off-the-shelf high-definition satellite image as a ready-to-use map, we are
able to achieve cross-view vehicle localization with satisfactory accuracy,
providing a cheaper and more practical route to localization. While the
utilization of satellite imagery for cross-view localization is an established
concept, the conventional methodology focuses primarily on image retrieval.
This paper introduces a novel approach to cross-view localization that departs
from the conventional image retrieval method. Specifically, our method develops
(1) a Geometric-align Feature Extractor (GaFE) that leverages measured 3D
points to bridge the geometric gap between ground and overhead views, (2) a
Pose Aware Branch (PAB) adopting a triplet loss to encourage pose-aware feature
extraction, and (3) a Recursive Pose Refine Branch (RPRB) using the
Levenberg-Marquardt (LM) algorithm to align the initial pose towards the true
vehicle pose iteratively. Our method is validated on KITTI and Ford Multi-AV
Seasonal datasets as ground view and Google Maps as the satellite view. The
results demonstrate the superiority of our method in cross-view localization
with median spatial and angular errors within 1 meter and 1°, respectively.
Comment: Accepted by ICRA202
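The Recursive Pose Refine Branch above uses Levenberg-Marquardt iterations to pull an initial pose estimate toward the true vehicle pose. As a rough illustration of that style of update (not the paper's actual branch, which operates on learned features), the toy sketch below refines a 2D pose (x, y, θ) that aligns two point sets; the function names, the numerical Jacobian, and the damping schedule are all illustrative assumptions:

```python
import numpy as np

def lm_refine_pose(src, dst, pose0, iters=20, lam=1e-2):
    """Toy Levenberg-Marquardt refinement of a planar pose (x, y, theta)
    aligning point set `src` onto `dst`; illustrative sketch only."""
    pose = np.array(pose0, dtype=float)

    def residuals(p):
        x, y, th = p
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        # Residual: transformed source points minus target points.
        return (src @ R.T + np.array([x, y]) - dst).ravel()

    def jacobian(p, eps=1e-6):
        # Forward-difference numerical Jacobian, for brevity.
        r0 = residuals(p)
        J = np.zeros((r0.size, 3))
        for i in range(3):
            dp = p.copy()
            dp[i] += eps
            J[:, i] = (residuals(dp) - r0) / eps
        return J

    for _ in range(iters):
        r = residuals(pose)
        J = jacobian(pose)
        # Damped normal equations: (J^T J + lam I) step = -J^T r.
        H = J.T @ J + lam * np.eye(3)
        step = np.linalg.solve(H, -J.T @ r)
        new = pose + step
        if np.sum(residuals(new) ** 2) < np.sum(r ** 2):
            pose, lam = new, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 2.0                   # reject step, increase damping
    return pose
```

The damping term interpolates between gradient descent (large `lam`) and Gauss-Newton (small `lam`), which is what lets LM converge from a rough initial pose.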
Cross-View Visual Geo-Localization for Outdoor Augmented Reality
Precise estimation of global orientation and location is critical to ensure a
compelling outdoor Augmented Reality (AR) experience. We address the problem of
geo-pose estimation by cross-view matching of query ground images to a
geo-referenced aerial satellite image database. Recently, neural network-based
methods have shown state-of-the-art performance in cross-view matching.
However, most of the prior works focus only on location estimation, ignoring
orientation, which cannot meet the requirements in outdoor AR applications. We
propose a new transformer neural network-based model and a modified triplet
ranking loss for joint location and orientation estimation. Experiments on
several benchmark cross-view geo-localization datasets show that our model
achieves state-of-the-art performance. Furthermore, we present an approach to
extend the single image query-based geo-localization approach by utilizing
temporal information from a navigation pipeline for robust continuous
geo-localization. Experimentation on several large-scale real-world video
sequences demonstrates that our approach enables high-precision and stable AR
insertion.
Comment: IEEE VR 202
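A triplet ranking loss of the general kind mentioned here pulls an anchor embedding toward its matching (positive) sample and pushes it away from a non-matching (negative) one by at least a margin. The following is a minimal generic sketch, not the paper's modified loss; the squared-Euclidean distance and the margin value are arbitrary assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Generic margin-based triplet ranking loss on embedding vectors.
    Loss is zero once the negative is farther than the positive by `margin`."""
    def sqdist(a, b):
        return np.sum((a - b) ** 2, axis=-1)
    return np.maximum(0.0, sqdist(anchor, positive) - sqdist(anchor, negative) + margin)
```

In cross-view matching the anchor is typically the ground-image embedding, the positive the aerial patch at the true geo-pose, and the negatives aerial patches elsewhere (or, for joint orientation estimation, the right patch at the wrong heading).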
Beyond Geo-localization: Fine-grained Orientation of Street-view Images by Cross-view Matching with Satellite Imagery
Street-view imagery provides us with novel experiences to explore different
places remotely. Carefully calibrated street-view images (e.g. Google Street
View) can be used for various downstream tasks, e.g. navigation and map
feature extraction. As personal high-quality cameras have become much more affordable
and portable, an enormous amount of crowdsourced street-view images are
uploaded to the internet, but commonly with missing or noisy sensor
information. To bring this hidden treasure to "ready-to-use" status,
recovering the missing location and the camera orientation angles are two
equally important tasks. Recent methods have achieved high performance on
geo-localization of street-view images by cross-view matching with a pool of
geo-referenced satellite imagery. However, most of the existing works focus
more on geo-localization than estimating the image orientation. In this work,
we re-state the importance of finding fine-grained orientation for street-view
images, formally define the problem and provide a set of evaluation metrics to
assess the quality of the orientation estimation. We propose two methods to
improve the granularity of the orientation estimation, achieving 82.4% and
72.3% accuracy for images with estimated angle errors below 2 degrees for CVUSA
and CVACT datasets, corresponding to 34.9% and 28.2% absolute improvement
compared to previous works. Integrating fine-grained orientation estimation
into training also improves geo-localization performance, giving top-1 recall
of 95.5%/85.5% and 86.8%/80.4% for the orientation-known/unknown tests on the
two datasets.
Comment: This paper has been accepted by ACM Multimedia 2022. The version
contains additional supplementary material
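One common way to recover orientation in cross-view matching is to compare a ground-view feature strip against a polar-transformed satellite feature strip by circular correlation along the azimuth axis, taking the best shift as the heading. The sketch below illustrates that general idea (not necessarily the exact mechanism of the methods above); the feature shapes and the FFT-based correlation are illustrative assumptions:

```python
import numpy as np

def estimate_orientation(ground_feat, sat_feat):
    """Estimate the azimuth offset (degrees) by which `sat_feat` is
    circularly shifted relative to `ground_feat`.
    Both inputs: (C, W) feature maps, W bins spanning 360 degrees."""
    C, W = ground_feat.shape
    # Circular cross-correlation via FFT, summed over feature channels:
    # corr[s] peaks at the shift that best aligns the two strips.
    corr = np.fft.ifft(
        np.fft.fft(sat_feat, axis=1) * np.conj(np.fft.fft(ground_feat, axis=1)),
        axis=1).real.sum(axis=0)
    shift = int(np.argmax(corr))
    return shift * 360.0 / W
```

The FFT makes the search over all W candidate headings O(W log W) per channel instead of O(W²), which is why this trick is popular for exhaustive orientation sweeps.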
Wide-Area Geolocalization with a Limited Field of View Camera in Challenging Urban Environments
Cross-view geolocalization, a supplement or replacement for GPS, localizes an
agent within a search area by matching ground-view images to overhead images.
Significant progress has been made assuming a panoramic ground camera.
The high complexity and cost of panoramic cameras make non-panoramic cameras
more widely applicable, but also more challenging to localize with, since they
yield less scene overlap between ground and overhead images. This paper presents Restricted FOV
Wide-Area Geolocalization (ReWAG), a cross-view geolocalization approach that
combines a neural network and particle filter to globally localize a mobile
agent with only odometry and a non-panoramic camera. ReWAG creates pose-aware
embeddings and provides a strategy to incorporate particle pose into the
Siamese network, improving localization accuracy by a factor of 100 compared to
a vision transformer baseline. This extended work also presents ReWAG*, which
improves upon ReWAG's generalization ability in previously unseen environments.
ReWAG* repeatedly converges accurately on a dataset of images we have collected
in Boston with a 72 degree field of view (FOV) camera, a location and FOV that
ReWAG* was not trained on.
Comment: 10 pages, 16 figures. Extension of ICRA 2023 paper arXiv:2209.1185
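A particle filter of the kind ReWAG pairs with its Siamese network maintains a set of weighted pose hypotheses: odometry propagates them, an image-similarity score reweights them, and resampling concentrates them around likely locations. Below is a generic planar sketch under assumed interfaces; the `similarity_fn` callback stands in for the learned ground-to-overhead embedding comparison, and none of this is ReWAG's actual implementation:

```python
import numpy as np

def particle_filter_step(particles, weights, odom, similarity_fn,
                         noise=0.5, rng=None):
    """One predict-update-resample step of a planar particle filter.
    particles: (N, 3) array of (x, y, theta) hypotheses.
    similarity_fn: maps one particle pose to a nonnegative likelihood score
    (here, a stand-in for comparing ground and overhead embeddings)."""
    rng = rng if rng is not None else np.random.default_rng()
    N = len(particles)
    # Predict: apply the odometry increment in each particle's own frame,
    # then add process noise.
    dx, dy, dth = odom
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles = particles + np.column_stack(
        [c * dx - s * dy, s * dx + c * dy, np.full(N, dth)])
    particles = particles + rng.normal(scale=noise, size=particles.shape)
    # Update: reweight each hypothesis by how well the current ground image
    # matches the overhead map at that pose.
    weights = weights * np.array([similarity_fn(p) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
```

Repeating this step fuses odometry with per-frame matching scores, which is what lets the filter stay locked on even when individual cross-view matches are ambiguous.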