Domain-invariant Similarity Activation Map Contrastive Learning for Retrieval-based Long-term Visual Localization
Visual localization is a crucial component in mobile robot and autonomous
driving applications, and image retrieval is an efficient and effective
technique for image-based localization. However, drastic variability in
environmental conditions, e.g., illumination, seasonal, and weather changes,
severely degrades retrieval-based visual localization and makes it a
challenging problem. In this work, a general architecture is first formulated
probabilistically to extract domain-invariant features through multi-domain
image translation. A novel gradient-weighted similarity activation mapping
(Grad-SAM) loss is then incorporated for finer localization with higher
accuracy. We also propose a new adaptive triplet loss to boost contrastive
learning of the embedding in a self-supervised manner. The final
coarse-to-fine image retrieval pipeline is implemented as a sequential
combination of the models trained without and with the Grad-SAM loss.
Extensive experiments validate the effectiveness of the proposed approach on
the CMU-Seasons dataset.
The strong generalization ability of our approach is verified on the RobotCar
dataset using models pre-trained on the urban part of the CMU-Seasons dataset.
Our performance is on par with, or even outperforms, state-of-the-art
image-based localization baselines at medium and high precision, especially in
challenging environments with illumination variation, vegetation, and
night-time images. The code and pretrained models are available at
https://github.com/HanjiangHu/DISAM.

Comment: Published in IEEE/CAA Journal of Automatica Sinica
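To give a concrete sense of the contrastive objective the abstract refers to, the sketch below shows a generic triplet loss with a distance-dependent margin. This is only an illustration of the general idea: the margin rule, the function name `adaptive_triplet_loss`, and all parameters are assumptions for this example, not the paper's exact formulation (see the linked repository for the actual implementation).

```python
import numpy as np

def adaptive_triplet_loss(anchor, positive, negative, base_margin=0.5):
    """Triplet loss with an illustrative adaptive margin (assumed form).

    anchor, positive, negative: arrays of shape (batch, dim) holding
    image embeddings; positives share the anchor's place, negatives do not.
    """
    # Euclidean distances in embedding space
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    # Hypothetical adaptive margin: shrinks as the negative gets relatively
    # closer, so already-separated (easy) triplets contribute little.
    margin = base_margin * d_neg / (d_pos + d_neg + 1e-8)
    # Standard hinge: penalize triplets where the positive is not closer
    # than the negative by at least the margin.
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

With a negative far from the anchor the hinge is inactive and the loss is zero; with a negative closer than the positive the loss is positive, pushing the embedding to reorder them.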