Localization from semantic observations via the matrix permanent
Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model that encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with traditional lidar-based geometric Monte Carlo localization.
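As context for the reduction described above: the matrix permanent is #P-hard to compute exactly, which is why the paper relies on a polynomial-time approximation. For small matrices, though, the exact permanent can be computed with Ryser's inclusion-exclusion formula. A minimal illustrative sketch (not the authors' code):

```python
from itertools import combinations

def permanent(A):
    """Exact matrix permanent via Ryser's formula, O(2^n * n) per subset.

    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum_{j in S} A[i][j])
    """
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For a 2x2 matrix this reduces to `a*d + b*c` (the determinant with all signs positive), which is why data-association likelihoods, being sums over all matchings, take this form.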
A Hierarchical Dual Model of Environment- and Place-Specific Utility for Visual Place Recognition
Visual Place Recognition (VPR) approaches have typically attempted to match
places by identifying visual cues, image regions or landmarks that have high
"utility" in identifying a specific place. But this concept of utility is not
singular; rather, it can take a range of forms. In this paper, we present a
novel approach to deduce two key types of utility for VPR: the utility of
visual cues 'specific' to an environment, and to a particular place. We employ
contrastive learning principles to estimate both the environment- and
place-specific utility of Vector of Locally Aggregated Descriptors (VLAD)
clusters in an unsupervised manner, which is then used to guide local feature
matching through keypoint selection. By combining these two utility measures,
our approach achieves state-of-the-art performance on three challenging
benchmark datasets, while simultaneously reducing the required storage and
compute time. We provide further analysis demonstrating that unsupervised
cluster selection yields semantically meaningful results, that finer-grained
categorization often has higher utility for VPR than high-level semantic
categorization (e.g. building, road), and characterise how these two
utility measures vary across different places and environments. Source code is
made publicly available at https://github.com/Nik-V9/HEAPUtil
Comment: Accepted to IEEE Robotics and Automation Letters (RA-L) and IROS 202
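One way to picture the utility-guided keypoint selection described above: given a per-cluster utility score, keep only the local features assigned to high-utility VLAD clusters. The sketch below uses hypothetical names and plain NumPy; it is an illustration of the idea, not the released HEAPUtil code:

```python
import numpy as np

def select_keypoints_by_cluster_utility(descriptors, centroids, utility, top_k=8):
    """Return indices of local descriptors assigned to the top_k
    highest-utility VLAD clusters (illustrative sketch only).

    descriptors: (N, D) local feature descriptors
    centroids:   (K, D) VLAD cluster centroids
    utility:     (K,)   per-cluster utility scores (e.g. combined
                 environment- and place-specific utility)
    """
    # Assign each local descriptor to its nearest cluster centroid.
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assignment = dists.argmin(axis=1)
    # Keep only descriptors falling in the top_k highest-utility clusters.
    keep = set(np.argsort(utility)[-top_k:])
    mask = np.array([a in keep for a in assignment])
    return np.flatnonzero(mask)
```

Filtering keypoints this way is what reduces both storage and matching time: low-utility clusters (e.g. features on ubiquitous structures) are simply never matched.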
Beyond Controlled Environments: 3D Camera Re-Localization in Changing Indoor Scenes
Long-term camera re-localization is an important task with numerous computer
vision and robotics applications. Whilst various outdoor benchmarks exist that
target lighting, weather and seasonal changes, far less attention has been paid
to appearance changes that occur indoors. This has led to a mismatch between
popular indoor benchmarks, which focus on static scenes, and indoor
environments that are of interest for many real-world applications. In this
paper, we adapt 3RScan - a recently introduced indoor RGB-D dataset designed
for object instance re-localization - to create RIO10, a new long-term camera
re-localization benchmark focused on indoor scenes. We propose new metrics for
evaluating camera re-localization and explore how state-of-the-art camera
re-localizers perform according to these metrics. We also examine in detail how
different types of scene change affect the performance of different methods,
based on novel ways of detecting such changes in a given RGB-D frame. Our
results clearly show that long-term indoor re-localization is an unsolved
problem. Our benchmark and tools are publicly available at
waldjohannau.github.io/RIO10
Comment: ECCV 2020, project website https://waldjohannau.github.io/RIO10
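The paper proposes new metrics of its own; as background, the most common baseline measures for camera re-localization are the translation error and the rotation error between an estimated and a ground-truth pose. A generic sketch (these are standard quantities, not RIO10's specific metrics):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (map units) and rotation error (degrees)
    between an estimated camera pose (R_est, t_est) and ground truth.
    """
    # Euclidean distance between camera positions.
    t_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
    # Angle of the relative rotation, from trace(R) = 1 + 2 cos(theta).
    R_rel = np.asarray(R_est).T @ np.asarray(R_gt)
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err_deg = np.degrees(np.arccos(cos_theta))
    return t_err, r_err_deg
```

Benchmarks typically report the fraction of query frames localized within thresholds such as 5 cm / 5 degrees on these two quantities.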