Appearance-based SLAM in a network space
The task of Simultaneous Localization and Mapping (SLAM) is regularly performed in network spaces consisting of a set of corridors connecting locations in the space. Empirical research has demonstrated that such spaces generally exhibit common structural properties relating to aspects such as corridor length. Consequently, there is potential to improve performance by placing priors over these properties. In this work we propose an appearance-based SLAM method which explicitly models the space as a network and in turn uses this model as a platform for placing priors over its structure. Relative to existing works, which implicitly assume a network space and place priors over its structure, this approach allows a more formal placement of priors. In order to achieve robustness, the proposed method is implemented within a multi-hypothesis tracking framework. Results achieved on two publicly available datasets demonstrate that the proposed method outperforms a current state-of-the-art appearance-based SLAM method.
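The idea of weighting map hypotheses by a structural prior can be illustrated with a small sketch. This is a generic illustration only: the log-normal form and the `mu`/`sigma` parameters are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def hypothesis_weights(corridor_lengths, log_likelihoods, mu=2.5, sigma=0.6):
    """Combine each map hypothesis's measurement log-likelihood with a
    log-normal prior over its implied corridor length, then normalise.
    mu/sigma are illustrative prior parameters, not from the paper."""
    lengths = np.asarray(corridor_lengths, float)
    # log-density of a log-normal prior evaluated at each corridor length
    log_prior = (-np.log(lengths * sigma * np.sqrt(2.0 * np.pi))
                 - (np.log(lengths) - mu) ** 2 / (2.0 * sigma ** 2))
    log_post = np.asarray(log_likelihoods, float) + log_prior
    log_post -= log_post.max()          # subtract max for numerical stability
    w = np.exp(log_post)
    return w / w.sum()
```

With equal measurement likelihoods, a hypothesis whose corridor length is typical under the prior receives more weight than one with an implausibly long corridor.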
Semi-supervised Vector-Quantization in Visual SLAM using HGCN
In this paper, two semi-supervised appearance-based loop closure detection
techniques, HGCN-FABMAP and HGCN-BoW, are introduced. Furthermore, an
extension to ORB-SLAM, a current state-of-the-art SLAM localization
algorithm, is presented. The proposed HGCN-FABMAP method is implemented in an
offline manner, incorporating a Bayesian probabilistic scheme for loop-closure
decision making. Specifically, we let a Hyperbolic Graph Convolutional Neural
Network (HGCN) operate over the SURF feature graph space and perform the
vector quantization part of the SLAM procedure. This step was previously
performed in an unsupervised manner using algorithms such as hierarchical
k-means and k-means++. The main advantage of using an HGCN is that it scales
linearly in the number of graph edges. Experimental results show that the
HGCN-FABMAP algorithm requires far more cluster centroids than HGCN-ORB;
otherwise, it fails to detect loop closures. We therefore consider HGCN-ORB
more efficient in terms of memory consumption, and we conclude that HGCN-BoW
and HGCN-FABMAP are superior to the other algorithms.
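The unsupervised quantization step that the HGCN variants replace can be sketched as plain k-means over local descriptors, followed by bag-of-words histogram construction. This is a minimal illustration of that baseline step, not the paper's HGCN method; function names are illustrative.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Cluster local feature descriptors (e.g. SURF) into k visual words
    with plain k-means -- the unsupervised step the abstract's HGCN
    variant replaces."""
    rng = np.random.default_rng(seed)
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        dists = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

def bow_histogram(descriptors, centroids):
    """Quantize one image's descriptors against the vocabulary to get a
    normalised bag-of-words histogram."""
    dists = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()
```

The resulting histograms are what FAB-MAP-style loop-closure detectors compare between images.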
Training a Convolutional Neural Network for Appearance-Invariant Place Recognition
Place recognition is one of the most challenging problems in computer vision,
and has become a key part in mobile robotics and autonomous driving
applications for performing loop closure in visual SLAM systems. Moreover, the
difficulty of recognizing a revisited location increases with appearance
changes caused, for instance, by weather or illumination variations, which
hinders the long-term application of such algorithms in real environments. In
this paper we present a convolutional neural network (CNN), trained for the
first time with the purpose of recognizing revisited locations under severe
appearance changes, which maps images to a low dimensional space where
Euclidean distances represent place dissimilarity. In order for the network to
learn the desired invariances, we train it with triplets of images selected
from datasets which present a challenging variability in visual appearance. The
triplets are selected in such a way that two samples are from the same
location and the third one is taken from a different place. We validate our
system through extensive experimentation, demonstrating better performance
than state-of-the-art algorithms on a number of popular datasets.
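The triplet objective described above can be sketched on embedding vectors directly. This is the standard hinge-style triplet loss; the paper's exact loss formulation and margin value are assumptions here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors: pull same-place
    pairs together and push different-place pairs at least `margin`
    further apart than the positive pair. Margin value is illustrative."""
    d_pos = np.linalg.norm(anchor - positive)  # same-place distance
    d_neg = np.linalg.norm(anchor - negative)  # different-place distance
    return max(d_pos - d_neg + margin, 0.0)
```

When the network satisfies the margin, the loss is zero and Euclidean distance in the embedding space acts as the place-dissimilarity measure the abstract describes.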
LookUP: Vision-Only Real-Time Precise Underground Localisation for Autonomous Mining Vehicles
A key capability for autonomous underground mining vehicles is real-time
accurate localisation. While significant progress has been made, currently
deployed systems have several limitations ranging from dependence on costly
additional infrastructure to failure of both visual and range sensor-based
techniques in highly aliased or visually challenging environments. In our
previous work, we presented a lightweight coarse vision-based localisation
system that could map and then localise to within a few metres in an
underground mining environment. However, this level of precision is
insufficient for providing a cheaper, more reliable vision-based automation
alternative to current range sensor-based systems. Here we present a new
precision localisation system dubbed "LookUP", which learns a
neural-network-based pixel sampling strategy for estimating homographies based
on ceiling-facing cameras without requiring any manual labelling. This new
system runs in real time on limited computational resources and is
demonstrated on two different underground mine sites, achieving real-time
performance at ~5 frames per second and a much improved average localisation
error of ~1.2 metres.
Comment: 7 pages, 7 figures, accepted for IEEE ICRA 201