5,798 research outputs found
A minimalistic approach to appearance-based visual SLAM
This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects, and describes the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using, for example, multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
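The pose-graph idea described above — nodes as poses, links as Gaussian relative measurements, iteratively relaxed into a consistent map — can be illustrated with a minimal sketch. This is not the paper's implementation; it uses hypothetical scalar (1D) poses and a simple precision-weighted Gauss-Seidel relaxation, purely to show how inconsistent link measurements get distributed according to their variances.

```python
import numpy as np

# Minimal sketch (not the paper's code): 1D poses, each link stores a
# relative measurement and a variance (the "Gaussian link"). Each node is
# repeatedly moved to the precision-weighted average of the predictions
# made by its neighbouring links.

def relax(poses, links, iters=100):
    """poses: array of scalar poses; links: list of (i, j, delta, var)."""
    poses = poses.copy()
    for _ in range(iters):
        for k in range(1, len(poses)):          # node 0 is the fixed anchor
            num, den = 0.0, 0.0
            for i, j, delta, var in links:
                w = 1.0 / var                   # precision weight
                if j == k:                      # link predicts pose[k] forward
                    num += w * (poses[i] + delta)
                    den += w
                elif i == k:                    # link predicts pose[k] backward
                    num += w * (poses[j] - delta)
                    den += w
            if den > 0:
                poses[k] = num / den
    return poses

# Chain 0->1->2 plus a slightly inconsistent loop-closure link 0->2;
# relaxation spreads the 0.2 residual according to the link variances.
init = np.array([0.0, 0.0, 0.0])
links = [(0, 1, 1.0, 0.1), (1, 2, 1.0, 0.1), (0, 2, 2.2, 0.1)]
relaxed = relax(init, links)
```

With equal variances the residual is shared evenly among the three links, which is the qualitative behaviour the relaxation algorithm is after.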
CNN for IMU Assisted Odometry Estimation using Velodyne LiDAR
We introduce a novel method for odometry estimation from 3D LiDAR scans using convolutional neural networks. The original sparse data are encoded into 2D matrices for training the proposed networks and for prediction. Our networks show significantly better precision in the estimation of translational motion parameters compared with the state-of-the-art method LOAM, while achieving real-time performance. Together with IMU support, high-quality odometry estimation and LiDAR data registration are realized. Moreover, we propose alternative CNNs trained for the prediction of rotational motion parameters, achieving results comparable with the state of the art. The proposed method can replace wheel encoders in odometry estimation or supplement missing GPS data when the GNSS signal is absent (e.g. during indoor mapping). Our solution brings real-time performance and precision that are useful for providing an online preview of the mapping results and verifying map completeness in real time.
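The abstract's key preprocessing step — encoding a sparse 3D scan into a dense 2D matrix a CNN can consume — is commonly done by spherical projection into a range image. The sketch below is an assumed illustration of that general technique, not the paper's exact encoding; the resolution and vertical field-of-view values are hypothetical defaults in the style of a Velodyne HDL-64E.

```python
import numpy as np

# Hypothetical sketch of the sparse-to-2D encoding: project each 3D point
# into an H x W grid (vertical rings x horizontal azimuth bins) and store
# its range. The dense matrix can then be fed to a CNN.

def scan_to_range_image(points, h=64, w=360, fov_up=2.0, fov_down=-24.8):
    img = np.zeros((h, w), dtype=np.float32)
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up - fov_down
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range of each point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up - pitch) / fov * h).astype(int).clip(0, h - 1)
    img[v, u] = r
    return img

# A single point 10 m straight ahead on the sensor's horizontal plane
# lands in one cell of the 64 x 360 range image.
pts = np.array([[10.0, 0.0, 0.0]])
img = scan_to_range_image(pts)
```

When two points fall in the same cell the later one wins here; a real encoder would typically keep the nearer range.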
Visual SLAM for flying vehicles
The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost down-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate either with a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments. © 2008 IEEE
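The distinguishing idea of PROSAC, mentioned above, is that hypotheses are not sampled uniformly as in RANSAC but drawn from a progressively growing pool of the top-ranked matches. The sketch below is a deliberately simplified, hypothetical illustration fitting a 1D translation between matched features; it is not the paper's variant, only the sampling schedule it builds on.

```python
import random

# Simplified PROSAC-style sketch (not the paper's algorithm): matches are
# sorted by descending quality score, and hypotheses are drawn from a pool
# of top-ranked matches that grows over the iterations, so good matches
# are tried first.

def prosac_translation(matches, thresh=0.5, iters=200, seed=0):
    """matches: list of (dx, score), sorted by descending score.
    Fits a 1D translation; returns (best_dx, inlier_count)."""
    rng = random.Random(seed)
    best_dx, best_inliers = None, -1
    for t in range(iters):
        # pool grows linearly from the 2 best matches toward the full set
        pool = max(2, 2 + (t * (len(matches) - 2)) // iters)
        dx = matches[rng.randrange(pool)][0]        # hypothesis
        inliers = sum(1 for m, _ in matches if abs(m - dx) < thresh)
        if inliers > best_inliers:
            best_dx, best_inliers = dx, inliers
    return best_dx, best_inliers

# Three consistent matches near dx = 1.0 plus two low-scoring outliers:
# the first hypotheses already come from the high-quality matches.
matches = [(1.0, 0.9), (1.1, 0.85), (0.9, 0.8), (5.0, 0.3), (-3.0, 0.2)]
dx, count = prosac_translation(matches)
```

Because the consistent matches also rank highest, a correct hypothesis is typically found within the first few iterations — the practical advantage of PROSAC over uniform sampling.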
Finding Your Way Back: Comparing Path Odometry Algorithms for Assisted Return.
We present a comparative analysis of inertial-based odometry algorithms for the purpose of assisted return. An assisted return system facilitates backtracking of a path previously taken, and can be particularly useful for blind pedestrians. We introduce a new algorithm for path matching, and test it in simulated assisted return tasks with data from WeAllWalk, the only existing data set with inertial data recorded from blind walkers. We consider two odometry systems, one based on deep learning (RoNIN), and the second based on robust turn detection and step counting. Our results show that the best path matching results are obtained using the turns/steps odometry system.
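A turns/steps odometry system of the kind compared above reduces to two primitives: counting steps from the accelerometer and tracking heading from the gyroscope. The sketch below is a hypothetical minimal version — threshold-crossing step detection and yaw-rate integration — not the paper's detector, which the abstract describes only as "robust turn detection and step counting".

```python
import math

# Hypothetical sketch: steps are counted as rising-edge crossings of an
# acceleration-magnitude threshold, and heading is the time-integral of
# the gyroscope yaw rate. Real systems add filtering and robust turn
# detection on top of these primitives.

def steps_and_heading(acc_mag, gyro_z, dt=0.01, thresh=10.5):
    """acc_mag: acceleration magnitudes (m/s^2); gyro_z: yaw rates (rad/s)."""
    steps, above = 0, False
    for a in acc_mag:
        if a > thresh and not above:        # rising edge = one step
            steps += 1
        above = a > thresh
    heading = sum(w * dt for w in gyro_z)   # integrated yaw (rad)
    return steps, heading

# Two acceleration peaks (two steps) while turning 180 degrees in 1 s.
steps, heading = steps_and_heading(
    [9.8, 11.0, 9.8, 11.0, 9.8], [math.pi] * 100)
```

Chaining step-length estimates with the heading at each step yields the dead-reckoned path that a path-matching algorithm would then compare against the outbound trajectory.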
Network Uncertainty Informed Semantic Feature Selection for Visual SLAM
In order to facilitate long-term localization using a visual simultaneous localization and mapping (SLAM) algorithm, careful feature selection can help ensure that reference points persist over long durations and that the runtime and storage complexity of the algorithm remain consistent. We present SIVO (Semantically Informed Visual Odometry and Mapping), a novel information-theoretic feature selection method for visual SLAM which incorporates semantic segmentation and neural network uncertainty into the feature selection pipeline. Our algorithm selects points that provide the highest reduction in Shannon entropy between the entropy of the current state and the joint entropy of the state given the addition of the new feature, combined with the classification entropy of the feature from a Bayesian neural network. Each selected feature significantly reduces the uncertainty of the vehicle state and has been detected repeatedly, with high confidence, as a static object (building, traffic sign, etc.). This selection strategy generates a sparse map which can facilitate long-term localization. The KITTI odometry dataset is used to evaluate our method, and we also compare our results against ORB_SLAM2. Overall, SIVO performs comparably to the baseline method while reducing the map size by almost 70%. Published in: 2019 16th Conference on Computer and Robot Vision (CRV)
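The selection criterion described above — rank candidate features by how much their measurement would shrink the state's entropy, discounted by the feature's semantic classification entropy — can be sketched for a Gaussian state, where differential entropy depends only on the covariance determinant. This is an assumed, simplified illustration of the general idea, not SIVO's implementation; the measurement model, variances, and class entropies below are made up.

```python
import numpy as np

# Hypothetical sketch of entropy-driven feature selection for a Gaussian
# state: simulate each candidate's EKF-style update, measure the drop in
# differential entropy, and subtract the feature's semantic classification
# entropy so unreliable detections are penalised.

def gaussian_entropy(cov):
    """Differential entropy of N(mu, cov) in nats."""
    n = cov.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(cov)))

def select_feature(cov, candidates):
    """candidates: list of (H_row, meas_var, class_entropy)."""
    h0 = gaussian_entropy(cov)
    best, best_gain = None, -np.inf
    for idx, (H, r, h_cls) in enumerate(candidates):
        H = np.atleast_2d(np.asarray(H, dtype=float))
        S = H @ cov @ H.T + r                       # innovation covariance
        K = cov @ H.T @ np.linalg.inv(S)            # Kalman gain
        cov_new = cov - K @ H @ cov                 # simulated posterior
        gain = (h0 - gaussian_entropy(cov_new)) - h_cls
        if gain > best_gain:
            best, best_gain = idx, gain
    return best, best_gain

# Three made-up candidates observing the same state component: a precise,
# confidently-classified one; a precise but semantically uncertain one;
# and a noisy one. The first wins.
cov = np.eye(2)
cands = [([1.0, 0.0], 0.01, 0.1),
         ([1.0, 0.0], 0.01, 2.0),
         ([1.0, 0.0], 10.0, 0.1)]
best, gain = select_feature(cov, cands)
```

The discount term is what makes the strategy "semantically informed": a geometrically excellent point on a possibly dynamic object scores worse than a slightly noisier point on a confidently static one.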
- …