Robust Place Recognition using an Imaging Lidar
We propose a methodology for robust, real-time place recognition using an
imaging lidar, which yields image-quality high-resolution 3D point clouds.
Utilizing the intensity readings of an imaging lidar, we project the point
cloud and obtain an intensity image. ORB feature descriptors are extracted from
the image and encoded into a bag-of-words vector. The vector, used to identify
the point cloud, is inserted into a database that is maintained by DBoW for
fast place recognition queries. The returned candidate is further validated by
matching visual feature descriptors. To reject matching outliers, we apply PnP
with RANSAC, which minimizes the reprojection error between the visual
features' 3D positions in Euclidean space and their correspondences in 2D
image space.
Combining the advantages from both camera and lidar-based place recognition
approaches, our method is truly rotation-invariant and can tackle reverse
revisiting and upside-down revisiting. The proposed method is evaluated on
datasets gathered from a variety of platforms over different scales and
environments. Our implementation is available at
https://git.io/imaging-lidar-place-recognition

Comment: ICRA 202
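The bag-of-words retrieval step described above can be sketched as follows. This is a toy illustration only: it uses float descriptors and a random vocabulary, whereas the paper extracts binary ORB descriptors and queries a DBoW database; the names `bow_vector` and `query` are hypothetical, not from the authors' implementation.

```python
import numpy as np

def bow_vector(descriptors, vocabulary):
    """Quantize each descriptor to its nearest visual word and return an
    L1-normalized bag-of-words histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def query(database, q, top_k=1):
    """Return indices of the database vectors most similar to q (cosine)."""
    db = np.stack(database)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top_k]

rng = np.random.default_rng(0)
vocab = rng.normal(size=(32, 8))                  # toy 32-word vocabulary
scans = [rng.normal(size=(100, 8)) for _ in range(5)]
db = [bow_vector(s, vocab) for s in scans]        # one BoW vector per point cloud
revisit = scans[2] + 0.05 * rng.normal(size=(100, 8))  # noisy revisit of scan 2
print(query(db, bow_vector(revisit, vocab)))      # the noisy revisit should match scan 2
```

In the real system the top candidate returned by such a query is then validated by descriptor matching and PnP-RANSAC, rather than being accepted directly.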
RGB-D Indoor mapping using deep features
RGB-D indoor mapping has been an active research topic over the last decade with the advance of depth sensors. However, despite the great success of deep learning techniques on various problems, similar approaches have not yet been widely applied to SLAM. In this work, an RGB-D SLAM system that uses a deep learning approach for mapping indoor environments is proposed. A pre-trained CNN model with multiple random recursive structures is utilized to acquire deep features efficiently, with no need for training. Deep features provide strong representations of the color frames and enable better data association. To increase computational efficiency, deep feature vectors are treated as points in a high-dimensional space and indexed in a priority search k-means tree. The search precision is improved by employing an adaptive mechanism. For motion estimation, a sparse feature-based approach is adopted, employing a robust keypoint detector and descriptor combination. The system is assessed on the TUM RGB-D benchmark using sequences recorded in medium- and large-sized environments. The experimental results demonstrate the accuracy and robustness of the proposed system over the state of the art, especially on large sequences. © 2019 IEEE
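The idea of indexing feature vectors for priority search over k-means partitions can be sketched as below. This is a simplified one-level version under stated assumptions: the paper builds a full priority search k-means *tree* (as in FLANN) with an adaptive precision mechanism, while here a single k-means partition is searched best-bin-first; all function names are illustrative, not the authors' API.

```python
import heapq
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns centroids and point-to-cluster assignments."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.linalg.norm(x[:, None] - centroids[None], axis=2).argmin(1)
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                centroids[j] = pts.mean(0)
    return centroids, assign

def priority_search(x, centroids, assign, q, max_clusters=2):
    """Visit clusters in order of centroid distance to q (priority queue),
    scan at most `max_clusters` of them, and return the best point index."""
    order = [(np.linalg.norm(c - q), j) for j, c in enumerate(centroids)]
    heapq.heapify(order)
    best, best_d = -1, np.inf
    for _ in range(min(max_clusters, len(order))):
        _, j = heapq.heappop(order)
        idx = np.flatnonzero(assign == j)
        if len(idx) == 0:
            continue
        d = np.linalg.norm(x[idx] - q, axis=1)
        i = d.argmin()
        if d[i] < best_d:
            best, best_d = idx[i], d[i]
    return best

rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 16))            # stand-in "deep feature" vectors
cents, assign = kmeans(feats, 8)
q = feats[123] + 0.01 * rng.normal(size=16)   # query near a known feature
print(priority_search(feats, cents, assign, q, max_clusters=3))  # should recover 123
```

Scanning only a few highest-priority clusters is what makes the search approximate but fast; the adaptive mechanism mentioned in the abstract would tune how many bins are examined per query.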
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available