Exact Point Cloud Downsampling for Fast and Accurate Global Trajectory Optimization
This paper presents a point cloud downsampling algorithm for fast and
accurate trajectory optimization based on global registration error
minimization. The proposed algorithm selects a weighted subset of residuals of
the input point cloud such that the subset yields exactly the same quadratic
point cloud registration error function as that of the original point cloud at
the evaluation point. This method accurately approximates the original
registration error function with only a small subset of input points (29
residuals at a minimum). Experimental results using the KITTI dataset
demonstrate that the proposed algorithm significantly reduces processing time
(by 87%) and memory consumption (by 99%) for global registration error
minimization while retaining accuracy.
Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
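As a rough illustration of the underlying idea (not the paper's algorithm), the quadratic registration error at the evaluation point is fully characterized by its Hessian, gradient, and constant term, so any weighted subset of residuals that reproduces these aggregate quantities reproduces the whole quadratic there. The sketch below assumes simple point-to-point residuals and uses nonnegative least squares merely to obtain a sparse weight vector; all function names are hypothetical.

```python
# Minimal sketch, not the paper's algorithm: match the aggregate Hessian,
# gradient, and constant of the quadratic registration error with a sparse
# set of nonnegative residual weights. Point-to-point residuals are assumed.
import numpy as np
from scipy.optimize import nnls

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def residual_features(src, tgt):
    """Stack each residual's contribution to (H, b, c) at the evaluation
    point into one column: upper triangle of J^T J, then J^T e, then e^T e."""
    iu = np.triu_indices(6)
    cols = []
    for p, q in zip(src, tgt):
        e = p - q                              # residual at the evaluation point
        J = np.hstack([-skew(p), np.eye(3)])   # d e / d [rotation, translation]
        H, b, c = J.T @ J, J.T @ e, e @ e
        cols.append(np.concatenate([H[iu], b, [c]]))
    return np.array(cols).T                    # shape (28, N)

def select_weighted_subset(src, tgt):
    A = residual_features(src, tgt)
    total = A.sum(axis=1)                      # exact sums over all residuals
    w, _ = nnls(A, total)                      # sparse nonnegative weights
    keep = np.flatnonzero(w > 1e-9)
    return keep, w[keep]                       # small weighted subset that
                                               # reproduces the quadratic error
```

NNLS is used here only because it tends to return sparse nonnegative solutions; the paper's exact subset-selection procedure differs.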
Single-Shot Global Localization via Graph-Theoretic Correspondence Matching
This paper describes a method of global localization based on graph-theoretic
association of instances between a query and the prior map. The proposed
framework employs correspondence matching based on the maximum clique problem
(MCP). The framework is potentially applicable to other map and/or query
modalities thanks to the graph-based abstraction of the problem, whereas many
existing global localization methods require the query and the map to be in the
same modality. We implement it with a semantically labeled 3D point cloud map
and a semantic segmentation image as the query. Leveraging the graph-theoretic
framework, the proposed method realizes global localization using only the map
and the query. The method shows promising results on multiple large-scale
simulated maps of urban scenes.
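To illustrate only the MCP formulation, the sketch below simplifies both the query and the map to labeled 3D instance centroids: nodes are label-compatible candidate correspondences, edges connect pairs of candidates that are geometrically consistent with each other, and the maximum clique is the largest mutually consistent match set. The paper itself matches a semantic segmentation image against the 3D map; the graph construction, the distance threshold, and the names here are illustrative assumptions.

```python
# Minimal sketch of maximum-clique correspondence matching. Both sides are
# simplified to labeled 3D instance centroids; threshold, graph construction,
# and names are illustrative assumptions.
import itertools
import numpy as np
import networkx as nx

def max_clique_correspondences(query_insts, map_insts, dist_tol=0.5):
    """query_insts / map_insts: lists of (label, centroid ndarray) pairs."""
    # Nodes: every label-compatible candidate correspondence (query, map).
    cand = [(i, j) for i, (ql, _) in enumerate(query_insts)
                   for j, (ml, _) in enumerate(map_insts) if ql == ml]
    G = nx.Graph()
    G.add_nodes_from(range(len(cand)))
    # Edges: candidate pairs whose intra-set centroid distances agree.
    for a, b in itertools.combinations(range(len(cand)), 2):
        (qi, mi), (qj, mj) = cand[a], cand[b]
        if qi == qj or mi == mj:
            continue                      # enforce one-to-one matching
        dq = np.linalg.norm(query_insts[qi][1] - query_insts[qj][1])
        dm = np.linalg.norm(map_insts[mi][1] - map_insts[mj][1])
        if abs(dq - dm) < dist_tol:
            G.add_edge(a, b)
    clique, _ = nx.max_weight_clique(G, weight=None)   # maximum clique
    return [cand[k] for k in clique]      # largest mutually consistent set
```

The surviving correspondences can then be fed to any standard pose solver to obtain the global localization estimate.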
3D-BBS: Global Localization for 3D Point Cloud Scan Matching Using Branch-and-Bound Algorithm
This paper presents an accurate and fast 3D global localization method,
3D-BBS, that extends the existing branch-and-bound (BnB)-based 2D scan matching
(BBS) algorithm. To reduce memory consumption, we utilize a sparse hash table
for storing hierarchical 3D voxel maps. To reduce the processing cost of BBS
in 3D space, we propose an efficient roto-translational space branching.
Furthermore, we devise a batched BnB algorithm to fully leverage GPU parallel
processing. Through experiments in simulated and real environments, we
demonstrated that 3D-BBS enables accurate global localization given only a
3D LiDAR scan roughly aligned with the gravity direction and a pre-built 3D map.
This method required only 878 msec on average to perform global localization
and outperformed state-of-the-art global registration methods in terms of
accuracy and processing speed.
Comment: IEEE International Conference on Robotics and Automation (ICRA 2024)
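As a simplified illustration of BnB scan matching on a sparse voxel map (not the 3D-BBS implementation), the sketch below searches over translation only, stores the map and a coarse-to-fine pyramid in Python hash sets, and uses precomputed dilated grids as admissible upper bounds. The roto-translational branching and GPU batching of the paper are omitted, and all names are illustrative.

```python
# Minimal translation-only branch-and-bound scan matching on a sparse voxel
# map; hash sets stand in for the sparse hash tables, and dilated coarse
# grids provide admissible upper bounds for pruning.
import heapq
import itertools

def build_pyramid(occupied, levels):
    """pyramid[l] marks every voxel v such that some occupied voxel lies at
    v + d with d in [0, 2^l)^3, so a coarse score bounds all finer candidates."""
    pyramid = [set(map(tuple, occupied))]
    for l in range(1, levels):
        half = 1 << (l - 1)
        pyramid.append({(v[0] - dx * half, v[1] - dy * half, v[2] - dz * half)
                        for v in pyramid[l - 1]
                        for dx, dy, dz in itertools.product((0, 1), repeat=3)})
    return pyramid

def node_score(scan_keys, grid, t):
    return sum((k[0] + t[0], k[1] + t[1], k[2] + t[2]) in grid for k in scan_keys)

def bnb_translation(scan_keys, pyramid, radius):
    """Best translation (in voxel units) of the voxelized scan against the map."""
    top = len(pyramid) - 1
    step = 1 << top
    heap = []
    for t in itertools.product(range(-radius, radius + 1, step), repeat=3):
        heapq.heappush(heap, (-node_score(scan_keys, pyramid[top], t), top, t))
    best_score, best_t = -1, None
    while heap:
        neg_ub, level, t = heapq.heappop(heap)
        if -neg_ub <= best_score:
            break                                  # bound: prune remaining nodes
        if level == 0:
            best_score, best_t = -neg_ub, t        # exact score at a leaf
            continue
        half = 1 << (level - 1)                    # branch into 8 children
        for dx, dy, dz in itertools.product((0, half), repeat=3):
            child = (t[0] + dx, t[1] + dy, t[2] + dz)
            heapq.heappush(heap, (-node_score(scan_keys, pyramid[level - 1], child),
                                  level - 1, child))
    return best_t, best_score
```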
Shape Recovery of 3D Data Obtained from a Moving Range Sensor by Using Image Sequences
For a large object, scanning from the air is one of the most efficient ways of obtaining 3D data. In the case of large cultural heritage objects, however, such scanning poses difficulties with respect to safety and efficiency. To remedy these problems, we have been developing a novel 3D measurement system, the Floating Laser Range Sensor (FLRS), in which a range sensor is suspended beneath a balloon. The data obtained in this way are distorted by sensor movement during scanning. In this paper, we propose a method to recover 3D range data obtained by a moving laser range sensor. This method is applicable not only to our FLRS but also to moving range sensors in general. Using image sequences from a video camera mounted on the FLRS enables us to estimate the motion of the FLRS without any physical sensors such as gyros or GPS. In the first stage, initial values of the camera motion parameters are estimated by full-perspective factorization. The next stage refines the camera motion parameters using the relationships between the camera images and the range data distortion. Finally, the refined parameters are used to recover the distorted range data. In addition, our method is applicable to an uncalibrated video camera and range sensor system. We applied this method to an actual scanning project, and the results demonstrated its effectiveness.
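As a minimal sketch of the final recovery step only, each range measurement can be re-expressed in a common frame using the sensor pose interpolated at its acquisition time. SLERP-based interpolation and the function names below are assumptions; the factorization-based estimation and refinement stages of the paper are not reproduced.

```python
# Minimal sketch of range-data recovery: transform each measured point by the
# interpolated sensor pose at its timestamp. Interpolation scheme and names
# are assumptions, not the paper's implementation.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def undistort_scan(points, timestamps, pose_times, quats, translations):
    """points: (N, 3) points in the sensor frame, one timestamp per point.
    pose_times, quats (x, y, z, w), translations: estimated sensor trajectory."""
    slerp = Slerp(pose_times, Rotation.from_quat(quats))
    R = slerp(timestamps)                                # per-point rotation
    t = np.stack([np.interp(timestamps, pose_times, translations[:, k])
                  for k in range(3)], axis=1)            # per-point translation
    return R.apply(points) + t                           # points in the world frame
```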
Motion Estimation of a Moving Range Sensor by Image Sequences and Distorted Range Data
For a large-scale object, scanning from the air is one of the most efficient ways of obtaining 3D data. In the case of large cultural heritage objects, however, such scanning poses difficulties with respect to safety and efficiency. To remedy these problems, we have been developing a novel 3D measurement system, the Floating Laser Range Sensor (FLRS), in which a range sensor is suspended beneath a balloon. The data obtained in this way are distorted by sensor movement during scanning. In this paper, we propose a method to recover 3D range data obtained by a moving laser range sensor; this method is applicable not only to our FLRS but also to moving range sensors in general. Using image sequences from a video camera mounted on the FLRS enables us to estimate the motion of the FLRS without any physical sensors such as gyros or GPS. In the first stage, initial values of the camera motion parameters are estimated by perspective factorization. The next stage refines the camera motion parameters using the relationships between the camera images and the range data distortion. Finally, the refined parameters are used to recover the distorted range data. We applied this method to an actual scanning project, and the results demonstrated its effectiveness.
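To illustrate the factorization-based initialization in its simplest setting, the sketch below applies the classic Tomasi-Kanade rank-3 factorization under an affine camera to tracked image features; the perspective refinement and metric upgrade used in the paper are omitted, and the names are illustrative.

```python
# Minimal sketch of rank-3 factorization of tracked features (affine camera
# simplification of the paper's perspective factorization initialization).
import numpy as np

def factorize_tracks(tracks):
    """tracks: (F, P, 2) image coordinates of P features over F frames.
    Returns affine camera rows M (2F, 3) and 3D shape S (3, P),
    both defined only up to a common 3x3 linear ambiguity."""
    F, P, _ = tracks.shape
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)  # (2F, P)
    W = W - W.mean(axis=1, keepdims=True)        # register rows to centroids
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                # per-frame camera motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]         # recovered 3D shape
    return M, S
```

In practice, the resulting motion estimate serves only as an initial value; a metric upgrade and a refinement against the range-data distortion, as described in the abstract, would follow.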