
    Point Cloud Registration for LiDAR and Photogrammetric Data: a Critical Synthesis and Performance Analysis on Classic and Deep Learning Algorithms

    Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformations between unregistered point clouds of complex objects and scenes. However, these methods are mostly evaluated on a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of state-of-the-art (SOTA) point cloud registration methods, analyzing and evaluating them on a diverse set of point cloud data ranging from indoor to satellite sources. The quantitative analysis allows us to explore the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analyses that treat point cloud registration as a holistic process, our experimental analysis follows its inherent two-step structure, feature/keypoint-based initial coarse registration followed by dense fine registration through cloud-to-cloud (C2C) optimization, to better comprehend these approaches. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods, were tested. We observed that the success rate of most of the algorithms is below 40% on the datasets we tested, and that there is still a large margin for improvement over existing algorithms concerning 3D sparse correspondence search and the ability to register point clouds with complex geometry and occlusions. Based on the evaluated statistics on three datasets, we identify the best-performing methods for each step, provide our recommendations, and outline future efforts.
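
    The two-step pipeline analyzed in this review, a feature-based coarse alignment followed by dense C2C refinement, can be illustrated with a minimal sketch using Open3D. This is not the authors' evaluation code; the FPFH/RANSAC and ICP choices and all parameter values are illustrative assumptions.

```python
# Minimal coarse-to-fine registration sketch (Open3D >= 0.12 assumed; parameters are illustrative).
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample, estimate normals, and compute FPFH descriptors for coarse matching.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

def register(source, target, voxel=0.05):
    src_down, src_fpfh = preprocess(source, voxel)
    tgt_down, tgt_fpfh = preprocess(target, voxel)

    # Step 1: feature/keypoint-based coarse registration (RANSAC over FPFH correspondences).
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Step 2: dense cloud-to-cloud (C2C) fine registration, here point-to-plane ICP.
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, 0.4 * voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```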

    Super edge 4-points congruent sets-based point cloud global registration

    With the acceleration of three-dimensional (3D) high-frame-rate sensing technologies, dense point clouds collected from multiple standpoints pose a great challenge for the accuracy and efficiency of registration. The combination of coarse registration and fine registration has been extensively promoted. Unlike fine registration, which requires only small movements between scan pairs, coarse registration can match scans with arbitrary initial poses. The state-of-the-art coarse method, the Super 4-Points Congruent Sets algorithm built on the 4-Points Congruent Sets, improves the speed of registration to linear order via smart indexing. However, the lack of reduction in the scale of the original point clouds limits its application. Besides, the coplanarity of the registration bases prevents further reduction of the search space. This paper proposes a novel registration method, Super Edge 4-Points Congruent Sets, to address these problems. The proposed algorithm follows a three-step procedure: boundary segmentation, overlapping-region extraction, and base selection. First, an improved vector-angle-based method is used to segment the original point clouds in order to thin out their scale. Then, overlapping-region extraction is performed to identify the overlapping regions on the contour. Finally, the proposed method selects registration bases that satisfy the distance constraints from the candidate set, without requiring coplanarity. Experiments on various datasets with different characteristics demonstrate that the average running time of the proposed algorithm is reduced by 89.76% and its accuracy is improved by 5 mm on average compared with the Super 4-Points Congruent Sets algorithm. More encouragingly, the experimental results show that the proposed algorithm can handle various restrictive cases, such as few overlapping regions and massive noise. Therefore, the algorithm proposed in this paper is a faster and more robust method than Super 4-Points Congruent Sets while maintaining the promised quality.
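
    As a rough illustration of the boundary-segmentation step, the sketch below flags contour points using an angle criterion: a point is treated as lying on the boundary when the directions to its nearest neighbours, projected onto the local tangent plane, leave a large angular gap. This is an illustrative assumption about the "vector angle" idea, not the paper's implementation; the function name and thresholds are hypothetical.

```python
# Hypothetical angle-criterion boundary detector (illustrative sketch, not the paper's method).
import numpy as np
from scipy.spatial import cKDTree

def boundary_points_by_angle(points, k=20, gap_threshold=np.pi / 2):
    tree = cKDTree(points)
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)
        nbrs = points[idx[1:]] - p                      # vectors to the k nearest neighbours
        # Local tangent plane from PCA: the normal is the eigenvector of the smallest eigenvalue.
        w, v = np.linalg.eigh(nbrs.T @ nbrs)
        normal = v[:, 0]
        # Project the neighbour directions onto the tangent plane and sort them by polar angle.
        u = nbrs - np.outer(nbrs @ normal, normal)
        e1 = u[0] / (np.linalg.norm(u[0]) + 1e-12)
        e2 = np.cross(normal, e1)
        ang = np.sort(np.arctan2(u @ e2, u @ e1))
        gaps = np.diff(np.concatenate([ang, ang[:1] + 2 * np.pi]))
        # A large empty angular sector indicates a boundary/contour point.
        flags[i] = gaps.max() > gap_threshold
    return flags
```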

    V4PCS: Volumetric 4PCS Algorithm for Global Registration

    With the advances in three-dimensional (3D) scanning and sensing technologies, massive human-related data are now available, creating many applications in data-driven design. Similarity identification is one of the basic problems in data-driven design and can facilitate many engineering applications and product paradigms such as quality control and mass customization. Therefore, reusing information can create unprecedented opportunities in advancing the theory, method, and practice of product design. To enable information reuse, different models have to be aligned so that their similarity can be identified. This alignment is commonly known as global registration, which finds an optimal rigid transformation to align two 3D shapes (scene and model) without any assumptions on their initial positions. The Super 4-Points Congruent Sets (S4PCS) is a popular algorithm used for this shape registration. While S4PCS performs the registration using a set of 4 coplanar points, we find that incorporating the volumetric information of the models can improve the robustness and efficiency of the algorithm, which are particularly important for mass customization. In this paper, we propose a novel algorithm, Volumetric 4PCS (V4PCS), which extends the 4 coplanar points to non-coplanar ones for global registration, and we theoretically demonstrate that the computational complexity is significantly reduced. Experimental tests are conducted on a number of models, such as tooth aligners and hearing aids, to compare with S4PCS. The results show that the proposed V4PCS achieves up to a 20 times speedup and can successfully compute the valid transformation with a very limited number of sample points. An application of the proposed method in mass customization is also investigated.
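
    The move from coplanar to non-coplanar (volumetric) bases can be illustrated with a small sketch: the volume of the tetrahedron spanned by a 4-point base is invariant under rigid transformation, so only bases with sufficient volume serve as non-coplanar candidates. This is an illustrative sketch, not the V4PCS implementation; the function names and the min_volume threshold are hypothetical.

```python
# Illustrative non-coplanar 4-point base sampling (not the V4PCS implementation).
import numpy as np

def tetra_volume(base):
    # Volume of the tetrahedron spanned by a (4, 3) array of base points;
    # (near-)zero volume means the four points are (near-)coplanar.
    return abs(np.linalg.det(base[1:] - base[0])) / 6.0

def sample_noncoplanar_base(points, min_volume, max_tries=1000, seed=0):
    # Randomly sample 4-point bases until one is sufficiently non-coplanar.
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        idx = rng.choice(len(points), size=4, replace=False)
        base = points[idx]
        if tetra_volume(base) >= min_volume:
            return idx, base
    return None, None  # no suitable base found within the sampling budget
```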

    Consistent Two-Flow Network for Tele-Registration of Point Clouds

    Rigid registration of partial observations is a fundamental problem in various applied fields. In computer graphics, special attention has been given to the registration between two partial point clouds generated by scanning devices. State-of-the-art registration techniques still struggle when the overlap region between the two point clouds is small, and completely fail if there is no overlap between the scan pairs. In this paper, we present a learning-based technique that alleviates this problem and allows registration between point clouds presented in arbitrary poses and having little or even no overlap, a setting that has been referred to as tele-registration. Our technique is based on a novel neural network design that learns a prior over a class of shapes and can complete a partial shape. The key idea is to combine the registration and completion tasks in a way that reinforces each other. In particular, we simultaneously train the registration network and the completion network using two coupled flows, one that registers and then completes, and one that completes and then registers, and we encourage the two flows to produce a consistent result. We show that, compared with each separate flow, this two-flow training leads to robust and reliable tele-registration, and hence to a better point cloud prediction that completes the registered scans. It is also worth mentioning that each of the components in our neural network outperforms state-of-the-art methods in both completion and registration. We further analyze our network with several ablation studies and demonstrate its performance on a large number of partial point clouds, both synthetic and real-world, that have only small or no overlap. Comment: Accepted to TVCG 2021, project page at https://vcc.tech/research/2021/CTFNe
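
    The coupled register-and-complete / complete-and-register idea can be sketched as a single training step. The sketch below assumes hypothetical placeholder modules reg_net (predicts a 4x4 rigid transform between two partial scans) and comp_net (completes a partial scan); the loss terms are simplified stand-ins for the paper's objectives, not the authors' architecture or losses.

```python
# Illustrative two-flow consistency training step (PyTorch; reg_net and comp_net are hypothetical).
import torch

def apply_transform(points, T):
    # Apply a batch of (B, 4, 4) rigid transforms T to (B, N, 3) points.
    R, t = T[:, :3, :3], T[:, :3, 3]
    return points @ R.transpose(1, 2) + t.unsqueeze(1)

def chamfer(a, b):
    # Symmetric Chamfer distance between two batched point sets (naive O(N*M) version).
    d = torch.cdist(a, b)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def two_flow_step(reg_net, comp_net, partial_a, partial_b, optimizer):
    # Flow 1: register first, then complete the union of the aligned scans.
    T1 = reg_net(partial_a, partial_b)                                   # (B, 4, 4)
    completed_1 = comp_net(torch.cat([apply_transform(partial_a, T1), partial_b], dim=1))

    # Flow 2: complete each partial scan first, then register the completions.
    comp_a, comp_b = comp_net(partial_a), comp_net(partial_b)
    T2 = reg_net(comp_a, comp_b)
    completed_2 = torch.cat([apply_transform(comp_a, T2), comp_b], dim=1)

    # Consistency: the two flows should agree on both the transform and the completed shape.
    loss = chamfer(completed_1, completed_2) + (T1 - T2).abs().mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```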