27 research outputs found

    Deep regression for LiDAR-based localization in dense urban areas

    No full text
    LiDAR-based localization in a city-scale map is a fundamental problem in autonomous driving research. A common localization scheme performs global retrieval (which suggests potential candidates from the database) followed by geometric registration (which obtains an accurate relative pose). In this work, we develop a novel end-to-end deep multi-task network that simultaneously performs global retrieval and geometric registration for LiDAR-based localization. Both retrieval and registration are formulated and solved as regression problems, and they can be deployed independently at inference time. We also design two mechanisms to enhance our multi-task regression network's performance: residual connections for point clouds and a new loss function with learnable parameters. To alleviate the common phenomenon of vanishing gradients in neural networks, we employ residual connections to effectively support constructing a deeper network. At the same time, to address the large differences in scale and units between tasks, we propose a loss function that automatically balances the multiple tasks. Experiments on two public benchmarks validate the state-of-the-art performance of our algorithm in large-scale LiDAR-based localization.
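
    The automatic balancing of the retrieval and registration losses can be illustrated with a learnable-weight scheme in the spirit of homoscedastic uncertainty weighting. The sketch below is one plausible form under that assumption, not the paper's exact loss; the names AutoBalancedLoss and log_vars are illustrative.

        import torch
        import torch.nn as nn

        class AutoBalancedLoss(nn.Module):
            """Weights each task loss by a learnable precision term (illustrative sketch)."""

            def __init__(self, num_tasks: int):
                super().__init__()
                # One learnable log-variance per task, optimized jointly with the network.
                self.log_vars = nn.Parameter(torch.zeros(num_tasks))

            def forward(self, task_losses):
                # task_losses: scalar losses, e.g. [retrieval_loss, registration_loss].
                total = torch.zeros((), device=self.log_vars.device)
                for i, loss in enumerate(task_losses):
                    precision = torch.exp(-self.log_vars[i])  # 1 / sigma_i**2
                    total = total + precision * loss + self.log_vars[i]
                return total

    Because log_vars is trained together with the network, task losses with very different scales and units can be combined without hand-tuned weights.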

    Filtering airborne LiDAR data based on multi-view window and multi-resolution hierarchical cloth simulation

    No full text
    Ground filtering is a fundamental step in airborne LiDAR data processing for a variety of applications. However, existing algorithms still struggle in complex environments, e.g. steep hillsides, ridges, valleys, discontinuities, and numerous objects. We present a new ground filtering algorithm that can handle various landscapes. First, a multi-view window is developed to increase the number of ground seeds on varied terrain. Second, multi-resolution hierarchical cloth simulation is used to rapidly construct a high-resolution reference terrain, and a bidirectional internal force operation is proposed to improve the accuracy of the reference terrain by smoothing spikes in the cloth. Finally, ground and non-ground points are classified based on the height differences between the points and the reference terrain. The proposed algorithm was validated not only on the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark but also on karst datasets, which contain particularly complex environments. Results showed that the proposed algorithm outperformed existing algorithms, with the lowest average total error of 3.85% and the highest average kappa coefficient of 87.75%. Moreover, the proposed algorithm can fully preserve complex terrain features, e.g. extremely steep hillsides and sharp ridges. This study has great potential to provide a useful tool for LiDAR data processing.
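
    As a rough illustration of the final classification step, the sketch below labels points by their height difference to a given reference terrain. The cloth-simulation surface itself is taken as an input here, and the 0.5 m threshold is an assumed, illustrative value rather than the paper's setting.

        import numpy as np
        from scipy.interpolate import griddata

        def classify_by_height_difference(points, terrain_xy, terrain_z, threshold=0.5):
            """points: (N, 3) LiDAR points; terrain_xy (M, 2) and terrain_z (M,) describe
            the reference terrain, e.g. the settled cloth nodes. Returns a ground mask."""
            # Interpolate the reference terrain height underneath every LiDAR point.
            ref_z = griddata(terrain_xy, terrain_z, points[:, :2], method="linear")
            # Points within the threshold of the reference surface are labelled ground;
            # points outside the interpolated area (NaN) fall through as non-ground.
            return np.abs(points[:, 2] - ref_z) <= threshold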

    Review: Deep Learning on 3D Point Clouds

    No full text
    A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining popularity thanks to the growing availability of acquisition devices and their expanding use in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud is unstructured, which makes its direct processing with deep learning very challenging. This paper reviews the recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. The initial work on deep learning directly with raw point cloud data did not model local regions; subsequent approaches therefore model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlations between points within them. From the survey, we conclude that approaches that model local regions and take into account the correlations between points in those regions perform better. In contrast to existing reviews, this paper provides a general structure for learning with raw point clouds and compares various methods within that structure. This work also introduces the popular 3D point cloud benchmark datasets and discusses the application of deep learning to popular 3D vision tasks, including classification, segmentation, and detection.
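
    The "sampling and grouping" step mentioned above is commonly realized as farthest point sampling followed by a ball query, as in PointNet++-style methods. The numpy sketch below assumes that formulation; the function names, radius, and neighbour counts are illustrative.

        import numpy as np

        def farthest_point_sampling(points, n_samples):
            """Greedily pick n_samples indices of points that are mutually far apart."""
            n = points.shape[0]
            selected = [0]                              # start from an arbitrary point
            dist = np.full(n, np.inf)
            for _ in range(n_samples - 1):
                # Distance of every point to its closest already-selected point.
                dist = np.minimum(dist, np.linalg.norm(points - points[selected[-1]], axis=1))
                selected.append(int(np.argmax(dist)))   # farthest from the current set
            return np.array(selected)

        def ball_query(points, centers, radius, k):
            """For each center, gather up to k neighbour indices within `radius`."""
            groups = []
            for c in centers:
                idx = np.where(np.linalg.norm(points - c, axis=1) <= radius)[0][:k]
                groups.append(idx)
            return groups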

    Improving the estimation of canopy cover from UAV-LiDAR data using a pit-free CHM-based method

    No full text
    Accurate and rapid estimation of canopy cover (CC) is crucial for many ecological and environmental models and for forest management. Unmanned aerial vehicle light detection and ranging (UAV-LiDAR) systems are a promising tool for CC estimation due to their high mobility, low cost, and high point density. However, the CC values derived from UAV-LiDAR point clouds may be underestimated due to the presence of large quantities of within-crown gaps. To alleviate the negative effects of within-crown gaps, we propose a pit-free CHM-based method for estimating CC, in which a cloth simulation method is used to fill the within-crown gaps. To evaluate the effect of CC values and within-crown gap proportions on the proposed method, its performance was tested on 18 samples with different CC values (40−70%) and 6 samples with different within-crown gap proportions (10−60%). The results showed that the CC accuracy of the proposed method was higher than that of the method without filling within-crown gaps (R² = 0.99 vs 0.98; RMSE = 1.49% vs 2.2%). The proposed method was insensitive to within-crown gap proportions, although the CC accuracy decreased slightly as the within-crown gap proportion increased.
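
    Once a (pit-free) CHM grid is available, canopy cover can be read off as the fraction of cells whose height exceeds a canopy cutoff. The sketch below assumes a NaN-masked height grid and an illustrative 2 m cutoff; it is not the paper's exact implementation.

        import numpy as np

        def canopy_cover_from_chm(chm, height_cutoff=2.0):
            """chm: 2-D array of canopy heights in metres; NaN marks cells with no data."""
            valid = ~np.isnan(chm)
            canopy_cells = np.count_nonzero(chm[valid] > height_cutoff)  # cells covered by canopy
            return 100.0 * canopy_cells / np.count_nonzero(valid)        # canopy cover in percent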

    Forming pressure effect on microstructure and mechanical properties of nanocrystalline aluminum synthesized by inert gas condensation

    No full text
    The preparation of nanocrystalline aluminum (NC Al) was conducted in two steps. After the NC Al powder was synthesized by an inert gas condensation (IGC) method in a helium atmosphere of 500 Pa, the powder was compacted in situ into a pellet 10 mm in diameter and 250–300 μm thick under high vacuum (10⁻⁶–10⁻⁷ Pa) at room temperature. The NC Al samples were not exposed to air during the entire process. After the pressure reached 6 GPa, the relative density could reach 99.83%. The results showed that the grain size decreased with increasing in-situ forming pressure. The NC Al samples presented obvious ductile fracture, and their tensile properties changed greatly with increasing forming pressure.

    A General and Effective Method for Wall and Protrusion Separation from Facade Point Clouds

    No full text
    As a critical prerequisite for semantic facade reconstruction, wall and protrusion points must be accurately separated from facade point clouds. The performance of traditional separation methods is severely limited by facade conditions, including wall shapes (e.g., nonplanar walls), wall compositions (e.g., walls composed of multiple noncoplanar point clusters), and protrusion structures (e.g., protrusions without regular, repetitive, or self-symmetric features). This study proposes a more widely applicable wall and protrusion separation method. The major principle underlying the proposed method is to transform the wall and protrusion separation problem into a ground filtering problem and to separate walls and protrusions using ground filtering methods, since the two problems can be solved with the same prior knowledge, namely that protrusions (nonground objects) protrude from walls (ground). After this transformation, a cloth simulation filter was used as an example to separate walls and protrusions in 8 facade point clouds with various characteristics. The proposed method was robust to the facade conditions, with a mean intersection over union of 90.7%, and had substantially higher accuracy than the traditional separation methods, including region growing-, random sample consensus-, multipass random sample consensus-based, and hybrid methods, whose mean intersection over union values were 69.53%, 49.52%, 63.93%, and 47.07%, respectively. Moreover, the proposed method is general, since existing ground filtering methods (including the maximum slope, progressive morphology, and progressive triangular irregular network densification filters) also perform well.
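
    The problem transformation described above can be sketched as fitting the dominant wall plane and rotating the cloud so that the wall normal becomes the vertical axis, after which any off-the-shelf ground filter (the study uses a cloth simulation filter as an example) can label wall ("ground") versus protrusion ("non-ground") points. This PCA-based sketch assumes a roughly planar wall and is illustrative rather than the paper's implementation.

        import numpy as np

        def facade_to_ground_frame(points):
            """Rotate facade points (N, 3) so the dominant wall plane becomes horizontal."""
            centered = points - points.mean(axis=0)
            # The right-singular vector with the smallest singular value approximates
            # the wall normal; the other two span the wall plane.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            in_plane, normal = vt[:2], vt[2]
            rotation = np.vstack([in_plane, normal])   # rows form the new orthonormal basis
            # In the rotated frame the third coordinate is the offset from the wall plane,
            # so protrusions "stick up" from a horizontal ground-like surface.
            return centered @ rotation.T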