
    COMPARISON OF ALGORITHMS FOR CONSTRUCTION DETECTION USING AIRBORNE LASER SCANNING AND NDSM CLASSIFICATION

    The traditional approach to classifying airborne laser scanning point clouds is based on processing a normalized digital surface model (nDSM), in which ground facilities are detected and classified. The main feature used to detect a ground facility is the height difference between adjacent points. The simplest method to extract a ground facility is the region-growing algorithm, which applies a threshold to identify the connection between two points. Because the region-growing algorithm works with a constant height-difference value, it is not applicable under the diverse conditions of the earth's surface, where the height difference must be defined for each region separately. As a result, researchers have proposed hierarchical, statistical and cluster methods to solve this problem. The goal of this study is to compare four algorithms for generating the nDSM: region growing, the progressive morphological filter, adaptive TIN surfaces and graph-cut. The experiment is divided into two stages: 1) counting the number of detected and lost buildings in the nDSM; 2) measuring the classification accuracy of the extracted shapes. The results showed that the progressive morphological filter and graph-cut provide the minimal loss of buildings (only 1%). The most effective algorithm for ground facility detection is graph-cut (overall accuracy 0.95, Cohen’s Kappa 0.89, F1 score 0.93).
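    The threshold-based region growing described above can be sketched on a rasterized nDSM: starting from a seed cell, a neighbour joins the region when its height differs by at most the threshold. This is a minimal illustrative sketch, not the implementation compared in the study; the function name, toy grid and threshold value are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(ndsm, seed, height_thresh):
    """Grow a region from `seed` on an nDSM grid: a 4-neighbour cell is
    added when its height differs from the current cell by at most
    `height_thresh` (the constant threshold the abstract criticizes)."""
    rows, cols = ndsm.shape
    region = np.zeros_like(ndsm, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if abs(ndsm[nr, nc] - ndsm[r, c]) <= height_thresh:
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region

# toy nDSM: a 5 m building block on flat ground
ndsm = np.zeros((6, 6))
ndsm[1:4, 1:4] = 5.0
building = region_grow(ndsm, seed=(2, 2), height_thresh=0.5)
```

    With a 0.5 m threshold the 3x3 roof is extracted and the surrounding ground (5 m lower) is excluded; a single global threshold like this is exactly what fails on varied terrain.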

    Weakly-Supervised Semantic Segmentation of Airborne LiDAR Point Clouds in Hong Kong Urban Areas

    Semantic segmentation of airborne LiDAR point clouds of urban areas is an essential step before applying LiDAR data to further applications such as 3D city modelling. Large-scale point cloud semantic segmentation is challenging in practice due to the massive data size and the time-consuming point-wise annotation. This paper applies the weakly-supervised Semantic Query Network and a sparse point annotation pipeline to practical airborne LiDAR datasets for urban-scene semantic segmentation in Hong Kong. The experiments achieved an overall accuracy above 84% and a mean intersection over union above 75%. The capacity of the intensity and return attributes of LiDAR data to separate vegetation from construction is explored and discussed. This work demonstrates an efficient workflow for large-scale airborne LiDAR point cloud semantic segmentation in practice.
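    The core of sparse-annotation training is that the loss is computed only over the few labelled points, so the vast unlabelled majority contributes no direct supervision. A simplified NumPy sketch of such a masked cross-entropy (not the actual Semantic Query Network implementation; all names are illustrative):

```python
import numpy as np

def sparse_ce_loss(logits, labels, labeled_mask):
    """Cross-entropy averaged only over the annotated points;
    unlabelled points are masked out of the loss entirely."""
    # numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    picked = probs[np.arange(len(labels)), labels]
    losses = -np.log(picked)
    return losses[labeled_mask].mean()

# 4 points, 3 classes, only the first two points annotated
logits = np.array([[5.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0],
                   [1.0, 1.0, 1.0],
                   [0.0, 0.0, 5.0]])
labels = np.array([0, 1, 0, 2])
mask = np.array([True, True, False, False])
loss = sparse_ce_loss(logits, labels, mask)
```

    Because the two labelled points are confidently correct, the masked loss is small regardless of what the network predicts for the unlabelled points.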

    Density-Aware Convolutional Networks with Context Encoding for Airborne LiDAR Point Cloud Classification

    To better address the irregularity and inhomogeneity inherently present in 3D point clouds, researchers have been shifting their focus from the design of hand-crafted point features towards learning 3D point signatures with deep neural networks for 3D point cloud classification. Recently proposed deep-learning-based point cloud classification methods either apply a 2D CNN to projected feature images or apply 1D convolutional layers directly to raw point sets. These methods cannot adequately recognize fine-grained local structures caused by the uneven density distribution of point cloud data. In this paper, to address this challenging issue, we introduce a density-aware convolution module which uses the point-wise density to re-weight the learnable weights of convolution kernels. The proposed convolution module is able to fully approximate the 3D continuous convolution on unevenly distributed 3D point sets. Based on this convolution module, we further develop a multi-scale fully convolutional neural network with downsampling and upsampling blocks to enable hierarchical point feature learning. In addition, to regularize the global semantic context, we implement a context encoding module to predict a global context encoding and formulate a context encoding regularizer that enforces alignment between the predicted context encoding and the ground truth. The overall network can be trained end-to-end with the raw 3D coordinates as well as the height above ground as inputs. Experiments on the International Society for Photogrammetry and Remote Sensing (ISPRS) 3D labelling benchmark demonstrate the superiority of the proposed method for point cloud classification. Our model achieved a new state-of-the-art performance with an average F1 score of 71.2% and improved the performance by a large margin on several categories.
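    The density-aware idea can be pictured as down-weighting contributions from densely sampled regions so that uneven sampling better approximates a continuous convolution. The NumPy sketch below scales each neighbour's contribution by its inverse local density; the paper re-weights learnable kernel weights instead, so this only illustrates the principle, and all names and the toy radius are assumptions.

```python
import numpy as np

def local_density(points, radius):
    """Point-wise density: number of neighbours within `radius` (incl. self)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d <= radius).sum(axis=1)

def density_aware_aggregate(points, features, radius):
    """Aggregate neighbour features, scaling each neighbour by its inverse
    density so dense clusters do not dominate the local average."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neigh = d <= radius
    inv_density = 1.0 / local_density(points, radius)
    w = neigh * inv_density[None, :]          # per-neighbour weights
    return w @ features / w.sum(axis=1, keepdims=True)

# three tightly clustered points plus one isolated point on a line
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.2, 0.0, 0.0],
                [2.0, 0.0, 0.0]])
feats = pts[:, :1]                            # use x-coordinate as a feature
out = density_aware_aggregate(pts, feats, radius=1.0)
```

    The isolated point keeps its own feature (no neighbours in range), while each clustered point averages its dense neighbourhood with equal inverse-density weights.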

    EXPLORING ALS AND DIM DATA FOR SEMANTIC SEGMENTATION USING CNNS

    Over the past years, algorithms for dense image matching (DIM) to obtain point clouds from aerial images have improved significantly. Consequently, DIM point clouds are now a good alternative to the established Airborne Laser Scanning (ALS) point clouds for remote sensing applications. In order to derive high-level products such as digital terrain models or city models, each point within a point cloud must be assigned a class label. Usually, ALS and DIM point clouds are labelled with different classifiers due to their varying characteristics. In this work, we explore both point cloud types in a fully convolutional encoder-decoder network, which learns to classify ALS as well as DIM point clouds. As input, we project the point clouds onto a 2D image raster plane and calculate the minimal, average and maximal height values for each raster cell. The network then differentiates between the classes ground, non-ground, building and no data. We test our network in six training setups: using only one point cloud type, using both point clouds, and several transfer-learning approaches. We quantitatively and qualitatively compare all results and discuss the advantages and disadvantages of each setup. The best network achieves an overall accuracy of 96% on an ALS and 83% on a DIM test set.
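    The rasterization step described here, projecting points onto a 2D grid and recording minimal, average and maximal heights per cell, can be sketched directly. A minimal NumPy version, assuming a simple floor-division cell assignment and NaN for empty "no data" cells (the function name and toy inputs are illustrative):

```python
import numpy as np

def rasterize_heights(points, cell_size, grid_shape):
    """Project (x, y, z) points onto a 2D raster and record the minimal,
    average and maximal height per cell; empty cells become NaN."""
    rows, cols = grid_shape
    min_img = np.full(grid_shape, np.inf)
    max_img = np.full(grid_shape, -np.inf)
    sum_img = np.zeros(grid_shape)
    cnt_img = np.zeros(grid_shape)
    for x, y, z in points:
        r, c = int(y // cell_size), int(x // cell_size)
        if 0 <= r < rows and 0 <= c < cols:
            min_img[r, c] = min(min_img[r, c], z)
            max_img[r, c] = max(max_img[r, c], z)
            sum_img[r, c] += z
            cnt_img[r, c] += 1
    empty = cnt_img == 0
    avg_img = np.where(empty, np.nan, sum_img / np.maximum(cnt_img, 1))
    min_img[empty] = np.nan
    max_img[empty] = np.nan
    return min_img, avg_img, max_img

# two points fall in cell (0, 0), one in cell (0, 1), row 1 stays empty
pts = np.array([[0.2, 0.3, 1.0], [0.8, 0.1, 3.0], [1.5, 0.5, 2.0]])
mn, av, mx = rasterize_heights(pts, cell_size=1.0, grid_shape=(2, 2))
```

    The three resulting images can then be stacked as channels of the network input, with the NaN cells mapping to the "no data" class.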

    Joint classification of ALS and DIM point clouds

    National mapping agencies (NMAs) have to acquire nation-wide Digital Terrain Models on a regular basis as part of their obligations to provide up-to-date data. Point clouds from Airborne Laser Scanning (ALS) are an important data source for this task; recently, NMAs have also started deriving Dense Image Matching (DIM) point clouds from aerial images. As a result, NMAs have both point cloud data sources available, which they can exploit for their purposes. In this study, we investigate the potential of transfer learning from ALS to DIM data, so that the time-consuming step of data labelling can be reduced. Due to their specific measurement techniques, the two point cloud types have various distinct properties such as RGB or intensity values, which are often exploited for the classification of either ALS or DIM point clouds. However, those features also hinder transfer learning between the two point cloud types, since they do not exist in the other type. As the bare 3D point is available in both point cloud types, we focus on transfer learning from an ALS to a DIM point cloud using exclusively the point coordinates. We tackle the issue of different point densities by rasterizing the point cloud into a 2D grid and taking important height features as input for classification. We train an encoder-decoder convolutional neural network with labelled ALS data as a baseline and then fine-tune this baseline with an increasing amount of labelled DIM data. We also train the same network exclusively on all available DIM data as a reference to compare our results. We show that only 10% of labelled DIM data increases the classification results notably, which is especially relevant for practical applications.
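    The baseline-then-fine-tune workflow can be sketched with a toy stand-in: train a model on a "source" dataset, then continue training its weights on a small labelled fraction of a shifted "target" dataset. Everything below is illustrative (a logistic regression in place of the encoder-decoder network, synthetic data in place of ALS/DIM), not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, lr=0.1, steps=200):
    """Minimal logistic-regression 'network' trained by gradient descent;
    passing an existing `w` continues training, i.e. fine-tuning."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# source domain (stand-in for ALS): label depends on the first feature
X_src = rng.normal(size=(200, 2))
y_src = (X_src[:, 0] > 0).astype(float)
# target domain (stand-in for DIM): same rule, shifted distribution
X_tgt = rng.normal(size=(200, 2)) + np.array([0.5, 0.0])
y_tgt = (X_tgt[:, 0] > 0).astype(float)

w_base = train_logreg(X_src, y_src)                   # baseline on source
idx = rng.choice(len(X_tgt), size=20, replace=False)  # 10% labelled target
w_ft = train_logreg(X_tgt[idx], y_tgt[idx], w=w_base.copy(), lr=0.05, steps=100)

acc = ((1 / (1 + np.exp(-X_tgt @ w_ft)) > 0.5) == y_tgt).mean()
```

    The point mirrors the study's finding: starting from source-trained weights, a small labelled fraction of the target data is enough to adapt the model, instead of labelling the full target dataset.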