13 research outputs found

    SINGLE TREE DELINEATION USING AIRBORNE LIDAR DATA

    Get PDF
    In this paper, single tree extraction was carried out using first and last pulse airborne LIght Detection And Ranging (LIDAR) data. The LIDAR data were collected by TopoSys in May 2007 in the Milicz forest district, Poland, with a density of 7 points/m². The study area contains 25 circular plots of different radii, chosen according to the age of the trees. The absolute height of each point was obtained by normalizing the raw LIDAR points with a digital terrain model (DTM) of the area. The smoothing parameter σ was found to be higher for deciduous-dominated plots than for coniferous plots. A modified k-means clustering algorithm was applied to the normalized LIDAR point clouds to extract clusters of single trees above 4 m height in each plot. 3-D convex polytopes were then reconstructed from the extracted clusters of each tree using the QHull algorithm. Validation shows that on average nearly 86% of the mature deciduous and 93% of the mature coniferous trees were extracted by the presented approach. Nearly equal average accuracies (58%) were obtained for young deciduous and coniferous tree species; the algorithm did not work well with relatively young trees even after varying the parameters in the pre-processing steps. The study showed that adjusting certain parameters before the main process, such as the threshold distance, the smoothing factor and the height scaling factor, has a substantial impact on how many trees the modified k-means procedure extracts and how well their shapes are recovered. Future work includes improving and testing the algorithm with different LIDAR point densities in different forest conditions.
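As a rough illustration of the clustering step, the sketch below runs a simplified k-means variant on a synthetic two-tree point cloud. The height threshold of 4 m comes from the abstract; the height scaling factor, the treetop-based seeding and all function names are our own assumptions, not the paper's exact procedure. SciPy's `ConvexHull` (which wraps the Qhull library) then reconstructs a 3-D convex polytope for one extracted cluster.

```python
import numpy as np
from scipy.spatial import ConvexHull  # SciPy's ConvexHull wraps the Qhull library


def farthest_point_seeds(xy, k, start):
    """Pick k well-separated seed indices in the xy plane, starting at `start`."""
    idx = [start]
    d = np.linalg.norm(xy - xy[start], axis=1)
    for _ in range(k - 1):
        nxt = int(d.argmax())
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(xy - xy[nxt], axis=1))
    return idx


def kmeans_tree_clusters(points, k, height_scale=0.5, iters=25):
    """Toy variant of a modified k-means for tree extraction (illustrative only):
    z is down-weighted by a scaling factor, and seeding starts at the highest
    point (a likely treetop) plus well-separated neighbours."""
    scaled = points * np.array([1.0, 1.0, height_scale])
    seeds = scaled[farthest_point_seeds(points[:, :2], k, int(points[:, 2].argmax()))]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to the nearest seed in the scaled space
        d = np.linalg.norm(scaled[:, None, :] - seeds[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = scaled[labels == j]
            if len(members):
                seeds[j] = members.mean(axis=0)
    return labels


# Synthetic plot: two "trees" as Gaussian point blobs, 10 m apart
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([0, 0, 8], [1, 1, 2], (200, 3)),
                   rng.normal([10, 0, 12], [1, 1, 2], (200, 3))])
cloud = cloud[cloud[:, 2] >= 4.0]        # keep only points above the 4 m threshold
labels = kmeans_tree_clusters(cloud, k=2)
hull = ConvexHull(cloud[labels == 0])    # 3-D convex polytope of one tree
print(hull.volume)
```

On this synthetic cloud the two blobs separate cleanly; on real data the choice of scaling factor and seeding matters much more, which matches the abstract's note about parameter sensitivity.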

    Towards Urban Scene Semantic Segmentation with Deep Learning from LiDAR Point Clouds: A Case Study in Baden-Württemberg, Germany

    No full text
    An accurate understanding of urban objects is critical for urban modeling, intelligent infrastructure planning and city management. The semantic segmentation of light detection and ranging (LiDAR) point clouds is a fundamental approach for urban scene analysis. In recent years, several methods have been developed to segment urban furniture from point clouds. However, the traditional processing of large amounts of spatial data has become increasingly costly, both in time and money. Recently, deep learning (DL) techniques have been increasingly used for 3D segmentation tasks, yet most of these deep neural networks (DNNs) were evaluated only on benchmark datasets. It is therefore arguable whether DL approaches can achieve state-of-the-art performance when segmenting 3D point clouds from real-life scenarios. In this research, we apply an adapted DNN (ARandLA-Net) to directly process large-scale point clouds. In particular, we develop a new paradigm for training and validation that represents a typical urban scene in central Europe (Munzingen, Freiburg, Baden-Württemberg, Germany). Our dataset consists of nearly 390 million dense points acquired by Mobile Laser Scanning (MLS); it has a considerably larger number of sample points than existing datasets and includes meaningful object categories that are particular to smart-city and urban-planning applications. We assess the DNN on our dataset and investigate a number of key challenges, such as data preparation strategies, the benefit of color information and the unbalanced class distribution found in the real world. The final segmentation model achieved a mean Intersection-over-Union (mIoU) score of 54.4% and an overall accuracy of 83.9%. Our experiments indicate that different data preparation strategies influence model performance, and that additional RGB information yields an approximately 4% higher mIoU score. Our results also demonstrate that weighted cross-entropy with an inverse-square-root frequency loss led to better segmentation performance than the other losses considered.
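The loss weighting mentioned in the last sentence can be sketched as follows. The weight normalization and the toy class distribution are illustrative assumptions, not details taken from the paper; only the inverse-square-root-frequency idea comes from the abstract.

```python
import numpy as np


def inv_sqrt_freq_weights(labels, n_classes):
    """Class weights w_c proportional to 1/sqrt(f_c), where f_c is the relative
    frequency of class c; rescaled so the weights average to 1 (our choice)."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    w = 1.0 / np.sqrt(np.maximum(freq, 1e-12))  # guard against empty classes
    return w / w.sum() * n_classes


def weighted_cross_entropy(logits, labels, weights):
    """Per-point weighted CE: -w[y] * log softmax(logits)[y], averaged."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-(weights[labels] * logp[np.arange(len(labels)), labels]).mean())


# Toy example with a heavily unbalanced label distribution (3 classes)
rng = np.random.default_rng(1)
labels = rng.choice(3, size=1000, p=[0.9, 0.08, 0.02])
w = inv_sqrt_freq_weights(labels, 3)
logits = rng.normal(size=(1000, 3))
loss = weighted_cross_entropy(logits, labels, w)
print(w, loss)
```

Rare classes receive larger weights, so errors on under-represented urban categories contribute more to the loss than their raw frequency would suggest.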

    A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    Get PDF
    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the absolute heights of the ground objects, is generated from the original Lidar raw point cloud. The main tree canopy layers and their height ranges are detected by a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, trees are detected and delineated not only in the top canopy layer but also in the sub-canopy layer. The normalized points are resampled into a local voxel space, and a series of horizontal 2D projection images at different height levels is generated from the voxel space. Tree crown regions are detected in the projection images, and individual trees are then extracted by a pre-order forest traversal through all tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analysis of the 3D models of individual tree crowns, important parameters such as the crown height range, crown volume and crown contours at different height levels can be derived.
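The first step, detecting canopy layers from the height distribution of normalized points, can be illustrated with a simple histogram-threshold sketch. The bin width, the probability threshold and the contiguous-run rule are illustrative assumptions, not values or logic taken from the paper.

```python
import numpy as np


def detect_canopy_layers(heights, bin_width=1.0, min_prob=0.02):
    """Estimate the height distribution of normalized points and report
    contiguous height ranges whose probability exceeds a threshold --
    a simplified stand-in for a statistical layer analysis."""
    bins = np.arange(0.0, heights.max() + bin_width, bin_width)
    counts, edges = np.histogram(heights, bins=bins)
    prob = counts / counts.sum()
    layers, start = [], None
    for i, p in enumerate(prob):
        if p >= min_prob and start is None:
            start = edges[i]                 # a dense band begins
        elif p < min_prob and start is not None:
            layers.append((start, edges[i])) # the band ends
            start = None
    if start is not None:
        layers.append((start, edges[-1]))
    return layers


# Two-layer synthetic stand: understory around 3 m, main canopy around 15 m
rng = np.random.default_rng(2)
h = np.concatenate([rng.normal(3, 0.8, 2000), rng.normal(15, 1.5, 3000)])
h = h[h > 0]
print(detect_canopy_layers(h))
```

On this synthetic stand the procedure reports two separate height ranges, one for the understory and one for the main canopy, which is the kind of output the subsequent per-layer tree delineation would consume.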

    Algorithms for the Automatic Detection of Leaf Disturbances - A Geometric Feature Extraction for Assessing the Condition of Individual Leaves

    No full text
    Fast developments in the sector of unmanned aerial vehicles (UAVs) have opened new opportunities for tree assessment. Very high resolution imagery of trees can be acquired from low-cruising UAVs. With these high-resolution images and the help of different image processing methods, the state of single leaves can be estimated based on a geometric feature extraction of disturbances on the leaves. In this work, an efficient way to identify small disturbances and irritations on single leaves with the help of blob detection algorithms and different filters is proposed, using RGB images of maple leaves with disturbances and white blotches on their surface. These irritations can have various causes. Applying the algorithms mentioned above to gray-scale images leads to a geometric feature extraction of the white blotches, especially when using the red and green channels of the RGB images. Combining the red and green channels and applying filters to smooth the images increased the accuracy of the results. Furthermore, different characteristics of the blotches were used to eliminate false hits.
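A minimal sketch of the blotch-detection idea, assuming a plain threshold-plus-connected-components pipeline rather than the exact blob detectors used in the work: combine the red and green channels, smooth with a Gaussian filter, then keep only components that pass a geometric (area) check. All thresholds and the synthetic image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def detect_blotches(rgb, thresh=0.6, min_area=5):
    """Find bright blotches on a leaf: red+green combination, Gaussian
    smoothing, thresholding, connected-component labelling, and a simple
    geometric filter that discards tiny components as likely false hits."""
    gray = 0.5 * rgb[..., 0] + 0.5 * rgb[..., 1]   # red+green channel mix
    smooth = ndimage.gaussian_filter(gray, sigma=1.0)
    mask = smooth > thresh
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]
    return labels, keep


# Synthetic leaf: green background, two white blotches, one noise pixel
img = np.zeros((64, 64, 3))
img[..., 1] = 0.35                      # green leaf background
img[10:16, 10:16] = 1.0                 # blotch 1 (6x6 px)
img[40:45, 30:35] = 1.0                 # blotch 2 (5x5 px)
img[5, 50] = 1.0                        # single-pixel noise
labels, keep = detect_blotches(img)
print(len(keep))
```

The smoothing suppresses the single-pixel irritation and the area filter removes any remaining tiny components, so only the two real blotches survive.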