
    On the use of rapid-scan, low point density terrestrial laser scanning (TLS) for structural assessment of complex forest environments

    Forests fulfill an important role in natural ecosystems, e.g., they provide food, fiber, habitat, and biodiversity, all of which contribute to stable ecosystems. Assessing and modeling the structure and characteristics of forests can lead to a better understanding and management of these resources. Traditional methods for collecting forest traits, known collectively as “forest inventory”, rely on rough proxies, such as stem diameter, tree height, and foliar coverage; such parameters are limited in their ability to capture fine-scale structural variation in forest environments. It is in this context that terrestrial laser scanning (TLS) has come to the fore as a tool for addressing the limitations of traditional forest structure evaluation methods. However, TLS data processing methods still require improvement. In this work, we developed algorithms to assess the structure of complex forest environments (defined by their stem density, intricate root and stem structures, uneven-aged nature, and variable understory) using data collected by a low-cost, portable TLS system, the Compact Biomass Lidar (CBL). The objectives of this work are as follows: 1. assess the utility of terrestrial laser scanning (TLS) to accurately map elevation changes (sediment accretion rates) in mangrove forests; 2. evaluate forest structural attributes, e.g., stems and roots, in complex forest environments toward biophysical characterization of such forests; and 3. assess canopy-level structural traits (leaf area index; leaf area density) in complex forest environments to estimate biomass in rapidly changing environments.

    The low-cost system used in this research provides lower-resolution data, in terms of scan angular resolution and resulting point density, than higher-cost commercial systems. As a result, algorithms developed for evaluating data collected by such systems must be robust to issues caused by low-resolution 3D point cloud data. The data used in this work were collected from three mangrove forests on the western Pacific island of Pohnpei in the Federated States of Micronesia, as well as from tropical forests in Hawai’i, USA. Mangrove forests underpin the economy of this region, where more than half of the annual household income is derived from these forests. However, these mangrove forests are endangered by sea level rise, which necessitates an evaluation of their resilience to climate change in order to better protect and manage these ecosystems. This includes preserving positive sediment accretion rates and stimulating the processes of root growth, sedimentation, and peat development, all of which are influenced by the forest floor elevation relative to sea level. Currently, accretion rates are measured using surface elevation tables (SETs), posts permanently placed in mangrove sediments; the forest floor is measured annually with respect to the height of the SETs to evaluate changes in elevation (Cahoon et al. 2002).

    In this work, we evaluated the ability of the CBL system to measure such elevation changes, addressing objective #1. Digital Elevation Models (DEMs) were produced for each plot from the point cloud resulting from co-registering eight scans, spaced at 45 degree intervals. The DEMs were produced and refined using Cloth Simulation Filtering (CSF) and kriging interpolation. CSF was used because it minimizes the user input parameters, and kriging was chosen because its semivariogram-based analysis of the overall spatial arrangement of the points yields a more robust model; this interpolation step is sketched below.
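    As a concrete illustration of this interpolation step, the following minimal Python sketch produces a plot-level DEM from ground-classified returns using pykrige's ordinary kriging. It assumes the ground returns have already been isolated (e.g., by CSF); the input file name, grid spacing, and spherical variogram model are illustrative assumptions, not the exact settings used in this work.

```python
# Minimal sketch: plot-level DEM from TLS ground returns via ordinary kriging.
import numpy as np
from pykrige.ok import OrdinaryKriging

# ground_points.txt is a hypothetical x/y/z table of ground-classified returns.
ground_xyz = np.loadtxt("ground_points.txt")
x, y, z = ground_xyz[:, 0], ground_xyz[:, 1], ground_xyz[:, 2]

# Ordinary kriging weights points via a fitted semivariogram, so the DEM
# reflects the overall spatial arrangement of the ground returns.
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")

# Interpolate onto a regular grid over the plot footprint (1 cm spacing is
# illustrative; mm-level elevation change detection needs a fine surface).
gridx = np.arange(x.min(), x.max(), 0.01)
gridy = np.arange(y.min(), y.max(), 0.01)
dem, kriging_variance = ok.execute("grid", gridx, gridy)
```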
    The average consistency of the TLS-derived elevation change was 72%, with an RMSE of 1.36 mm. What truly makes the TLS method more tenable, however, is its standard error (SE), which is 10-70x lower than that of the manual method.

    To achieve our second objective, we assessed structural characteristics of the above-mentioned mangrove forests, as well as of tropical forests in Hawai’i, from data collected with the same CBL scanner. The same eight scans per plot (20 plots) were co-registered using pairwise registration and the Iterative Closest Point (ICP) algorithm. We then removed the higher canopy using a normal change rate assessment algorithm, and used a combination of geometric classification techniques, based on the angular orientation of planes fitted to points (facets), and machine learning 3D segmentation algorithms to detect tree stems and above-ground roots. Mangrove forests are complex environments containing above-ground root mass, which can confuse both ground detection and structural assessment algorithms. As a result, we trained a supporting classifier on the roots to identify root lidar returns that had been misclassified as stems. The accuracy and precision of this classifier were assessed via manual investigation of the classification results in all 20 plots: accuracy and precision were 82% and 77% for stem classification, and 76% and 68% for root detection, respectively. In the final step, we modeled the stems using alpha shapes in order to assess their volume. The consistency of the volume evaluation was found to be 85%, obtained by comparing the mean stem volume (m3/ha) from field data and from TLS data in each plot; the reported accuracy is the average over all 20 plots. Additionally, we compared the diameter-at-breast-height (DBH) recorded in the field with the TLS-derived DBH to obtain a direct measure of the precision of our stem models. DBH evaluation resulted in an accuracy of 74%, with an RMSE of 7.52 cm. This approach can be used for automatic stem detection and structural assessment in complex forest environments, and could contribute to biomass assessment in these rapidly changing environments.

    These stem and root structural assessment efforts were complemented by efforts to estimate canopy-level structural attributes of the tropical Hawai’i forest environment; we specifically estimated the leaf area index (LAI) by implementing a density-based approach. A total of 242 scans were collected using the portable low-cost TLS (CBL) at a Hawai’i Volcanoes National Park (HAVO) flux tower site. LAI was measured for all plots in the site using an AccuPAR LP-80 instrument. The first step involved detection of the higher canopy, again using normal change rate assessment. After segmenting the higher canopy from the lidar point clouds, we measured leaf area density (LAD) using a voxel-based approach: we divided the canopy point cloud into five layers in the Z direction, after which each layer was divided into voxels in the X direction. The sizes of these voxels were constrained based on interquartile analysis and the number of points in each voxel; the voxel bookkeeping is sketched below.
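    The voxel bookkeeping can be sketched in a few lines of numpy. The uniform cubic cell below is a simplification; the interquartile size constraint described above is not reproduced here.

```python
# Sketch: bin canopy returns into a regular 3D grid and compute per-voxel
# point density (points per unit volume).
import numpy as np

def voxel_point_densities(canopy_xyz: np.ndarray, cell: float = 0.1):
    """Return (per-voxel densities, per-point density of its voxel)."""
    origin = canopy_xyz.min(axis=0)
    # Integer voxel index of each point along x, y, z.
    vox_idx = np.floor((canopy_xyz - origin) / cell).astype(int)
    # Group points sharing a voxel; counts[k] is the population of voxel k.
    _, inverse, counts = np.unique(vox_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    density = counts / cell**3          # points per cubic meter, per voxel
    return density, density[inverse]    # second form maps back to points

# The five Z layers described above can be formed first, e.g.:
#   z = canopy_xyz[:, 2]
#   layer_id = np.digitize(z, np.linspace(z.min(), z.max(), 6)[1:-1])
```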
    We hypothesized that the power returned to the lidar system from woody materials, such as branches, exceeds that from leaves, due to the liquid water absorption of leaves and the higher reflectivity of woody material at the 905 nm lidar wavelength. We evaluated leafy and woody materials using images from projected point clouds and determined the density of these regions to support our hypothesis. The density of points was calculated in each voxel of a 3D grid with 0.1 m cells, a size determined by investigating the size of the branches in the lower portion of the higher canopy. Note that “density” in this work is defined as the total number of points per grid cell, divided by the volume of that cell. Subsequently, we fitted a kernel density estimator to these values and set the threshold at half of the area under the curve of each distribution (sketched below): grid cells with a density below the threshold were labeled as leaves, while cells with a density above the threshold were labeled as non-leaves. We then modeled the LAI using the point densities derived from the TLS point clouds, achieving an R2 value of 0.88. We also estimated LAI directly from the lidar data by using the point densities to calculate leaf area density (LAD), defined as the total one-sided leaf area per unit volume; LAI is then obtained as the sum of the LAD values over all voxels. The accuracy of this LAI estimation was found to be 90%. Since the LAI values cannot be considered spatially independent across all plots in this site, we performed a semivariogram analysis on the field-measured LAI data, which showed that LAI values can be assumed independent for plots at least 30 m apart. We therefore divided the data into six subsets, with the plots within each subset spaced at least 30 m apart. LAI model R2 values for these subsets ranged from 0.84 to 0.96. These results bode well for the automatic estimation of LAI in complex forest environments using a low-cost, low point density, rapid-scan TLS.
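    The leaf/wood thresholding step lends itself to a short sketch: fit a kernel density estimator to the per-voxel densities and split the distribution where half of the area under the fitted curve is accumulated. The function name and evaluation grid below are illustrative.

```python
# Sketch of the density-based leaf/wood labeling described above.
import numpy as np
from scipy.stats import gaussian_kde

def leaf_voxel_mask(voxel_density: np.ndarray) -> np.ndarray:
    """True for voxels labeled 'leaf' (density below the KDE threshold)."""
    kde = gaussian_kde(voxel_density)
    grid = np.linspace(voxel_density.min(), voxel_density.max(), 1024)
    pdf = kde(grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    threshold = grid[np.searchsorted(cdf, 0.5)]  # half the area under the curve
    return voxel_density < threshold

# LAI then follows as the sum of LAD over all voxels, with LAD computed from
# the leaf-labeled voxels (the leaf-area conversion is calibration-dependent
# and not shown here).
```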

    Generation of Horizontally Curved Driving Lines for Autonomous Vehicles Using Mobile Laser Scanning Data

    The development of autonomous vehicles demands tremendous advances in three-dimensional (3D) high-definition roadmaps, which can provide 3D positioning information with 10-20 cm accuracy. With the assistance of 3D high-definition roadmaps, the intractable autonomous driving problem is transformed into a solvable localization issue. Mobile Laser Scanning (MLS) systems can collect accurate, high-density 3D point clouds in road environments for generating such roadmaps. However, few studies have concentrated on driving line generation from 3D MLS point clouds for highly autonomous driving, particularly for accident-prone horizontal curves with ambiguous traffic situations and unclear visual cues. This thesis develops an effective method for semi-automated generation of horizontally curved driving lines using MLS data.

    The proposed methodology consists of three steps: road surface extraction, road marking extraction, and driving line generation. First, the points covering the road surface are extracted using curb-based road surface extraction algorithms that depend on both elevation and slope differences. Then, road markings are identified and extracted by a sequence of algorithms consisting of geo-referenced intensity image generation, multi-threshold road marking extraction, and statistical outlier removal. Finally, the conditional Euclidean clustering algorithm is employed, followed by a nonlinear least-squares curve-fitting algorithm, to generate the horizontally curved driving lines (this final stage is sketched below). A total of six test datasets acquired in Xiamen, China by a RIEGL VMX-450 system were used to evaluate the performance and efficiency of the proposed methodology. The experimental results demonstrate that the proposed road marking extraction algorithms achieve a recall of 90.89%, a precision of 93.04%, and an F1-score of 91.95%. Moreover, unmanned aerial vehicle (UAV) imagery with 4 cm resolution was used to validate the proposed driving line generation algorithms; the validation results demonstrate that horizontally curved driving lines can be effectively generated from MLS point clouds with 15 cm-level localization accuracy. Finally, a comparative study was conducted, both visually and quantitatively, to indicate the accuracy and reliability of the generated driving lines.
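    The final stage lends itself to a compact sketch under stated assumptions: DBSCAN stands in for the conditional Euclidean clustering used in the thesis, and each cluster of road-marking points is fitted with a circular arc by nonlinear least squares. The parameters and the synthetic input are illustrative.

```python
# Sketch: cluster road-marking points, then fit a circular arc per cluster.
import numpy as np
from scipy.optimize import least_squares
from sklearn.cluster import DBSCAN

def fit_circle(xy: np.ndarray) -> np.ndarray:
    """Least-squares circle (cx, cy, r) through 2D marking points."""
    def residuals(p):
        cx, cy, r = p
        return np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r
    p0 = np.array([xy[:, 0].mean(), xy[:, 1].mean(), 50.0])  # 50 m radius guess
    return least_squares(residuals, p0).x

# Synthetic stand-in for marking points extracted along a horizontal curve.
theta = np.linspace(0.0, 0.6, 400)
marking_xy = np.c_[100.0 * np.cos(theta), 100.0 * np.sin(theta)]
marking_xy += np.random.normal(scale=0.03, size=marking_xy.shape)

labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(marking_xy)
arcs = [fit_circle(marking_xy[labels == k]) for k in set(labels) if k != -1]
```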

    A Survey of Surface Reconstruction from Point Clouds

    The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work focused on reconstructing a piecewise-smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily the explicit geometry. We survey the field of surface reconstruction and provide a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.

    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in numerous applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, as well as 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains. In this dissertation, different variations of order statistic filters originally proposed for image processing are extended to point cloud filtering, and a new adaptive vector median filter is proposed for removing noise and outliers from noisy point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models as well as synthetic models and real scenes; 2) a hardware acceleration of the proposed method for filtering point clouds is implemented on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LIDAR data filtering is proposed, with the objective of enabling automatic extraction of ground points from aerial LIDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed.

    Median and order statistics-based filters are widely used in signal processing and image processing because they can easily remove outlier noise and preserve important features. This dissertation demonstrates a wide range of results with the median filter, vector median filter, fuzzy vector median filter, adaptive mean, adaptive median, and adaptive vector median filter on point cloud data (a minimal sketch of the vector median filter follows). The experiments show that large-scale noise is removed while preserving important features of the point cloud with reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)), as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud), are employed to assess the performance of the filters in various cases corrupted by different noise models. The adaptive vector median is further optimized for denoising, or ground filtering, aerial LIDAR point clouds, and is accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is utilized to compute the Laplace-Beltrami operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for geometric processing of 3D meshes; this work utilizes several of them for sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
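    For concreteness, a minimal (non-adaptive) vector median filter can be sketched as follows: each point is replaced by the neighbor in its k-nearest-neighbor window that minimizes the sum of distances to the other neighbors. The dissertation's adaptive variants tune the filter window, which is fixed here for brevity.

```python
# Minimal sketch of a vector median filter on a 3D point cloud.
import numpy as np
from scipy.spatial import cKDTree

def vector_median_filter(points: np.ndarray, k: int = 8) -> np.ndarray:
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)       # (N, k) neighbor indices
    filtered = np.empty_like(points)
    for i, idx in enumerate(nbr_idx):
        window = points[idx]                   # the k-point filter window
        # Pairwise distances inside the window; the vector median is the
        # member with the smallest aggregate distance to the others.
        d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
        filtered[i] = window[d.sum(axis=1).argmin()]
    return filtered
```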

    Reconstruction of Tubular Shapes from Point Clouds: Application to the Estimation of Forest Geometry

    The potential of remote sensing technologies has increased exponentially in recent years: new scanners now provide a geometric representation of their environment in the form of point clouds with unrivalled accuracy. Point cloud processing has thus become a discipline in its own right, with its own problems and many challenges to face. The core of this thesis concerns geometric modelling and introduces a fast and robust method for the extraction of tubular shapes from point clouds. We chose to test our methods in the difficult applicative context of forestry in order to highlight the robustness of our algorithms and their applicability to large data sets.

    Our methods integrate normal vectors as supplementary geometric information in order to achieve the performance necessary for processing large point clouds. However, remote sensing techniques do not commonly provide normal vectors, so they have to be computed. Our first development therefore consisted of a fast normal estimation method for point clouds, designed to reduce the computing time on large data sets; to do so, we locally approximated the point cloud geometry using smooth “patches” of points whose size adapts to the local complexity of the point cloud geometry (a baseline sketch follows this abstract). We then focused our work on the robust extraction of tubular shapes from dense, occluded, noisy point clouds with non-homogeneous sampling density. For this objective, we developed a variant of the Hough transform whose complexity is reduced thanks to the computed normal vectors. We then combined this research with a new definition of parametrisation-invariant active contours. This combination ensures the internal coherence of the reconstructed shapes and alleviates issues related to occlusion, noise, and variation of sampling density. We validated our method in complex forest environments with the reconstruction of tree stems, to emphasize its advantages and compare it to existing methods. Tree stem reconstruction also opens new perspectives halfway between forestry and geometry; one of them is the segmentation of trees in a forest plot. We therefore also propose a segmentation approach designed to overcome the defects of forest point clouds and capable of isolating objects inside a point cloud. Throughout this work we used modelling approaches to answer geometric questions and applied our methods to forestry problems. The result is a coherent processing pipeline adapted to forest point cloud analyses, although the general geometric algorithms we propose can also be applied in various contexts.
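    As a point of reference for the normal estimation step, the following sketch shows the standard local-PCA baseline that such methods build on; the adaptive-patch acceleration described above is not reproduced. The normal at a point is taken as the direction of least variance of its k nearest neighbors.

```python
# Baseline sketch: per-point normal estimation by local PCA.
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points: np.ndarray, k: int = 16) -> np.ndarray:
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, idx in enumerate(nbr_idx):
        centered = points[idx] - points[idx].mean(axis=0)
        # The right singular vector with the smallest singular value is the
        # direction of least variance, i.e., the estimated surface normal.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```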

    Edge detection in unorganized 3D point cloud

    The application of 3D laser scanning in the mining industry has increased progressively over the years. This presents an opportunity to visualize and analyze the underground world, potentially saving countless man-hours and reducing exposure to safety incidents. This thesis aims to detect the “Edges of the Rocks” in 3D point clouds collected via scanner; edge detection in point clouds is considered a difficult but meaningful problem. As a solution for noisy and unorganized 3D point clouds, a new method, the EdgeScan method, has been proposed and implemented to detect edges quickly and accurately from 3D point clouds for real-time systems. The EdgeScan method makes use of 2D edge processing techniques to represent the edge characteristics of the 3D point cloud with better accuracy (one plausible instantiation is sketched below). A comparison of the EdgeScan method with other common edge detection methods for 3D point clouds was carried out; the results suggest that the EdgeScan method provides better speed and accuracy, especially for large datasets in real-time systems.
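    The abstract does not spell out how the 2D techniques are applied, so the following sketch is one plausible instantiation offered as an assumption, not the published EdgeScan algorithm: render the scan as a spherical range image, detect edges with Canny, and keep the 3D points that project onto edge pixels. Image size and Canny thresholds are illustrative.

```python
# Sketch: 2D edge detection on a range-image projection of a 3D scan.
import numpy as np
import cv2

def range_image_edges(points: np.ndarray, h: int = 256, w: int = 1024):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    el = np.arcsin(z / np.maximum(rng, 1e-9))       # elevation angle
    u = ((az + np.pi) / (2.0 * np.pi) * (w - 1)).astype(int)
    v = ((el - el.min()) / (el.max() - el.min() + 1e-9) * (h - 1)).astype(int)
    img = np.zeros((h, w), np.float32)
    img[v, u] = rng                                 # last-point-wins, kept simple
    img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img8, 50, 150)                # 2D edge map of the range image
    return points[edges[v, u] > 0]                  # 3D points on detected edges
```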

    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features, and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline through novel algorithms for consistent-density scanning and automated information extraction for building interiors. A restricted density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing laborious post-processing of the data sets. The research further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data), and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that can handle cohesive point clouds via adaptive simplification and accurate layout extraction without generating an intermediate model. Three further information extraction algorithms are presented that transform point clouds into meaningful clusters; their novelty lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented that quickly identifies objects in the scanned scene using a robust hue, saturation, and value (HSV) color model for better scene understanding (this color-assisted clustering idea is sketched after this abstract). Second, a hierarchical clustering algorithm is developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects; shape-adaptive parameters help to segment planar as well as complex interiors, while combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters in geometrically similar regions. Finally, a progressive scan-line-based, side-ratio-constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
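    The color-assisted clustering idea can be sketched as follows, with DBSCAN standing in for the thesis's rapid clustering algorithm and an assumed weighting between position and hue; the parameters are illustrative, and hue circularity is ignored for brevity.

```python
# Sketch: joint position + hue clustering of a colored interior scan.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import DBSCAN

def hsv_spatial_clusters(xyz: np.ndarray, rgb: np.ndarray,
                         hue_weight: float = 2.0) -> np.ndarray:
    """Label points by clustering on (x, y, z, weighted hue); rgb in [0, 1]."""
    hsv = rgb_to_hsv(rgb)                                 # (N, 3) -> (N, 3)
    features = np.hstack([xyz, hue_weight * hsv[:, :1]])  # position + hue
    return DBSCAN(eps=0.1, min_samples=30).fit_predict(features)
```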

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles, as well as building maps of the environment and localizing therein. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGBD cameras, and comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks, such as objects being on horizontal support surfaces (and well separated), and achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data (the support-plane assumption is sketched after this abstract). The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders, and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real time.

    The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data. For the particular application of mapping with micro aerial vehicles, where the 3D input data are particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology, together with a surface-to-surface registration algorithm that robustly aligns the data. Optimization of the resulting graph of determined view poses then yields globally consistent 3D maps. For sequences of RGBD data, this pipeline is extended to include additional subsampling steps and an initial alignment of the data in local windows in the pose graph. In both cases, comparative evaluations show a robust and fast alignment of the input data.
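    The support-plane assumption translates almost directly into Open3D calls. The sketch below is a minimal stand-in, not the thesis's pipeline: the file name and thresholds are illustrative, and the efficiency-oriented approximations that make the original real-time are not reproduced.

```python
# Sketch: RANSAC support-plane removal, then Euclidean clustering of objects.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("tabletop_scene.pcd")  # hypothetical RGBD capture

# 1. Dominant plane via RANSAC, standing in for the horizontal support surface.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)

# 2. Everything off the plane is an object candidate resting on it.
objects = pcd.select_by_index(inliers, invert=True)

# 3. Euclidean clustering; "well separated" makes the labels unambiguous.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=20))
print(f"{labels.max() + 1} object clusters found")
```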

    Semantic segmentation of outdoor scenes using LIDAR point clouds

    In this paper we present a novel street scene semantic recognition framework, which takes advantage of 3D point clouds captured by a high-definition LiDAR laser scanner. An important problem in object recognition is the need for sufficient labeled training data to learn robust classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by reducing scene complexity using unsupervised ground and building segmentation. Our system first automatically segments the ground point cloud, because the ground connects almost all other objects; a connected-component-based algorithm is then used to over-segment the point clouds. Next, building facades are detected using binary range image processing. The remaining point cloud is grouped into voxels, which are then transformed into supervoxels. Local 3D features extracted from the supervoxels are classified by trained boosted decision trees and labeled with semantic classes, e.g., tree, pedestrian, car (a sketch of this stage follows). Given a labeled 3D point cloud and a 2D image with known viewing camera pose, the proposed association module aligns collections of 3D points with groups of 2D image pixels to parse 2D cubic images. One noticeable advantage of our method is its robustness to different lighting conditions, shadows, and city landscapes. The proposed method is evaluated both quantitatively and qualitatively on a challenging fixed-position Terrestrial Laser Scanning (TLS) Velodyne dataset and on Mobile Laser Scanning (MLS) NAVTEQ True databases. Robust scene parsing results are reported.
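    The feature-and-classifier stage can be sketched under stated assumptions: classical covariance eigenvalue features (linearity, planarity, scattering) are one common choice for the "local 3D features" mentioned above, and scikit-learn's boosted trees stand in for the trained boosted decision trees. X_labeled and y_labeled are hypothetical labeled training data.

```python
# Sketch: eigenvalue shape features per neighborhood + boosted decision trees.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import GradientBoostingClassifier

def shape_features(points: np.ndarray, k: int = 30) -> np.ndarray:
    """Per-point covariance eigenvalue features from k-NN neighborhoods."""
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, idx in enumerate(nbr_idx):
        evals = np.linalg.eigvalsh(np.cov(points[idx].T))  # ascending order
        l3, l2, l1 = evals
        l1 = max(l1, 1e-12)
        feats[i] = [(l1 - l2) / l1,   # linearity  (e.g., poles, trunks)
                    (l2 - l3) / l1,   # planarity  (e.g., facades, ground)
                    l3 / l1]          # scattering (e.g., vegetation)
    return feats

clf = GradientBoostingClassifier()
# clf.fit(shape_features(X_labeled), y_labeled)  # labels: tree, pedestrian, car
```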