19 research outputs found

    Improved supervised learning-based approach for leaf and wood classification from LiDAR point clouds of forests

    Accurately classifying 3-D point clouds into woody and leafy components has been of interest for applications in forestry and ecology, including a better understanding of radiation transfer between canopy and atmosphere. The past decade has seen an increase in methods that attempt to classify leaves and wood in point clouds based on radiometric or geometric features. However, classification based purely on radiometric features is sensor-specific, and the way the local neighborhood of a point is defined affects the accuracy of classification based on geometric features. Here, we present a leaf-wood classification method combining geometric features, defined by radially bounded nearest neighbors at multiple spatial scales, in a machine learning model. We compared the performance of three machine learning models generated by the random forest (RF), XGBoost, and LightGBM algorithms. Using multiple spatial scales eliminates the need to select an optimal neighborhood size, and defining the local neighborhood by radially bounded nearest neighbors makes the method broadly applicable to point clouds of varying quality. We assessed model performance at the individual-tree and plot level on field data from tropical and deciduous forests, as well as on simulated point clouds. The method has an overall average accuracy of 94.2% on our data sets. For other data sets, the presented method outperformed the methods in the literature in most cases, without the need for the additional postprocessing steps that most existing methods require. We provide the entire framework as an open-source Python package.
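
    The released package is not reproduced here, but a minimal sketch of the general idea follows: eigenvalue-based geometric features are computed from radially bounded neighborhoods at several spatial scales and fed to a random forest. The radii, feature set, and helper names are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.spatial import cKDTree
        from sklearn.ensemble import RandomForestClassifier

        def geometric_features(points, radii=(0.1, 0.25, 0.5)):
            """Eigenvalue-based shape features from radially bounded neighborhoods
            at multiple spatial scales (radii in metres are illustrative)."""
            tree = cKDTree(points)
            feats = []
            for r in radii:
                scale = np.zeros((len(points), 3))
                for i, idx in enumerate(tree.query_ball_point(points, r)):
                    nbrs = points[idx]
                    if len(nbrs) < 3:
                        continue
                    # Eigenvalues of the local covariance describe neighborhood shape
                    evals = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]
                    evals = np.clip(evals, 1e-12, None)
                    l1, l2, l3 = evals / evals.sum()
                    scale[i] = ((l1 - l2) / l1,   # linearity: high for branches and stems
                                (l2 - l3) / l1,   # planarity: high for flat surfaces
                                l3 / l1)          # scattering: high for leafy volumes
                feats.append(scale)
            return np.hstack(feats)

        # Toy data standing in for a labeled point cloud (0 = leaf, 1 = wood)
        rng = np.random.default_rng(0)
        xyz = rng.uniform(0.0, 2.0, size=(2000, 3))
        labels = rng.integers(0, 2, size=2000)
        clf = RandomForestClassifier(n_estimators=100).fit(geometric_features(xyz), labels)
        print(clf.predict(geometric_features(xyz[:5])))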

    Leaf and wood classification framework for terrestrial LiDAR point clouds

    Leaf and wood separation is a key step to enable a new range of estimates from terrestrial LiDAR data, such as quantifying above-ground biomass, leaf and wood area, and their 3D spatial distributions. We present a new method to separate leaf and wood from single-tree point clouds automatically. Our approach combines unsupervised classification of geometric features and shortest path analysis. The automated separation algorithm and its intermediate steps are presented and validated. Validation used a testing framework with synthetic point clouds, simulated using ray tracing and 3D tree models, and 10 field-scanned tree point clouds. To evaluate results we calculated accuracy, the kappa coefficient, and F-score. Validation using simulated data resulted in an overall accuracy of 0.83, ranging from 0.71 to 0.94. Per-tree average accuracy from synthetic data ranged from 0.77 to 0.89. Field data results presented an overall average accuracy of 0.89. Analysis of each step showed accuracy ranging from 0.75 to 0.98. F-scores from both simulated and field data were similar, with scores for leaf usually higher than those for wood. Our separation method showed results similar to others in the literature, albeit from a completely automated workflow. Analysis of each separation step suggests that the addition of path analysis improved the robustness of our algorithm. Accuracy can be improved with per-tree parameter optimization. The library containing our separation script can be easily installed and applied to single-tree point clouds. Average processing times are below 10 min per tree.
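
    The published workflow is not reproduced here; the sketch below only illustrates the shortest-path idea under stated assumptions: a k-nearest-neighbor graph is built over the cloud, shortest paths are traced from every point back to the lowest point (taken as the tree base), and points traversed by many paths are flagged as wood-like. The value of k, the percentile threshold, and the function name are assumptions.

        import numpy as np
        from scipy.sparse.csgraph import dijkstra
        from sklearn.neighbors import kneighbors_graph

        def path_frequency(xyz, k=10):
            """Count how often each point lies on a shortest path from any point
            back to the tree base; frequently traversed points tend to be wood."""
            graph = kneighbors_graph(xyz, n_neighbors=k, mode="distance")
            base = int(np.argmin(xyz[:, 2]))              # lowest point taken as the base
            _, pred = dijkstra(graph, directed=False, indices=base,
                               return_predecessors=True)
            counts = np.zeros(len(xyz))
            for i in range(len(xyz)):
                node = i
                while node != base and pred[node] >= 0:   # walk back towards the base
                    node = pred[node]
                    counts[node] += 1
            return counts

        rng = np.random.default_rng(0)
        cloud = rng.uniform(0.0, 5.0, size=(500, 3))      # toy single-tree point cloud
        freq = path_frequency(cloud)
        wood_like = freq > np.percentile(freq, 90)        # crude threshold, illustrative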

    Semi-automatic extraction of liana stems from terrestrial LiDAR point clouds of tropical rainforests

    Lianas are key structural elements of tropical forests, having a large impact on the global carbon cycle by reducing tree growth and increasing tree mortality. Despite the reported increasing abundance of lianas across the Neotropics, very few studies have attempted to quantify the impact of lianas on tree and forest structure. Recent advances in high-resolution terrestrial laser scanning (TLS) systems have enabled us to quantify forest structure in unprecedented detail. However, the uptake of TLS technology to study lianas has not kept pace with that for trees, owing to the lack of methods suited to these complex growth forms. In this study, we present a semi-automatic method to extract liana woody components from plot-level TLS data of a tropical rainforest. We tested the method in eight plots from two different tropical rainforest sites (two in Gigante Peninsula, Panama and six in Nouragues, French Guiana) along an increasing gradient of liana infestation (from plots with low liana density to plots with very high liana density). Our method uses a machine learning model based on the Random Forest (RF) algorithm. The RF algorithm is trained on eigen features extracted from the 3D points at multiple spatial scales. The RF-based liana stem extraction method successfully extracts on average 58% of liana woody points in our dataset, with a high precision of 88%. We also present simple post-processing steps that increase the percentage of extracted liana stems from 54% to 90% in Nouragues and from 65% to 70% in Gigante Peninsula without compromising precision. We provide the entire processing pipeline as an open-source Python package. Our method will facilitate new research on lianas, as it enables monitoring of liana abundance, growth and biomass in forest plots. In addition, the method makes it easier to process 3D data for studying tree structure in a liana-infested forest.
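
    The paper's actual post-processing steps are not detailed in the abstract; as one hedged illustration of what such a step could look like, the sketch below clusters the points predicted as liana and discards very small clusters, on the assumption that isolated predictions are likely false positives. The eps, min_samples, and size threshold are placeholder values, not the authors' settings.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def filter_small_clusters(candidate_xyz, eps=0.05, min_points=50):
            """Cluster candidate liana points and drop tiny clusters, which are more
            likely isolated misclassifications than continuous stems."""
            labels = DBSCAN(eps=eps, min_samples=5).fit_predict(candidate_xyz)
            keep = np.zeros(len(candidate_xyz), dtype=bool)
            for lab in np.unique(labels):
                if lab == -1:
                    continue                               # DBSCAN noise is discarded
                members = labels == lab
                if members.sum() >= min_points:
                    keep |= members
            return candidate_xyz[keep]

        rng = np.random.default_rng(0)
        candidates = rng.uniform(0.0, 1.0, size=(300, 3))  # toy candidate liana points
        stems = filter_small_clusters(candidates)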

    LiDAR-derived digital holograms for automotive head-up displays.

    A holographic automotive head-up display was developed to project 2D and 3D ultra-high-definition (UHD) images using LiDAR data in the driver's field of view. The LiDAR data were collected with a 3D terrestrial laser scanner and converted to computer-generated holograms (CGHs). The reconstructions were obtained with a HeNe laser and a UHD spatial light modulator with a panel resolution of 3840×2160 px for replay field projections. By decreasing the focal distance of the CGHs, the zero-order spot was diffused into the holographic replay field image. 3D holograms were observed floating as a ghost image at a variable focal distance by encoding a digital Fresnel lens into the CGH and using a concave lens. This project was funded by the EPSRC Centre for Doctoral Training in Connected Electronic and Photonic Systems (CEPS) (EP/S022139/1), Project Reference: 2249444.
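
    The paper's CGH algorithm is not reproduced here; the sketch below only illustrates one common way to turn a point cloud into a phase-only Fresnel hologram: rasterise the points into a target amplitude image, back-propagate it to the hologram plane with an inverse FFT and a random initial phase, and multiply in a digital Fresnel lens term. The pixel pitch, focal distance, and function names are assumptions; the HeNe wavelength and SLM resolution come from the abstract.

        import numpy as np

        WAVELENGTH = 632.8e-9     # HeNe laser wavelength (from the abstract)
        PITCH = 3.74e-6           # assumed SLM pixel pitch in metres
        FOCAL = 0.5               # assumed focal distance of the digital Fresnel lens

        def point_cloud_to_image(xyz, shape):
            """Rasterise (x, y) point positions into a target amplitude image."""
            img = np.zeros(shape)
            x = ((xyz[:, 0] - xyz[:, 0].min()) / np.ptp(xyz[:, 0]) * (shape[1] - 1)).astype(int)
            y = ((xyz[:, 1] - xyz[:, 1].min()) / np.ptp(xyz[:, 1]) * (shape[0] - 1)).astype(int)
            img[y, x] = 1.0
            return img

        def fresnel_cgh(target, rng):
            """Phase-only hologram: random initial phase, inverse FFT to the hologram
            plane, plus a digital Fresnel lens term to shift the replay field."""
            h, w = target.shape
            field = target * np.exp(2j * np.pi * rng.random(target.shape))
            holo = np.fft.ifft2(np.fft.ifftshift(field))
            yy, xx = np.mgrid[-h // 2:h // 2, -w // 2:w // 2] * PITCH
            lens = np.exp(-1j * np.pi * (xx**2 + yy**2) / (WAVELENGTH * FOCAL))
            return np.angle(holo * lens)   # phase pattern to display on the SLM

        rng = np.random.default_rng(1)
        points = rng.normal(size=(5000, 3))             # toy stand-in for a LiDAR cloud
        # A 3840x2160 frame matches the SLM in the abstract; a smaller grid is used here
        phase = fresnel_cgh(point_cloud_to_image(points, (270, 480)), rng)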

    Non-intersecting leaf insertion algorithm for tree structure models

    We present an algorithm and an implementation to insert broadleaves or needleleaves into a quantitative structure model according to an arbitrary distribution, and a data structure to store the required information efficiently. A structure model contains the geometry and branching structure of a tree. The purpose of the work is to offer a tool for making more realistic simulations with tree models with leaves, particularly for tree models developed from terrestrial laser scanning (TLS) measurements. We demonstrate leaf insertion using cylinder-based structure models, but the associated software implementation is written in a way that enables the easy use of other types of structure models. Distributions controlling leaf location, size and angles, as well as the shape of individual leaves, are user-definable, allowing any type of distribution. The leaf generation process consists of two stages: the first generates individual leaf geometry following the input distributions, while the second prevents intersections by applying transformations when required. Initial testing was carried out on English oak trees to demonstrate the approach and to assess the required computational resources. Depending on the size and complexity of the tree, leaf generation takes between 6 and 18 minutes. Various leaf area density distributions were defined, and the resulting leaf covers were compared to manual leaf harvesting measurements. The results are not conclusive, but they show great potential for the method. In the future, if our method is demonstrated to work well for TLS data from multiple tree types, the approach is likely to be very useful for 3D structure and radiative transfer simulation applications, including remote sensing, ecology and forestry, among others.
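
    The published implementation is not reproduced here; the sketch below only illustrates the two-stage idea in a heavily simplified form, where each leaf is approximated by a bounding sphere: stage one proposes a leaf centre near an attachment point following a simple offset distribution, and stage two re-proposes it when it would overlap an already accepted leaf. Leaf shape, the angle distributions, and the function name are assumptions.

        import numpy as np

        def insert_leaves(attachment_points, leaf_radius=0.03, max_tries=10, rng=None):
            """Two-stage sketch: (1) propose a leaf centre near each attachment point,
            (2) accept it only if its bounding sphere does not intersect any
            previously accepted leaf; otherwise re-propose up to max_tries times."""
            rng = rng or np.random.default_rng(0)
            accepted = []
            for p in attachment_points:
                for _ in range(max_tries):
                    centre = p + rng.normal(scale=leaf_radius, size=3)      # stage 1
                    clear = all(np.linalg.norm(centre - q) > 2 * leaf_radius
                                for q in accepted)                           # stage 2
                    if clear:
                        accepted.append(centre)
                        break
            return np.array(accepted)

        rng = np.random.default_rng(1)
        twigs = rng.uniform(0.0, 1.0, size=(200, 3))   # toy leaf attachment points
        leaves = insert_leaves(twigs, rng=rng)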

    HIRIS (High-Resolution Imaging Spectrometer): Science opportunities for the 1990s. Earth Observing System, Volume 2C: Instrument panel report

    The High-Resolution Imaging Spectrometer (HIRIS) is an Earth Observing System (EOS) sensor developed for high spatial and spectral resolution. It can acquire more information in the 0.4 to 2.5 micrometer spectral region than any other sensor yet envisioned. Its capability for critical sampling at high spatial resolution makes it an ideal complement to MODIS (Moderate-Resolution Imaging Spectrometer) and HMMR (High-Resolution Multifrequency Microwave Radiometer), lower-resolution sensors designed for repetitive coverage. With HIRIS it is possible to observe transient processes in a multistage remote sensing strategy for Earth observations on a global scale. The objectives, science requirements, and current sensor design of HIRIS are discussed, along with the sensor's synergism with other EOS instruments and the data handling and processing requirements.

    Técnicas y usos en la clasificación automática de imágenes [Techniques and uses in automatic image classification]

    The production and generation of visual information through mobile phones and cameras is enormous, and even more so through remote sensing: the acquisition of images of the Earth's surface by planes, spacecraft and satellites that capture and deliver data on meteorology, oceanography, geology, geography, geolocation, security, and so on. These image capture instruments generate visual information every day that cannot be processed manually, which is why various techniques and methods are used for the automatic extraction of useful knowledge. This literature review aims to understand the techniques and uses of automatic image classification. To do this, the Scopus and WoS databases were used to locate documents on the automatic classification of images published between 2008 and 2018. The full texts of the resulting records were retrieved and a content analysis was carried out to identify the most recurrent techniques and their applications. The results show that the three most commonly used techniques for automatic image classification are decision trees, neural networks and support vector machines, applied to a wide variety of tasks such as automating repetitive processes, complex inspection and surveillance, urban control and development, and recognition and assessment after natural disasters, among others.
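
    As a hedged illustration of the three technique families the review identifies, the sketch below compares a decision tree, a small neural network, and a support vector machine on scikit-learn's bundled 8x8 digits images; the dataset and hyperparameters are placeholders, not anything used in the reviewed studies.

        from sklearn.datasets import load_digits
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Small bundled image dataset (8x8 digits) standing in for real imagery
        X, y = load_digits(return_X_y=True)
        models = {
            "decision tree": DecisionTreeClassifier(),
            "neural network": MLPClassifier(max_iter=500),
            "support vector machine": SVC(),
        }
        for name, model in models.items():
            score = cross_val_score(model, X, y, cv=3).mean()
            print(f"{name}: mean accuracy {score:.2f}")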
