53 research outputs found

    Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data

    Research on forest ecosystems receives high attention, especially nowadays with regard to sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also in the scope of commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive. For detailed analysis, trees have to be cut down, which is often undesirable. Here, Terrestrial Laser Scanning (TLS) provides a particularly attractive tool because of its contactless measurement technique. The object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparatively high stand density and the many occlusions that result from it. The varying level of detail of TLS data poses a major challenge. We present two fully automatic methods with complementary properties to obtain skeletal structures from scanned trees. First, we explain a method that retrieves the entire tree skeleton from the 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk. The trunk is determined in advance from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited in the second method, which processes individual scans. Furthermore, we introduce a novel concept to manage TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure to retrieve the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into subcomponents.
A Principal Curve is computed from the 3D point set that is associated with a subcomponent. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we are not able to assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results, and we provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full skeletons of trees, which approximate the branching structure. The level of detail is mainly governed by the voxel space; therefore, smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. The method is sensitive to noise in the boundary, but the results are very promising, and there are plenty of possibilities to enhance the method's robustness. The combination of the strengths of both presented methods needs to be investigated further and may lead to a robust way to obtain complete tree skeletons from TLS data automatically.
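The voxel-based path search described in the first method (branch tips traced back to the trunk through a voxel-space representation, yielding a 3D line graph) can be sketched roughly as follows. This is a minimal illustration under assumed choices (26-connectivity, a BFS spanning tree rooted at the trunk base, tips taken as leaves of that tree), not the thesis's actual algorithm:

```python
from collections import deque

def voxelize(points, voxel_size):
    """Map 3D points to the set of occupied voxel indices."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def neighbors(v):
    """26-connected neighborhood of a voxel index."""
    x, y, z = v
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) != (0, 0, 0):
                    yield (x + dx, y + dy, z + dz)

def skeleton_paths(points, root_point, voxel_size=0.1):
    """BFS from the trunk-base voxel over occupied voxels; trace each tip
    (a leaf of the BFS tree) back to the root via parent pointers,
    yielding skeleton polylines in voxel coordinates."""
    occ = voxelize(points, voxel_size)
    root = tuple(int(c // voxel_size) for c in root_point)
    parent = {root: None}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for n in neighbors(v):
            if n in occ and n not in parent:
                parent[n] = v
                queue.append(n)
    # tips are reached voxels that are no other voxel's parent
    parents = {p for p in parent.values() if p is not None}
    tips = [v for v in parent if v not in parents]
    paths = []
    for t in tips:
        path = []
        while t is not None:
            path.append(t)
            t = parent[t]
        paths.append(path[::-1])  # root-to-tip order
    return paths
```

Each returned polyline runs from the trunk base to one branch tip; a real implementation would additionally smooth the paths and merge shared prefixes into the branching graph.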

    Artificial intelligence-based software (AID-FOREST) for tree detection: A new framework for fast and accurate forest inventorying using LiDAR point clouds

    Forest inventories are essential to accurately estimate different dendrometric and forest stand parameters. However, classical forest inventories are time-consuming, slow to conduct, sometimes inaccurate, and costly. To address this problem, an efficient alternative approach has been designed to make this type of field work cheaper, faster, more accurate, and easier to complete. The implementation of this concept required the development of specifically designed software called "Artificial Intelligence for Digital Forest (AID-FOREST)", which is able to process point clouds obtained via mobile terrestrial laser scanning (MTLS) and then provide an array of useful and accurate dendrometric and forest stand parameters. Singular characteristics of this approach are: no data pre-processing nor pre-treatment of the forest stand is required; the process is fully automatic once launched; there is no limitation on the size of the point cloud file; and computations are fast. To validate AID-FOREST, the results provided by this software were compared against those obtained from in-situ classical forest inventories. To guarantee the soundness and generality of the comparison, different tree species, plot sizes, and tree densities were measured and analysed. A total of 76 plots (10,887 trees) were selected to conduct both a classical forest inventory (the reference method) and an MTLS (ZEB-HORIZON, Geoslam Ltd.) scan to obtain point clouds for AID-FOREST processing, known as the MTLS-AIDFOREST method. We then compared the data collected by both methods, estimating the average number of trees and diameter at breast height (DBH) for each plot. Moreover, 71 additional individual trees were scanned with MTLS, processed by AID-FOREST, and then felled and divided into logs measuring 1 m in length.
This allowed us to accurately measure the DBH, total height, and total volume of the stems. When we compared the results obtained with each methodology, the mean detectability was 97% and ranged from 81.3 to 100%, with a bias (underestimation by the MTLS-AIDFOREST method) in the number of trees per plot of 2.8% and a relative root-mean-square error (RMSE) of 9.2%. Species, plot size, and tree density did not significantly affect detectability. However, this parameter was significantly affected by the ecosystem visual complexity index (EVCI). The average DBH per plot was underestimated by MTLS-AIDFOREST (but the bias was not significantly different from 0), the average bias for pooled data being 1.8% with an RMSE of 7.5%. Similarly, there were no statistically significant differences between the two distribution functions of the DBH at the 95.0% confidence level. Regarding individual tree parameters, MTLS-AIDFOREST underestimated DBH by 0.16% (RMSE = 5.2%) and overestimated stem volume (Vt) by 1.37% (RMSE = 14.3%, although the bias was not statistically significantly different from 0). However, the MTLS-AIDFOREST method overestimated the total height (Ht) of the trees by a mean of 1.33 m (5.1%; relative RMSE = 11.5%), because of the different height concepts measured by the two methodological approaches. Finally, AID-FOREST required 30 to 66 min to fully automatically process the point cloud data from the *.las file corresponding to a given one-hectare plot. Thus, applying our MTLS-AIDFOREST methodology to full forest inventories required 57.3% of the time needed to perform classical plot forest inventories (excluding the data post-processing time in the latter case). A free trial of AID-FOREST can be requested at [email protected].
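The plot-level comparison statistics quoted above (relative bias and relative RMSE of MTLS-AIDFOREST estimates against field-reference values) follow standard definitions, sketched here for tree counts or mean DBH per plot; function and variable names are illustrative, not taken from the software:

```python
import math

def bias_and_rmse(reference, estimated):
    """Relative bias and relative RMSE (in %) of estimates vs. the field
    reference, expressed relative to the mean reference value.
    Positive bias means overestimation by the estimated method."""
    n = len(reference)
    diffs = [e - r for r, e in zip(reference, estimated)]
    mean_ref = sum(reference) / n
    rel_bias = (sum(diffs) / n) / mean_ref * 100.0
    rel_rmse = math.sqrt(sum(d * d for d in diffs) / n) / mean_ref * 100.0
    return rel_bias, rel_rmse
```

For example, plot counts of 90 and 110 trees against a reference of 100 trees in each plot give zero bias but a 10% relative RMSE, which is why both figures are reported in the abstract.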

    DETECTION OF TREES IN TERRESTRIAL LASER SCANNING POINT CLOUDS

    The use of terrestrial laser scanning for surveys in forest stands aims to provide data for three-dimensional modelling of trees; however, before such models can be applied, the points belonging to trees must be detected in the scan. This study proposes a method for detecting trees in 3D point clouds of forest plantations. First, the spatial distribution of the trees is reconstructed by applying a segmentation algorithm to a cross-section (1 metre) of the point cloud. Next, an algorithm is presented to detect the position of the trees based on the row-alignment pattern of the stand. Finally, the results obtained are presented to the user of the point cloud for validation. The method was tested on circular plots established in Eucalyptus spp. stands surveyed with single and multiple scans. The results showed the need to use multiple TLS stations to reduce the occlusion effect when surveying circular plots. Applying the tree detection method together with visual analysis resulted in the identification of 100% of the trees in the point clouds of the plots.
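The segmentation of the 1 m cross-section into stem candidates can be illustrated with a simple single-linkage clustering over a distance threshold. This is a hedged sketch of the general idea, not the paper's specific algorithm; the `max_gap` threshold is an assumption:

```python
import math
from collections import deque

def cluster_slice(points_2d, max_gap=0.2):
    """Group the 2D points of a horizontal stem-height slice into clusters,
    each cluster being a stem candidate. Points closer than max_gap (metres)
    are linked transitively (single-linkage via BFS)."""
    unvisited = set(range(len(points_2d)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue = deque([seed])
        member_ids = [seed]
        while queue:
            i = queue.popleft()
            xi, yi = points_2d[i]
            near = [j for j in unvisited
                    if math.hypot(points_2d[j][0] - xi,
                                  points_2d[j][1] - yi) <= max_gap]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                member_ids.append(j)
        clusters.append([points_2d[i] for i in member_ids])
    return clusters
```

Each cluster centroid would then be tested against the expected planting-row alignment of the stand to accept or reject it as a tree position.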

    Estimation of canopy structure and individual trees from laser scanning data

    During the last fifteen years, laser scanning has emerged as a data source for forest inventory. Airborne laser scanning (ALS) provides 3D data, which may be used in an automated analysis chain to estimate vegetation properties for large areas. Terrestrial laser scanning (TLS) data are highly accurate 3D ground-based measurements, which may be used for detailed 3D modeling of vegetation elements. The objective of this thesis is to further develop methods to estimate forest information from laser scanning data. The aims are to estimate lists of individual trees from ALS data with an accuracy comparable to area-based methods, to collect detailed field reference data using TLS, and to estimate canopy structure from ALS data. The studies were carried out in boreal and hemi-boreal forests in Sweden. Tree crowns were delineated in three dimensions with a model-based clustering approach. Model-based clustering identified more trees than delineation of a surface model, especially small trees below the dominant tree layer. However, it also resulted in more erroneously delineated tree crowns. Individual trees were estimated with statistical methods from ALS data based on field-measured trees to obtain unbiased results at the area level. The accuracy of the estimates was similar for delineation of a surface model (stem density root-mean-square error (RMSE) 32.0%, bias 1.9%; stem volume RMSE 29.7%, bias 3.8%) and for model-based clustering (stem density RMSE 33.3%, bias 1.1%; stem volume RMSE 22.0%, bias 2.5%). Tree positions and stem diameters were estimated from TLS data with an automated method. Stem attributes were then estimated from ALS data trained with trees found from TLS data. The accuracy (diameter at breast height (DBH) RMSE 15.4%; stem volume RMSE 34.0%) was almost the same as when trees from a manual field inventory were used as training data (DBH RMSE 15.1%; stem volume RMSE 34.5%). Canopy structure was estimated from discrete-return and waveform ALS data.
New models were developed based on the Beer-Lambert law to relate canopy volume to the fraction of laser light reaching the ground. Waveform ALS data (canopy volume RMSE 27.6%) described canopy structure better than discrete-return ALS data (canopy volume RMSE 36.5%). The methods may be used to estimate canopy structure for large areas.
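A Beer-Lambert-type model of the kind mentioned above relates the fraction of laser pulses reaching the ground (the gap fraction) to canopy volume. A minimal sketch, assuming the simple form P = exp(-k·V) with a hypothetical extinction coefficient k; the thesis's actual models may differ:

```python
import math

def canopy_volume_from_gap_fraction(p_ground, k=0.5):
    """Invert a Beer-Lambert-type attenuation model P = exp(-k * V):
    given the fraction p_ground of pulses reaching the ground and an
    assumed extinction coefficient k, return the canopy volume estimate.
    p_ground = 1 (no canopy) yields zero volume; denser canopies
    (smaller gap fraction) yield larger volumes."""
    if not 0.0 < p_ground <= 1.0:
        raise ValueError("gap fraction must be in (0, 1]")
    return -math.log(p_ground) / k
```

In practice k would be calibrated against field-measured canopy volumes, which is where the reported RMSE figures come from.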

    VOLUME ESTIMATION OF FUEL LOAD FOR HAZARD REDUCTION BURNING: A VOXEL APPROACH

    The year 2020 started with more than 100 fires burning across Australia. Bushfire is a phenomenon that cannot be mitigated completely by human intervention; however, better management practices can help counter the increasing severity of fires. Hazard Reduction (HR) burning has become one of the principal tools in the management of fire-prone ecosystems worldwide, whereby certain vegetation is deliberately burned under controlled circumstances to thin the fuel and reduce the severity of bushfires. As the climate changes drastically, the severity of fires is predicted to increase in the coming years. Therefore, it becomes increasingly important to investigate automatic approaches to prevent, reduce and monitor the cause and movement of bushfires. Methods of assessing fuel load (FL) levels in Australia are commonly based on visual assessment guidelines, such as those described in the Overall Fuel Hazard Assessment Guide (OFHAG). The overall aim of this research is to investigate the use of LiDAR to estimate the volume of fuel load to assist in the planning of HR burning, an approach that could quantify the accumulation of elevated and near-surface FL with less time and cost. This research focuses on an innovative approach based on a voxel representation. A voxel is a volumetric pixel: a quantum unit of volume addressed by x, y and z indices on a regular grid in three-dimensional space. Voxels are beneficial for processing large point cloud data and, specifically, for computing volumes. Point cloud data provide valuable three-dimensional information by capturing forest structural characteristics. The output of this research is a digitised map of the accumulation of fuel (vegetation) points in the elevated-fuel and near-surface-fuel strata, based on the point density of the point cloud dataset for Vermont Place Park, Newcastle, Australia.
This information is relayed through a digital map of fuel accumulation in the elevated and near-surface fuel strata. The result of this research provides an indication of where the largest amounts of fuel have accumulated, to assist in planning an HR burn. This will help fire practitioners and land managers determine which locations in the forest profile should be prioritised for HR burning. There is only a short window in which HR burning can be conducted, which is why a tool that can provide information on fuel quickly would help fire practitioners and land managers.
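The voxel-based volume estimate described above reduces, in its simplest form, to counting occupied voxels within a height stratum and multiplying by the voxel volume. A sketch with assumed voxel size and stratum bounds (the study's actual parameters are not given here):

```python
def voxel_fuel_volume(points, voxel_size=0.5, z_min=0.0, z_max=2.0):
    """Estimate fuel volume (m^3) in one height stratum, e.g. near-surface
    or elevated fuel, by voxelising the points that fall within the
    stratum and counting occupied voxels:
        volume = n_occupied_voxels * voxel_size**3
    Duplicate points in the same voxel are counted once."""
    occupied = {
        (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        for x, y, z in points
        if z_min <= z < z_max
    }
    return len(occupied) * voxel_size ** 3
```

Running this per grid cell over the whole plot, stratum by stratum, yields the digitised fuel-accumulation map the abstract describes.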

    Reconstruction of tubular shapes from point clouds: application to the estimation of forest geometry

    The potential of remote sensing technologies has recently increased exponentially: new sensors now provide a geometric representation of their environment in the form of point clouds with unrivalled accuracy. Point cloud processing has hence become a discipline in its own right, with its own problems and many challenges to face. The core of this thesis concerns geometric modelling and introduces a fast and robust method for the extraction of tubular shapes from point clouds. We chose to test our methods in the difficult applicative context of forestry in order to highlight the robustness of our algorithms and their applicability to large data sets. Our methods integrate normal vectors as supplementary geometric information in order to achieve the performance necessary for processing large point clouds. However, remote sensing techniques do not commonly provide normal vectors, so they have to be pre-computed. To preserve speed of execution, our first development therefore consisted of a fast normal estimation method: we locally approximate the point cloud geometry using smooth "patches" whose size adapts to the local complexity of the point cloud. We then focused our work on the robust extraction of tubular shapes from dense, occluded, noisy point clouds with non-homogeneous sampling density. For this purpose, we developed a variant of the Hough transform whose complexity is reduced thanks to the computed normal vectors. We then combined this work with a new definition of parametrisation-invariant active contours. This combination ensures the internal coherence of the reconstructed shapes and alleviates issues related to occlusion, noise and variations of sampling density. We validated our method in complex forest environments by reconstructing tree stems, to emphasise its advantages and compare it to existing methods. Tree stem reconstruction also opens new questions halfway between forestry and geometry. One of them is the segmentation of the trees of a forest plot, which is why we also propose a segmentation approach designed to overcome the defects of forest point clouds and capable of isolating the different objects of a data set. Throughout this work we used modelling approaches to answer geometric questions and applied our methods to forestry problems. The result is a coherent processing pipeline which, although illustrated on forest data, is applicable in a variety of contexts.
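The normal-driven variant of the Hough transform mentioned above can be illustrated for stem cross-sections: each point votes for circle centres only along its normal direction, one vote per candidate radius, instead of voting over the whole parameter space. The following is a simplified 2D sketch under assumed discretisation, not the thesis's implementation:

```python
from collections import Counter

def hough_circle_centers(points, normals, radii, cell=0.05):
    """Normal-driven Hough voting for a circular stem cross-section.
    Each 2D point p with inward unit normal n casts one vote per candidate
    radius r at the position p + r * n, so the vote space is O(n_points *
    n_radii) instead of O(n_points * n_radii * n_angles).
    Returns the winning (center, radius, vote count)."""
    acc = Counter()
    for (px, py), (nx, ny) in zip(points, normals):
        for r in radii:
            cx, cy = px + r * nx, py + r * ny
            acc[(round(cx / cell), round(cy / cell), r)] += 1
    (ci, cj, r), votes = acc.most_common(1)[0]
    return (ci * cell, cj * cell), r, votes
```

Points sampled on a true circle all vote into the same accumulator cell at the correct radius, so the maximum is sharp even when part of the stem is occluded.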

    Proceedings of the 7th International Conference on Functional-Structural Plant Models, Saariselkä, Finland, 9 - 14 June 2013


    Reconstructing plant architecture from 3D laser scanner data

    In computer graphics, virtual plant models are becoming increasingly realistic visually. However, in the context of biology and agronomy, acquiring accurate models of real plants remains a major problem for building quantitative models of plant development. Recently, 3D laser scanners have made it possible to acquire 3D images in which each pixel carries a depth corresponding to the distance between the scanner and the surface of the scanned object. However, a plant is generally a large collection of small surfaces on which classical reconstruction methods fail. In this thesis, we present a method for reconstructing virtual plant models from laser scans. Measuring plants with a laser scanner produces data with varying levels of precision. Scans are usually dense on the surface of the main branches but cover the fine branches with only few points. The core of our method is to iteratively create a skeleton of the plant structure according to the local point density. For this, a locally adaptive method was developed that combines a contraction phase with a point-tracking algorithm. We also present a quantitative evaluation procedure to compare our reconstructions against structures of real plants reconstructed by experts. To this end, we first explore the use of an edit distance between tree structures. Finally, we formalise the comparison as an assignment problem, finding the best matching between two structures and quantifying their differences. (Author's abstract)
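The assignment-problem formulation mentioned at the end can be illustrated with a brute-force matcher over a pairwise cost matrix. The cost values and exhaustive search are illustrative only; for real plant structures a polynomial algorithm such as the Hungarian method would be used instead:

```python
from itertools import permutations

def best_matching(cost):
    """Exhaustively solve a small assignment problem: cost[i][j] is the
    dissimilarity between element i of one reconstructed structure and
    element j of the expert reference. Returns the minimal total cost and
    the permutation mapping i -> perm[i] that achieves it. O(n!) time,
    so suitable only for illustrating the formulation."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_cost, best_perm
```

The minimal total cost then serves as the quantitative difference between the automatic reconstruction and the expert-built structure.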