3 research outputs found

    lidR: an R package for analysis of Airborne Laser Scanning (ALS) data

    Airborne laser scanning (ALS) is a remote sensing technology known for its applicability in natural resources management. By quantifying the three-dimensional structure of vegetation and underlying terrain using laser technology, ALS has been used extensively for enhancing geospatial knowledge in the fields of forestry and ecology. Structural descriptions of vegetation provide a means of estimating a range of ecologically pertinent attributes, such as height, volume, and above-ground biomass. The efficient processing of large, often technically complex datasets requires dedicated algorithms and software. The continued promise of ALS as a tool for improving ecological understanding often depends on user-created tools, methods, and approaches. Due to the proliferation of ALS among academic, governmental, and private-sector communities, paired with a growing demand for open and accessible data, the ALS community is recognising the importance of free and open-source software (FOSS) and of user-defined workflows. Herein, we describe the philosophy behind the development of the lidR package. Implemented in the R environment with a C/C++ backend, lidR is free, open-source and cross-platform software created to enable simple and creative processing workflows for forestry and ecology communities using ALS data. We review current algorithms used by the research community, and in doing so raise awareness of current successes and challenges associated with parameterisation and common implementation approaches. Through a detailed description of the package, we address the key considerations and the design philosophy that enable users to implement user-defined tools. We also discuss algorithm choices that make the package representative of the 'state-of-the-art', and we highlight some internal limitations through examples of processing time discrepancies. We conclude that the development of applications like lidR is of fundamental importance for developing transparent, flexible and open ALS tools, ensuring not only reproducible workflows but also the creative space researchers require for the progress and development of the discipline.
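The area-based metrics this abstract alludes to (height, percentiles, and other predictors of volume and biomass) can be sketched in a few lines. The function below is a minimal illustration on a synthetic height-normalised point cloud; it does not reproduce lidR's actual API (lidR is an R package), and the point data, function name, and 2 m vegetation cutoff are all assumptions chosen for the example.

```python
# Minimal sketch of area-based ALS metrics on a synthetic point cloud.
# Points, names, and the 2 m cutoff are illustrative, not lidR's API.
import random
import statistics

random.seed(42)

# Synthetic height-normalised cloud: (x, y, z) with z = height above ground (m)
points = [(random.uniform(0, 30), random.uniform(0, 30), random.uniform(0, 25))
          for _ in range(1000)]

def cloud_metrics(pts, height_cutoff=2.0):
    """Standard area-based metrics over points above a vegetation cutoff."""
    z = sorted(p[2] for p in pts if p[2] >= height_cutoff)
    k = max(0, min(len(z) - 1, round(0.95 * (len(z) - 1))))
    return {
        "n": len(z),                     # number of vegetation returns
        "zmax": z[-1],                   # canopy top height
        "zmean": statistics.fmean(z),    # mean vegetation height
        "zq95": z[k],                    # 95th percentile, a common biomass predictor
    }

m = cloud_metrics(points)
```

In a real workflow these per-plot metrics become predictors in a regression model calibrated against field-measured volume or biomass.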

    Probabilistic and Deep Learning Algorithms for the Analysis of Imagery Data

    Accurate object classification is a challenging problem for various low- to high-resolution imagery data. This applies to natural as well as synthetic image datasets. However, each object recognition dataset poses its own distinct set of domain-specific problems. In order to address these issues, we need to devise intelligent learning algorithms which require a deep understanding and careful analysis of the feature space. In this thesis, we introduce three new learning frameworks for the analysis of both airborne images (the NAIP dataset) and handwritten digit datasets without and with noise (MNIST and n-MNIST, respectively). First, we propose a probabilistic framework for the analysis of the NAIP dataset which includes (1) an unsupervised segmentation module based on the Statistical Region Merging algorithm, (2) a feature extraction module that extracts a set of standard hand-crafted texture features from the images, (3) a supervised classification algorithm based on Feedforward Backpropagation Neural Networks, and (4) a structured prediction framework using Conditional Random Fields that integrates the results of the segmentation and classification modules into a single composite model to generate the final class labels. Next, we introduce two new datasets, SAT-4 and SAT-6, sampled from the NAIP imagery and use them to evaluate a multitude of deep learning algorithms, including Deep Belief Networks (DBN), Convolutional Neural Networks (CNN) and Stacked Autoencoders (SAE), for generating class labels. Finally, we propose a learning framework that integrates hand-crafted texture features with a DBN. A DBN uses an unsupervised pre-training phase to initialise the parameters of a Feedforward Backpropagation Neural Network near a good error basin, which can then be improved with a round of supervised fine-tuning; the resulting networks can subsequently be used for classification. We show that integrating hand-crafted features with a DBN yields a significant improvement in performance over traditional DBN models that take raw image pixels as input. We also investigate why this integration proves particularly useful for aerial datasets, using a statistical analysis based on the Distribution Separability Criterion. We then introduce a new dataset called noisy-MNIST (n-MNIST) by adding (1) additive white Gaussian noise (AWGN), (2) motion blur, and (3) reduced contrast plus AWGN to the MNIST dataset, and present a learning algorithm that combines probabilistic quadtrees and Deep Belief Networks. This dynamic integration of the Deep Belief Network with probabilistic quadtrees provides a significant improvement over traditional DBN models on both the MNIST and n-MNIST datasets. Finally, we extend our experiments on aerial imagery to the class of general texture images and present a theoretical analysis of Deep Neural Networks applied to texture classification. We derive the size of the feature space of textural features and the Vapnik-Chervonenkis dimension of certain classes of Neural Networks. We also derive some useful results on the intrinsic dimension and relative contrast of texture datasets and use these to highlight the differences between texture datasets and general object recognition datasets.
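The quadtree front-end mentioned above can be illustrated with a plain (non-probabilistic) variant: recursively split an image until each region is homogeneous, and keep only per-region means. This is a hedged sketch — the variance-threshold splitting rule below is an illustrative stand-in for the thesis's probabilistic criterion, and the toy image and names are invented for the example.

```python
# Hedged sketch of a quadtree decomposition used as a denoising/compression
# front-end for digit images; the variance-threshold splitting rule is an
# illustrative stand-in for the thesis's probabilistic criterion.
def quadtree_leaves(img, x=0, y=0, size=None, var_threshold=0.05):
    """Recursively split a square grayscale image (list of lists, values in
    [0, 1]) until each leaf region's variance falls below var_threshold.
    Returns (x, y, size, mean) tuples: a compact, noise-smoothed encoding."""
    if size is None:
        size = len(img)
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if var <= var_threshold or size == 1:
        return [(x, y, size, mean)]          # homogeneous region: one leaf
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += quadtree_leaves(img, x + dx, y + dy, h, var_threshold)
    return leaves

# A 4x4 "image": uniform dark left half, bright right half
img = [[0.0, 0.0, 1.0, 1.0]] * 4
leaves = quadtree_leaves(img)
```

The leaf means form a reduced, noise-robust representation that could then be fed to a classifier such as a DBN, which is the spirit of the integration the abstract describes.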

    Theoretical quantification of the effects of acquisition system settings on the descriptive variables of the LiDAR point cloud

    Forest resource mapping is currently achieved through inventories conducted across large territories using automatic or semi-automatic measurement methods at broad scales. Notably, the development of airborne LiDAR (light detection and ranging) has opened up new perspectives in this context. Despite its proven suitability as a tool for inventories and mapping, the scientific literature on airborne LiDAR shows that methods for processing the acquired information remain limited, and are usually valid only for a given region of interest and a given acquisition device. Indeed, modifying the acquisition device generates variation in the structure of the point cloud that often restricts the range of application of resource evaluation models. To move towards resource-mapping models that are less dependent on the characteristics of both the study area and the acquisition device, it is important to understand the source of such variation and how to correct it. We investigated how variations in the settings of data acquisition systems may generate variation in the structure of the resulting point clouds. These questions were treated using simple theoretical and mathematical models, and we showed, to a certain extent, that it is possible to correct airborne LiDAR data and thus normalise measurements to simulate homogeneous acquisitions made with a single "standard" acquisition device. The challenge pursued in this thesis is to propose and initiate, for the future, data-processing methods relying on better-established standards, in order to build more accurate and more versatile tools for the large-scale mapping of forest resources.
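One simple, widely used way to homogenise acquisitions of different pulse densities — not necessarily the thesis's actual method, which is theoretical — is to randomly thin the denser cloud to a common target density so that density-sensitive metrics become comparable. The function name, plot size, and target density below are assumptions made for the illustration.

```python
# Illustrative sketch: homogenising two acquisitions by randomly thinning
# the denser point cloud to a common target density (points per m2), so
# that density-sensitive descriptive variables become comparable.
# This stands in for, and does not reproduce, the thesis's own correction.
import random

random.seed(0)

def thin_to_density(points, area_m2, target_density):
    """Randomly subsample points so the cloud approaches target_density
    points per square metre over the given area."""
    target_n = int(target_density * area_m2)
    if len(points) <= target_n:
        return list(points)              # already at or below target density
    return random.sample(points, target_n)

area = 100.0  # a 10 m x 10 m plot
dense = [(random.random() * 10, random.random() * 10) for _ in range(1200)]  # 12 pts/m2
sparse = [(random.random() * 10, random.random() * 10) for _ in range(400)]  # 4 pts/m2

dense_std = thin_to_density(dense, area, 4.0)   # thinned to 4 pts/m2
sparse_std = thin_to_density(sparse, area, 4.0) # unchanged, already 4 pts/m2
```

Random thinning only addresses density, not the other device settings the thesis analyses (scan angle, footprint, etc.), which is precisely why a theoretical understanding of each setting's effect is needed.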