
    Tree species classification from airborne hyperspectral and LiDAR data using 3D convolutional neural networks

    During the last two decades, forest monitoring and inventory systems have moved from field surveys to remote sensing-based methods. These methods tend to focus on economically significant components of forests, thus leaving out many factors vital for forest biodiversity, such as the occurrence of species with low economic but high ecological value. Airborne hyperspectral imagery has shown significant potential for tree species classification, but the most common analysis methods, such as random forest and support vector machines, require manual feature engineering to utilize both spatial and spectral features, whereas deep learning methods are able to extract these features from the raw data. Our research focused on the classification of the major tree species Scots pine, Norway spruce and birch, together with an ecologically valuable keystone species, European aspen, which has a sparse and scattered occurrence in boreal forests. We compared the performance of three-dimensional convolutional neural networks (3D-CNNs) with the support vector machine, random forest, gradient boosting machine and artificial neural network in individual tree species classification from hyperspectral data with high spatial and spectral resolution. We collected hyperspectral and LiDAR data, along with extensive ground reference measurements of tree species, from an 83 km² study area located in the southern boreal zone in Finland. A LiDAR-derived canopy height model was used to match the ground reference data to the aerial imagery. The best-performing 3D-CNN, utilizing 4 m image patches, achieved an F1-score of 0.91 for aspen, an overall F1-score of 0.86 and an overall accuracy of 87%, while the lowest-performing 3D-CNN, utilizing 10 m image patches, achieved an F1-score of 0.83 and an accuracy of 85%. In comparison, the support vector machine achieved an F1-score of 0.82 and an accuracy of 82.4%, and the artificial neural network achieved an F1-score of 0.82 and an accuracy of 81.7%. Compared to the reference models, the 3D-CNNs were better at distinguishing the coniferous species from each other while maintaining high accuracy for aspen classification. Deep neural networks are black-box models and do not directly reveal how they reach their decisions, so we used both occlusion and saliency maps to interpret our models. Finally, we used the best-performing 3D-CNN to produce a wall-to-wall tree species map for the full study area, which can later be used as a reference prediction in, for instance, tree species mapping from multispectral satellite images. The improved tree species classification demonstrated by our study can benefit both sustainable forestry and biodiversity conservation.
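The abstract does not specify the network architecture, so the following is only a minimal sketch of the general technique it names: a 3D-CNN (here in PyTorch) that convolves jointly over the spectral and spatial axes of a hyperspectral patch. The band count, patch size, layer widths, and four-class output are illustrative assumptions, not the configuration used in the study.

```python
# Minimal sketch of a 3D-CNN for per-tree hyperspectral patch classification.
# All sizes and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class Patch3DCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        # Input: (batch, 1, bands, height, width); the spectral axis acts as
        # the "depth" dimension, so the convolutions learn joint
        # spectral-spatial features directly from the raw data cube.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),
            nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over bands and space
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Example: a batch of 8 patches with 100 spectral bands on a 9x9 pixel window
# (both numbers are assumptions, not the study's actual data dimensions).
if __name__ == "__main__":
    model = Patch3DCNN(n_classes=4)  # e.g. pine, spruce, birch, aspen
    dummy = torch.randn(8, 1, 100, 9, 9)
    logits = model(dummy)
    print(logits.shape)  # torch.Size([8, 4])
```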

    CHARACTERIZING FOREST STANDS USING UNMANNED AERIAL SYSTEMS (UAS) DIGITAL PHOTOGRAMMETRY: ADVANCEMENTS AND CHALLENGES IN MONITORING LOCAL SCALE FOREST COMPOSITION, STRUCTURE, AND HEALTH

    Present-day forests provide a wide variety of ecosystem services to the communities that rely on them. At the same time, these environments face routine and substantial disturbances that drive the need for site-specific, timely, and accurate monitoring and management (i.e., precision forestry). Unmanned Aerial Systems (UAS or UAV) and their associated technologies offer a promising tool for conducting such precision forestry. Even with only natural-color, uncalibrated UAS imagery, software workflows involving Structure from Motion (SfM, i.e., digital photogrammetry) modelling and segmentation can be used to characterize the features of individual trees or forest communities. In this research, we tested the effectiveness of UAS-SfM for mapping local-scale forest composition, structure, and health. Our first study showed that digital (automated) methods for classifying forest composition that utilized UAS imagery produced a higher overall accuracy (by 7.44%–16.04%) than those involving other high-spatial-resolution imagery. The second study demonstrated that natural-color sensors could provide a highly efficient estimate of individual tree diameter at breast height (dbh; ±13.15 cm) as well as forest stand basal area, tree density, and stand density. In the final study, we join a growing number of researchers examining precision applications in forest health monitoring. Here, we demonstrate that UAS, equipped with both natural-color and multispectral sensors, are more capable of distinguishing forest health classes than freely available high-resolution airborne imagery: for five health classes, the UAS data produced a 14.93% higher overall accuracy than the airborne imagery. Together, these three chapters present a holistic approach to enhancing and enriching precision forest management, which remains a critical requirement for effectively managing diverse forested landscapes.
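The abstract does not describe how individual trees are delineated from the SfM products. The sketch below shows one common way such a step can be done, assuming a canopy height model (CHM) raster has already been derived from the photogrammetric point cloud; the local-maximum approach, window size, and height threshold are illustrative assumptions, not the study's exact workflow.

```python
# Minimal sketch of individual-tree detection on an SfM-derived canopy height
# model (CHM). The method and thresholds are illustrative assumptions.
import numpy as np
from scipy import ndimage

def detect_treetops(chm: np.ndarray, window: int = 5, min_height: float = 2.0):
    """Return (row, col) indices of candidate treetops in a CHM raster.

    chm        : 2D array of canopy heights in metres
    window     : side length (pixels) of the moving window used to find
                 local maxima; larger windows merge nearby crowns
    min_height : ignore maxima below this height to exclude ground and shrubs
    """
    local_max = ndimage.maximum_filter(chm, size=window)
    treetops = (chm == local_max) & (chm >= min_height)
    return np.argwhere(treetops)

# Example with a synthetic CHM: two Gaussian "crowns" on flat ground.
if __name__ == "__main__":
    yy, xx = np.mgrid[0:50, 0:50]
    chm = (12 * np.exp(-((yy - 15) ** 2 + (xx - 15) ** 2) / 40.0)
           + 9 * np.exp(-((yy - 35) ** 2 + (xx - 35) ** 2) / 30.0))
    print(detect_treetops(chm))  # approximately [[15, 15], [35, 35]]
```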