
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, such as computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. (Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.)

    Tree species classification from airborne hyperspectral and LiDAR data using 3D convolutional neural networks

    During the last two decades, forest monitoring and inventory systems have moved from field surveys to remote sensing-based methods. These methods tend to focus on economically significant components of forests, thus leaving out many factors vital for forest biodiversity, such as the occurrence of species with low economic but high ecological value. Airborne hyperspectral imagery has shown significant potential for tree species classification, but the most common analysis methods, such as random forest and support vector machines, require manual feature engineering in order to utilize both spatial and spectral features, whereas deep learning methods are able to extract these features from the raw data. Our research focused on the classification of the major tree species Scots pine, Norway spruce and birch, together with an ecologically valuable keystone species, European aspen, which has a sparse and scattered occurrence in boreal forests. We compared the performance of three-dimensional convolutional neural networks (3D-CNNs) with the support vector machine, random forest, gradient boosting machine and artificial neural network in individual tree species classification from hyperspectral data with high spatial and spectral resolution. We collected hyperspectral and LiDAR data along with extensive ground reference measurements of tree species from the 83 km² study area located in the southern boreal zone in Finland. A LiDAR-derived canopy height model was used to match the ground reference data to the aerial imagery. The best performing 3D-CNN, utilizing 4 m image patches, achieved an F1-score of 0.91 for aspen, an overall F1-score of 0.86 and an overall accuracy of 87%, while the lowest performing 3D-CNN, utilizing 10 m image patches, achieved an F1-score of 0.83 and an accuracy of 85%. In comparison, the support vector machine achieved an F1-score of 0.82 and an accuracy of 82.4%, and the artificial neural network achieved an F1-score of 0.82 and an accuracy of 81.7%. Compared to the reference models, 3D-CNNs were more efficient in distinguishing coniferous species from each other, with a concurrently high accuracy for aspen classification. Deep neural networks are black-box models that do not directly reveal how they reach their decisions, so we used both occlusion and saliency maps to interpret our models. Finally, we used the best performing 3D-CNN to produce a wall-to-wall tree species map for the full study area that can later be used as a reference prediction in, for instance, tree species mapping from multispectral satellite images. The improved tree species classification demonstrated by our study can benefit both sustainable forestry and biodiversity conservation.
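    The abstract above describes classifying individual tree crowns directly from hyperspectral patches with a 3D-CNN. As a rough illustration of that idea, here is a minimal PyTorch sketch of a 3D-CNN that consumes a per-crown hyperspectral patch and outputs logits for the four species (pine, spruce, birch, aspen); the band count, patch size and layer widths are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch (assumed architecture, not the paper's): a 3D-CNN that
# classifies a single tree crown from a hyperspectral image patch.
import torch
import torch.nn as nn

class TreeSpecies3DCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        # 3D convolutions slide jointly over (spectral, y, x), so spectral and
        # spatial features are learned together from the raw data cube.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),
            nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over bands and space
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, bands, H, W)
        return self.classifier(self.features(x).flatten(1))

model = TreeSpecies3DCNN()
dummy = torch.randn(8, 1, 100, 9, 9)           # 8 crowns, 100 bands (assumed), 9x9 px
print(model(dummy).shape)                      # torch.Size([8, 4]): pine/spruce/birch/aspen
```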

    Tree Species Classification of Drone Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks

    Interest in drone solutions for forestry applications is growing. Using drones, datasets can be captured flexibly and at high spatial and temporal resolution when needed. In forestry applications, fundamental tasks include the detection of individual trees, tree species classification, biomass estimation, etc. Deep neural networks (DNNs) have shown superior results compared with conventional machine learning methods such as the multi-layer perceptron (MLP) when large amounts of input data are available. The objective of this research is to investigate 3D convolutional neural networks (3D-CNNs) for classifying three major tree species in a boreal forest: pine, spruce, and birch. The proposed 3D-CNN models were employed to classify tree species at a test site in Finland. The classifiers were trained with a dataset of 3039 manually labelled trees, and accuracies were then assessed using an independent dataset of 803 records. To find the most efficient feature combination, we compared the performance of 3D-CNN models trained with hyperspectral (HS) channels, red-green-blue (RGB) channels, and a canopy height model (CHM), separately and combined; a hedged sketch of such layer stacking follows this paragraph. The proposed 3D-CNN model with RGB and HS layers produced the highest classification accuracy. The producer accuracies of the best 3D-CNN classifier on the test dataset were 99.6%, 94.8%, and 97.4% for pines, spruces, and birches, respectively. The best 3D-CNN classifier produced ~5% higher classification accuracy than the MLP with all layers. Our results suggest that the proposed method provides excellent classification results with acceptable performance metrics for HS datasets. The pine class was detectable in most layers; spruce was most detectable in the RGB data, while birch was most detectable in the HS layers. Furthermore, the RGB datasets provide acceptable results for many low-accuracy applications.
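    Since the abstract compares classifiers trained on HS, RGB and CHM layers separately and combined, the sketch below illustrates how such layers might be stacked into a single per-crown input cube; the array shapes, band counts and normalisation are illustrative assumptions, not the authors' preprocessing.

```python
# Hedged sketch: stack RGB, hyperspectral (HS) and canopy height model (CHM)
# rasters for one crown into a single band-first cube for a 3D-CNN input.
# Shapes and normalisation are assumptions for illustration.
import numpy as np

def stack_crown_cube(rgb: np.ndarray, hs: np.ndarray, chm: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3), hs: (H, W, B), chm: (H, W) -> cube: (3 + B + 1, H, W)."""
    chm = chm[..., np.newaxis]                       # give CHM a band axis
    cube = np.concatenate([rgb, hs, chm], axis=-1)   # stack along the band axis
    # per-band standardisation so reflectance and height share one scale
    cube = (cube - cube.mean(axis=(0, 1))) / (cube.std(axis=(0, 1)) + 1e-8)
    return np.transpose(cube, (2, 0, 1)).astype(np.float32)

crown = stack_crown_cube(np.random.rand(16, 16, 3),    # RGB
                         np.random.rand(16, 16, 36),   # 36 HS bands (assumed)
                         np.random.rand(16, 16))       # CHM
print(crown.shape)   # (40, 16, 16), ready for a Conv3d after adding a channel dim
```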

    Deep Learning for Detecting Trees in the Urban Environment from LIDAR

    Cataloguing and classifying trees in the urban environment is a crucial step in urban and environmental planning. However, manual collection and maintenance of this data is expensive and time-consuming. Algorithmic approaches that rely on remote sensing data have been developed for tree detection in forests, though they generally struggle in the more varied urban environment. This work proposes a novel method for the detection of trees in the urban environment that applies deep learning to remote sensing data. Specifically, we train a PointNet-based neural network to predict tree locations directly from LIDAR data augmented with multi-spectral imaging. We compare this model to numerous high-performing baselines on a large and varied dataset in the Southern California region. We find that our best model outperforms all baselines with a 75.5% F-score and 2.28 m RMSE, while being highly efficient. We then analyze and compare the sources of error and how they reveal the strengths and weaknesses of each approach.
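    The core ingredient named above is a PointNet-style network applied to LIDAR points augmented with multi-spectral values. The sketch below is a deliberately simplified stand-in, not the paper's detection model: a shared per-point MLP with symmetric max-pooling that scores a local point-cloud patch for tree presence; the per-point feature count and patch size are assumptions.

```python
# Simplified PointNet-style sketch (illustrative, not the paper's model):
# per-point shared MLP + order-invariant max-pool -> tree-presence score.
import torch
import torch.nn as nn

class PointNetTreeScore(nn.Module):
    def __init__(self, in_features: int = 7):   # x, y, z + 4 spectral bands (assumed)
        super().__init__()
        self.point_mlp = nn.Sequential(          # shared MLP via 1x1 convolutions
            nn.Conv1d(in_features, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, pts):                         # pts: (batch, in_features, n_points)
        per_point = self.point_mlp(pts)
        global_feat = per_point.max(dim=2).values   # symmetric pooling over points
        return self.head(global_feat)               # one tree-presence logit per patch

scores = PointNetTreeScore()(torch.randn(4, 7, 2048))
print(scores.shape)   # torch.Size([4, 1])
```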

    Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery

    Unmanned Aerial Vehicles (UAVs) have greatly extended our possibilities to acquire high-resolution remote sensing data for assessing the spatial distribution of species composition and vegetation characteristics. Yet, current pixel- or texture-based mapping approaches do not fully exploit the information content provided by the high spatial resolution. Here, to fully harness this spatial detail, we apply deep learning techniques, that is, Convolutional Neural Networks (CNNs), on regular tiles of UAV orthoimagery (here 2–5 m) to identify the cover of target plant species and plant communities. The approach was tested with UAV-based orthomosaics and photogrammetric 3D information in three case studies: (1) mapping tree species cover in primary forests, (2) mapping plant invasions by woody species into forests and open land, and (3) mapping vegetation succession in a glacier foreland. All three case studies resulted in high predictive accuracies. The accuracy increased with increasing tile size (2–5 m), reflecting the increased spatial context captured by a tile. The inclusion of 3D information derived from the photogrammetric workflow did not significantly improve the models. We conclude that CNNs are powerful in harnessing high-resolution data acquired from UAVs to map vegetation patterns. The study was based on low-cost red-green-blue (RGB) sensors, making the method accessible to a wide range of users. Combining UAVs and CNNs will provide tremendous opportunities for ecological applications.
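    To make the tile-based idea concrete, here is a minimal sketch of a CNN that regresses per-class cover fractions from a fixed-size RGB tile; the backbone depth, tile size and number of classes are assumptions, and a softmax head simply enforces that predicted fractions are non-negative and sum to one. Training such a model against reference cover fractions could use, for example, a mean-squared-error or cross-entropy loss on the fraction vectors.

```python
# Minimal sketch (assumed architecture): a small CNN that maps an RGB
# orthoimage tile to per-class cover fractions summing to one.
import torch
import torch.nn as nn

class CoverFractionCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, tile):                        # tile: (batch, 3, H, W)
        z = self.backbone(tile).flatten(1)
        return torch.softmax(self.head(z), dim=1)   # fractions sum to 1 per tile

fractions = CoverFractionCNN()(torch.randn(2, 3, 128, 128))
print(fractions.sum(dim=1))                         # ~1.0 for each tile
```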

    Applicability of UAV-based optical imagery and classification algorithms for detecting pine wilt disease at different infection stages

    Pine wilt disease (PWD) is a quarantine disease with a rapid spread tendency in the context of climate change, so accurate detection and location of PWD at different infection stages is critical for maintaining forest health and productivity. In recent years, unmanned aerial vehicle (UAV)-based optical remote-sensing images have provided new instruments for timely and accurate PWD monitoring. Numerous corresponding analysis algorithms have been proposed for UAV-based image classification, but their applicability for detecting different PWD infection stages has not yet been evaluated under uniform conditions and criteria. This research aims to systematically assess the performance of multi-source images for detecting different PWD infection stages, analyze effective classification algorithms, and further analyze the validity of thermal images for early detection of PWD. In this study, PWD infection was divided into four stages: healthy, chlorosis, red and gray, and UAV-based hyperspectral (HSI), multispectral (MSI), and MSI with a thermal band (MSI&TIR) datasets were used as the data sources. Spectral analysis, support vector machine (SVM), random forest (RF), and two- and three-dimensional convolutional neural network (2D- and 3D-CNN) algorithms were applied to these datasets to compare their classification abilities. The results were as follows: (I) The classification accuracy of the healthy, red, and gray stages using the MSI dataset was close to that obtained using the MSI&TIR dataset with the same algorithms, whereas the HSI dataset displayed no obvious advantages. (II) The RF and 3D-CNN algorithms were the most accurate for all datasets (RF: overall accuracy = 94.26%, 3D-CNN: overall accuracy = 93.31%), while the spectral analysis method was also valid for the MSI&TIR dataset. (III) The thermal band displayed significant potential for detection of the chlorosis stage, and the MSI&TIR dataset displayed the best performance for detection of all infection stages. Considering this, we suggest that the MSI&TIR dataset can essentially satisfy PWD identification requirements at various stages, and that the RF algorithm is the best choice, especially in actual forest investigations. In addition, the performance of thermal imaging in the early monitoring of PWD is worthy of further investigation. These findings are expected to provide insight for future research and actual surveys regarding the selection of both remote sensing datasets and data analysis algorithms needed to detect different PWD infection stages, so that the disease can be detected earlier and losses prevented.
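    Since the study singles out the random forest on MSI plus a thermal band as the most practical choice, the scikit-learn sketch below shows what such a baseline might look like on per-crown band values for the four infection stages; the band count and the synthetic data are placeholders for illustration only, not the study's data.

```python
# Hedged sketch of a random-forest baseline for the four PWD infection stages
# (healthy, chlorosis, red, gray) from multispectral + thermal band values.
# Feature count and data are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

stages = ["healthy", "chlorosis", "red", "gray"]
rng = np.random.default_rng(0)
X = rng.random((800, 6))                  # e.g. 5 MSI bands + 1 thermal band (assumed)
y = rng.integers(0, len(stages), 800)     # placeholder stage labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("band importances:", rf.feature_importances_)   # e.g. weight of the thermal band
```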

    HyperSeed: An End-to-End Method to Process Hyperspectral Images of Seeds

    High-throughput, nondestructive, and precise measurement of seeds is critical for the evaluation of seed quality and the improvement of agricultural production. To this end, we have developed a novel end-to-end platform named HyperSeed that provides hyperspectral information for seeds. As a test case, hyperspectral images of rice seeds are obtained from a high-performance line-scan image spectrograph covering the spectral range from 600 to 1700 nm. The acquired images are processed via graphical user interface (GUI)-based open-source software for background removal and seed segmentation. The output is generated in the form of a hyperspectral cube and spectral curve for each seed. In our experiments, we present visual results of seed segmentation on different seed species. Moreover, we conducted a classification of seeds grown in heat-stress and control environments using both traditional machine learning models and neural network models. The results show that the proposed 3D convolutional neural network (3D-CNN) model has the highest accuracy, at 97.5% in seed-based classification and 94.21% in pixel-based classification, compared to 80.0% in seed-based classification and 85.67% in pixel-based classification for the support vector machine (SVM) model. Moreover, our pipeline enables systematic analysis of spectral curves and identification of wavelengths of biological interest.
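    As a rough illustration of the kind of processing described (background removal, per-seed segmentation, and extraction of a spectral curve per seed), here is a hedged NumPy/SciPy sketch; the thresholding rule, band index and cube dimensions are assumptions, not HyperSeed's actual algorithm.

```python
# Hedged sketch: mask the background of a hyperspectral cube, label individual
# seeds as connected components, and extract one mean spectrum per seed.
# Threshold, band index and cube size are illustrative assumptions.
import numpy as np
from scipy import ndimage

def segment_seeds(cube: np.ndarray, band_idx: int = 50, thresh: float = 0.5):
    """cube: (H, W, bands) reflectance -> (label image, per-seed mean spectra)."""
    mask = cube[:, :, band_idx] > thresh           # crude background removal
    labels, n_seeds = ndimage.label(mask)          # connected components = seeds
    spectra = [cube[labels == i].mean(axis=0) for i in range(1, n_seeds + 1)]
    return labels, np.array(spectra)               # spectra: (n_seeds, bands)

cube = np.random.rand(64, 64, 200)                 # synthetic stand-in for a scan
labels, spectra = segment_seeds(cube)
print(labels.max(), "seed regions;", spectra.shape)
```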