
    Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery

    Unmanned Aerial Vehicles (UAVs) have greatly extended our possibilities to acquire high-resolution remote sensing data for assessing the spatial distribution of species composition and vegetation characteristics. Yet, current pixel- or texture-based mapping approaches do not fully exploit the information content provided by the high spatial resolution. Here, to fully harness this spatial detail, we apply deep learning techniques, that is, Convolutional Neural Networks (CNNs), on regular tiles of UAV orthoimagery (here 2–5 m) to identify the cover of target plant species and plant communities. The approach was tested with UAV-based orthomosaics and photogrammetric 3D information in three case studies: (1) mapping tree species cover in primary forests, (2) mapping plant invasions by woody species into forests and open land, and (3) mapping vegetation succession in a glacier foreland. All three case studies resulted in high predictive accuracies. The accuracy increased with increasing tile size (2–5 m), reflecting the increased spatial context captured by a tile. The inclusion of 3D information derived from the photogrammetric workflow did not significantly improve the models. We conclude that CNNs are powerful in harnessing high-resolution data acquired from UAVs to map vegetation patterns. The study was based on low-cost red, green, blue (RGB) sensors, making the method accessible to a wide range of users. Combining UAVs and CNNs will provide tremendous opportunities for ecological applications.
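    The tile-level regression target described above (the per-tile cover fraction of a target species) can be sketched with NumPy. The function name and the use of a binary reference mask are illustrative assumptions, not the authors' actual pipeline:

    ```python
    import numpy as np

    def tile_cover_fractions(label_mask, tile_size):
        """Split a binary species mask into regular tiles and return the
        per-tile cover fraction, i.e. the CNN's regression target."""
        h, w = label_mask.shape
        th, tw = h // tile_size, w // tile_size
        # crop to a whole number of tiles, then reshape into a tile grid
        tiles = label_mask[:th * tile_size, :tw * tile_size].reshape(
            th, tile_size, tw, tile_size)
        # fraction of mask pixels per tile, shape (th, tw)
        return tiles.mean(axis=(1, 3))
    ```

    For training, each RGB tile would then be paired with its fraction and fed to a CNN; the tile size (here in pixels rather than metres) controls how much spatial context the network sees.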

    Land Use and Land Cover Classification Using Deep Learning Techniques

    Large datasets of sub-meter aerial imagery represented as orthophoto mosaics are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to reveal several types of features, for example forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies with many factors, including the time the image was captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the region captured by the image. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible-band multispectral imagery. Recent technological and commercial applications have driven the collection of massive amounts of VHR images in the visible red, green, blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use/land cover (LULC) classification. The benefits of automatic visible-band VHR LULC classification may include applications such as automatic change detection or mapping. Recent work has shown the potential of deep learning approaches for land use classification; however, this thesis improves on the state of the art by applying additional dataset-augmenting approaches that are well suited for geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and accuracy levels are presented to show that the results actually generalize beyond the small benchmarks used in training. Deep networks have many parameters, and therefore they are often built with very large sets of labeled data. Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition in one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers without requiring massive training datasets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a CNN (Convolutional Neural Network) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set.
    Dissertation/Thesis. Masters Thesis, Computer Science, 201
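    The transfer ("refinement") learning idea the thesis relies on, keeping a pretrained feature extractor frozen and retraining only a new classifier head on the small LULC dataset, can be sketched as follows. The random-projection "backbone" and the ridge-regression head are stand-ins chosen to keep the sketch self-contained; the thesis itself fine-tunes an ImageNet-pretrained CNN:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a frozen, pretrained backbone: in the thesis
    # these features come from an ImageNet-pretrained CNN; here a fixed
    # random projection with a ReLU keeps the sketch self-contained.
    W_frozen = rng.normal(size=(16, 32)) / 4.0

    def backbone(X):
        """Frozen feature extractor -- never updated during retraining."""
        return np.maximum(X @ W_frozen, 0.0)

    def fit_head(X, y, reg=1e-2):
        """Train only a new linear head (ridge regression to +/-1 targets)
        on the small labelled LULC set."""
        F = backbone(X)
        A = F.T @ F + reg * np.eye(F.shape[1])
        return np.linalg.solve(A, F.T @ (2.0 * y - 1.0))

    def predict(X, w):
        """Binary class decision from the linear head's score."""
        return (backbone(X) @ w > 0.0).astype(int)
    ```

    Because only the small head is estimated, far less labelled data is needed than training the whole network from scratch, which is the point the thesis makes about refinement learning.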

    Spatially autocorrelated training and validation samples inflate performance assessment of convolutional neural networks

    Deep learning, and particularly Convolutional Neural Networks (CNNs), in concert with remote sensing are becoming standard analytical tools in the geosciences. A series of studies has presented the seemingly outstanding performance of CNNs for predictive modelling. However, the predictive performance of such models is commonly estimated using random cross-validation, which does not account for spatial autocorrelation between training and validation data. Independent of the analytical method, such spatial dependence will inevitably inflate the estimated model performance. This problem is ignored in most CNN-related studies and suggests a flaw in their validation procedure. Here, we demonstrate how neglecting spatial autocorrelation during cross-validation leads to an optimistic model performance assessment, using the example of a tree species segmentation problem in multiple, spatially distributed drone image acquisitions. We evaluated CNN-based predictions with test data sampled from (1) randomly sampled hold-outs and (2) spatially blocked hold-outs. Assuming that block cross-validation provides a realistic model performance, a validation with randomly sampled hold-outs overestimated the model performance by up to 28%. Smaller training sample sizes increased this optimism. Spatial autocorrelation among observations was significantly higher within than between different remote sensing acquisitions. Thus, model performance should be tested with spatial cross-validation strategies and multiple independent remote sensing acquisitions. Otherwise, the estimated performance of any geospatial deep learning method is likely to be overestimated.
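    The contrast between random and spatially blocked hold-outs can be made concrete with a small sketch: samples are assigned to square spatial blocks by their map coordinates, and whole blocks, rather than random samples, are held out. The function names and the grid-based blocking scheme are illustrative assumptions:

    ```python
    import numpy as np

    def spatial_block_folds(coords, block_size):
        """Assign each sample (n x 2 array of map coordinates) to a square
        spatial block; each block becomes one cross-validation fold."""
        blocks = np.floor(coords / block_size).astype(int)
        # one fold id per unique (row, col) block
        _, fold_id = np.unique(blocks, axis=0, return_inverse=True)
        return fold_id.ravel()

    def block_splits(fold_id):
        """Yield (train_idx, test_idx) pairs with whole blocks held out,
        so test samples are never spatial neighbours of training data."""
        for f in np.unique(fold_id):
            yield np.where(fold_id != f)[0], np.where(fold_id == f)[0]
    ```

    Because nearby samples land in the same block, no test sample has a spatially adjacent twin in the training set, which is what removes the optimistic bias described above.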

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
