
    Mask R-CNN and OBIA Fusion Improves the Segmentation of Scattered Vegetation in Very High-Resolution Optical Sensors

    This research was funded by the European Research Council (ERC Grant agreement 647038 [BIODESERT]), the European LIFE Project ADAPTAMED LIFE14 CCA/ES/000612, the projects RH2OARID (P18-RT-5130) and RESISTE (P18-RT-1927) funded by the Consejeria de Economia, Conocimiento, Empresas y Universidad of the Junta de Andalucia, and by the projects A-TIC-458-UGR18 and DETECTOR (A-RNM-256-UGR18), with the contribution of the European Union Funds for Regional Development. E.R-C was supported by the HIPATIA-UAL fellowship, funded by the University of Almeria. S.T. is supported by the Ramon y Cajal Program of the Spanish Government (RYC-2015-18136).

    Vegetation generally appears scattered in drylands. Its structure, composition, and spatial patterns are key controls of biotic interactions and of water and nutrient cycles. Applying segmentation methods to very high-resolution images for monitoring changes in vegetation cover can provide relevant information for dryland conservation ecology. For this reason, improving segmentation methods and understanding the effect of spatial resolution on segmentation results are key to improving dryland vegetation monitoring. We explored and analyzed the accuracy of Object-Based Image Analysis (OBIA), Mask Region-based Convolutional Neural Networks (Mask R-CNN), and the fusion of both methods in the segmentation of scattered vegetation in a dryland ecosystem. As a case study, we mapped Ziziphus lotus, the dominant shrub of a habitat of conservation priority in one of the driest areas of Europe. Our results show for the first time that fusing the results of OBIA and Mask R-CNN increases the accuracy of the segmentation of scattered shrubs by up to 25% compared with either method alone.
Hence, by fusing OBIA and Mask R-CNN on very high-resolution images, the improved segmentation accuracy of vegetation mapping would lead to more precise and sensitive monitoring of changes in biodiversity and ecosystem services in drylands.
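The abstract does not state the exact fusion rule, so the following is only a minimal NumPy sketch of one plausible combination, a pixel-wise union of the two binary masks, scored with intersection-over-union on a toy example (all arrays are hypothetical, not data from the paper):

```python
import numpy as np

def fuse_masks(mask_obia, mask_rcnn):
    """Fuse two binary segmentation masks by pixel-wise union:
    a pixel is kept as 'shrub' if either method detected it."""
    return np.logical_or(mask_obia, mask_rcnn).astype(np.uint8)

def iou(pred, truth):
    """Intersection-over-union of a binary mask against a reference."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 4x4 scene: each method misses a different part of the shrub.
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
obia  = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
rcnn  = np.array([[0, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
fused = fuse_masks(obia, rcnn)
```

A union keeps any pixel flagged by either method; other rules (intersection, confidence-weighted voting) are equally possible and would be chosen by validating against reference polygons.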

    Deep-learning Versus OBIA for Scattered Shrub Detection with Google Earth Imagery: Ziziphus lotus as Case Study

    There is a growing demand for accurate high-resolution land cover maps in many fields, e.g., land-use planning and biodiversity conservation. Developing such maps has traditionally been performed using Object-Based Image Analysis (OBIA) methods, which usually reach good accuracies but require extensive human supervision, and the best configuration for one image often cannot be extrapolated to a different image. Recently, deep learning Convolutional Neural Networks (CNNs) have shown outstanding results in object recognition in computer vision and are offering promising results in land cover mapping. This paper analyzes the potential of CNN-based methods for detecting plant species of conservation concern using free high-resolution Google Earth imagery and provides an objective comparison with state-of-the-art OBIA methods. As a case study, we consider the detection of Ziziphus lotus shrubs, which are protected as a priority habitat under the European Union Habitats Directive. Compared to the best-performing OBIA method, the best CNN detector achieved up to 12% better precision, up to 30% better recall, and up to 20% better balance between precision and recall. Moreover, the knowledge that a CNN acquires from one image can be reused in other regions, which makes the detection process very fast. A natural conclusion of this work is that including CNN models as classifiers, e.g., a ResNet classifier, could further improve OBIA methods.
The provided methodology can be systematically reproduced for the detection of other species using our code, available at https://github.com/EGuirado/CNN-remotesensing.

    Siham Tabik was supported by the Ramón y Cajal Programme (RYC-2015-18136). The work was partially supported by the Spanish Ministry of Science and Technology under the projects TIN2014-57251-P, CGL2014-61610-EXP, CGL2010-22314 and grant JC2015-00316, and by ERDF and the Andalusian Government under the projects GLOCHARID, RNM-7033, P09-RNM-5048 and P11-TIC-7765. This research was also developed as part of the project ECOPOTENTIAL, which received funding from the European Union Horizon 2020 Research and Innovation Programme under grant agreement No. 641762, and by the European LIFE Project ADAPTAMED LIFE14 CCA/ES/000612.
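The precision, recall, and balance between them (the F1-score) quoted above are standard detection metrics. A small sketch of how they are computed from true-positive, false-positive, and false-negative counts (the counts below are illustrative, not the paper's):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 (their harmonic mean) from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)          # fraction of detections that are real
    recall = tp / (tp + fn)             # fraction of real shrubs detected
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a shrub detector (not the paper's numbers):
p, r, f = precision_recall_f1(tp=80, fp=10, fn=20)
```

A detector can trade precision against recall by adjusting its decision threshold, which is why the abstract reports the balance between the two as a separate figure.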

    Hierarchical mapping of Brazilian Savanna (Cerrado) physiognomies based on deep learning

    The Brazilian Savanna, also known as Cerrado, is considered a global hotspot for biodiversity conservation. The detailed mapping of its vegetation types, called physiognomies, is still a challenge due to their high spectral similarity and spatial variability. There are three major ecosystem groups (forest, savanna, and grassland), which can be hierarchically subdivided into 25 detailed physiognomies according to a well-known classification system. We used an adapted U-net architecture to process a WorldView-2 image with 2-m spatial resolution and hierarchically classify the physiognomies of a Cerrado protected area using deep learning techniques. Several spectral channels were tested as input datasets for classifying the three major ecosystem groups (first level of classification). The dataset composed of the RGB bands plus the two-band enhanced vegetation index (EVI2) achieved the best performance and was used to perform the hierarchical classification. In the first level of classification, the overall accuracy was 92.8%. For the detailed savanna and grassland physiognomies (second level of classification), accuracies of 86.1% and 85.0% were reached, respectively. As the first work to classify Cerrado physiognomies at this level of detail using deep learning, our accuracy rates outperformed those of traditional machine learning algorithms applied to this task.
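The two-band enhanced vegetation index used as an extra input channel is commonly computed from the red and near-infrared reflectances. A sketch using the standard EVI2 formulation (the paper may scale its inputs differently; the reflectance values below are illustrative):

```python
def evi2(nir, red):
    """Two-band enhanced vegetation index:
    EVI2 = 2.5 * (NIR - Red) / (NIR + 2.4 * Red + 1),
    with NIR and red given as surface reflectances in [0, 1]."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1)

# Illustrative reflectances: dense green vegetation vs. sparse cover.
dense = evi2(nir=0.45, red=0.05)
sparse = evi2(nir=0.25, red=0.20)
```

Dense vegetation reflects strongly in the near-infrared and absorbs red light, so it yields a markedly higher EVI2 than sparse cover, which is what makes the index a useful extra channel for separating vegetation densities.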

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted from emitted or reflected radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions for sensing and detecting various phenomena such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches, and has led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    Rapid Mapping of Landslides in the Western Ghats (India) Triggered by 2018 Extreme Monsoon Rainfall Using a Deep Learning Approach

    Rainfall-induced landslide inventories can be compiled from remote sensing and topographical data using either traditional or semi-automatic supervised methods. In this study, we used PlanetScope imagery and deep learning convolutional neural networks (CNNs) to map the 2018 rainfall-induced landslides in the Kodagu district of Karnataka state in the Western Ghats of India. We used fourfold cross-validation (CV) to select the training and testing data and remove any random effects from the model results. Topographic slope data were used as auxiliary information to increase the performance of the model. The resulting landslide inventory map, created using the slope data together with the spectral information, reduces false positives, which helps to distinguish landslide areas from similar-looking features such as barren lands and riverbeds. Although including the slope data did not increase the true positives, the overall accuracy was higher than when the model was trained on spectral information alone. The mean accuracy of correctly classified landslide pixels was 65.5% when using only optical data, which increased to 78% with the use of slope data. The methodology presented in this research can be applied in other landslide-prone regions, and the results can be used to support hazard mitigation in such regions.
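Fourfold cross-validation as described above partitions the samples into four disjoint folds, training on three and testing on the held-out one in turn. A minimal sketch (the function name, fold count, and seed are illustrative, not from the paper):

```python
import random

def kfold_indices(n_samples, k=4, seed=0):
    """Shuffle sample indices and split them into k disjoint folds,
    yielding a (train, test) index pair for each fold in turn."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Example: 100 image tiles split into 4 folds of 25 test tiles each.
splits = list(kfold_indices(100, k=4))
```

Averaging the accuracy over the four held-out folds removes the dependence of the reported score on any single random train/test split.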

    Semantic segmentation of Brazilian Savanna vegetation using high spatial resolution satellite data and U-net

    Large-scale mapping of the Brazilian Savanna (Cerrado) vegetation using remote sensing images is still a challenge due to the high spatial variability and spectral similarity of the different characteristic vegetation types (physiognomies). In this paper, we report on the semantic segmentation of the three major groups of physiognomies in the Cerrado biome (Grasslands, Savannas, and Forests) using a fully convolutional neural network approach. The study area, which covers a Brazilian conservation unit, was divided into three regions to enable testing the approach on regions not used in the training phase. A WorldView-2 image was used in cross-validation experiments, in which the average overall accuracy achieved with the pixel-wise classifications was 87.0%. The F1-scores obtained with the approach for the classes Grassland, Savanna, and Forest were 0.81, 0.90, and 0.88, respectively. Visual assessment of the semantic segmentation outcomes was also performed and confirmed the quality of the results. It was observed that confusion among classes occurs mainly in transition areas, where physiognomies are adjacent on a scale of increasing density, which agrees with previous studies on natural vegetation mapping for the Cerrado biome. © Authors 2020. All rights reserved.
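Pixel-wise overall accuracy and per-class F1-scores like those reported above can both be derived from a confusion matrix. A sketch with a hypothetical 3-class matrix (the numbers are illustrative, not the paper's):

```python
import numpy as np

def per_class_f1(cm):
    """Per-class F1 scores from a square confusion matrix
    (rows = reference class, columns = predicted class)."""
    tp = np.diag(cm).astype(float)      # correctly classified pixels
    fp = cm.sum(axis=0) - tp            # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp            # reference pixels that were missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion matrix for (Grassland, Savanna, Forest):
cm = np.array([[90,  8,  2],
               [10, 85,  5],
               [ 2,  6, 92]])
f1 = per_class_f1(cm)
overall_accuracy = np.trace(cm) / cm.sum()
```

Per-class F1 complements overall accuracy because it exposes classes that the segmentation confuses even when most pixels are classified correctly, such as the transition areas between adjacent physiognomies mentioned above.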

    CyberGIS-enabled remote sensing data analytics for deep learning of landscape patterns and dynamics

    Mapping landscape patterns and dynamics is essential to various scientific domains and many practical applications. The availability of large-scale, high-resolution light detection and ranging (LiDAR) remote sensing data provides tremendous opportunities to unveil complex landscape patterns and better understand landscape dynamics from a 3D perspective. LiDAR data have been applied to diverse remote sensing applications, among which large-scale landscape mapping is one of the most important topics. While researchers have used LiDAR to understand landscape patterns and dynamics in many fields, fully reaping its benefits and potential increasingly depends on advanced cyberGIS and deep learning approaches. In this context, the central goal of this dissertation is to develop a suite of innovative cyberGIS-enabled deep learning frameworks that combine LiDAR and optical remote sensing data to analyze landscape patterns and dynamics through four interrelated studies. The first study demonstrates a high-accuracy land-cover mapping method that integrates 3D information from LiDAR with multi-temporal remote sensing data using a 3D deep learning model. The second study combines a point-based classification algorithm and an object-oriented change detection strategy for urban building change detection using deep learning. The third study develops a deep learning model for accurate hydrological streamline detection using LiDAR, which has paved a new way of harnessing LiDAR data to map landscape patterns and dynamics at unprecedented computational and spatiotemporal scales. The fourth study resolves computational challenges in handling remote sensing big data and in the deep learning of landscape feature extraction and classification through a cutting-edge cyberGIS approach.