8 research outputs found

    Deep-learning Versus OBIA for Scattered Shrub Detection with Google Earth Imagery: Ziziphus lotus as Case Study

    There is a growing demand for accurate high-resolution land cover maps in many fields, e.g., land-use planning and biodiversity conservation. Developing such maps has traditionally been performed using Object-Based Image Analysis (OBIA) methods, which usually reach good accuracies but require extensive human supervision, and the best configuration for one image often cannot be extrapolated to a different image. Recently, deep learning Convolutional Neural Networks (CNNs) have shown outstanding results in object recognition in computer vision and are offering promising results in land cover mapping. This paper analyzes the potential of CNN-based methods for detecting plant species of conservation concern in free high-resolution Google Earth™ images and provides an objective comparison with state-of-the-art OBIA methods. We consider as a case study the detection of Ziziphus lotus shrubs, which are protected as a priority habitat under the European Union Habitats Directive. Compared to the best-performing OBIA method, the best CNN detector achieved up to 12% better precision, up to 30% better recall, and up to 20% better balance between precision and recall. Moreover, the knowledge that a CNN acquires from one image can be re-used in other regions, which makes the detection process very fast. A natural conclusion of this work is that including CNN models as classifiers, e.g., a ResNet classifier, could further improve OBIA methods. The provided methodology can be systematically reproduced for the detection of other species using our code, available at https://github.com/EGuirado/CNN-remotesensing.

    Siham Tabik was supported by the Ramón y Cajal Programme (RYC-2015-18136). The work was partially supported by the Spanish Ministry of Science and Technology under the projects TIN2014-57251-P, CGL2014-61610-EXP, CGL2010-22314 and grant JC2015-00316, and by ERDF and the Andalusian Government under the projects GLOCHARID, RNM-7033, P09-RNM-5048 and P11-TIC-7765. This research was also developed as part of project ECOPOTENTIAL, which received funding from the European Union Horizon 2020 Research and Innovation Programme under grant agreement No. 641762, and by the European LIFE project ADAPTAMED LIFE14 CCA/ES/000612.
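    As a rough illustration of the classification step this abstract describes, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 in PyTorch to label image patches as shrub or background. It is not the authors' released code (see the linked repository for that); the folder layout, patch size, and hyperparameters are assumptions.

```python
# Hedged sketch (not the authors' code): fine-tune a ResNet-50 to
# classify image patches as Ziziphus lotus shrub vs. background.
# The "patches/" folder layout and all hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),               # ResNet input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

# Assumes patches exported from Google Earth, one folder per class:
# patches/shrub/*.png and patches/background/*.png
dataset = datasets.ImageFolder("patches", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)    # shrub / background

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                           # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

    A classifier trained this way could then be slid over a full scene as a detector, or plugged into an OBIA pipeline as the per-object classifier, as the abstract suggests.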

    Elevation change of Bhasan Char measured by persistent scatterer interferometry using Sentinel-1 data in a humanitarian context

    This study investigates elevation changes on the island of Bhasan Char, located in the Bay of Bengal, which was selected for the relocation of around 100,000 refugees of the Rohingya minority who were forced to leave their homes in Myanmar. Eighty-nine Sentinel-1 products were analysed using persistent scatterer interferometry (PSI) from August 2016 through September 2019, divided into three one-year periods to reduce the impact of temporal decorrelation. The findings indicate that the island is a recent landform which undergoes naturally induced surface changes with velocities of up to ±20 mm per year. Additional displacement is probably caused by heavy construction loads since early 2018, although we found no statistical evidence for this. The main built-up area shows stable behaviour during the analysed period, but there are significant changes along the coasts and artificial embankments of the island, and within one separate settlement in the north. The moist surface conditions and strong monsoonal rains complicated the retrieval of stable trends, but the sum of findings supports the assumption that the island undergoes strong morphologic dynamics which put the people to be relocated at additional risk. Its suitability for construction has to be investigated in further studies.
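    For readers unfamiliar with PSI, the sketch below shows the core arithmetic behind velocities like those reported here: converting the unwrapped interferometric phase time series of one persistent scatterer into line-of-sight displacement and fitting a linear velocity. It is not this study's processing chain, and the phase values and sign convention are illustrative assumptions.

```python
# Hedged sketch of the core PSI arithmetic (not the study's processing
# chain): unwrapped phase -> line-of-sight displacement -> linear velocity.
import numpy as np

WAVELENGTH = 0.05546  # Sentinel-1 C-band radar wavelength [m]

# Acquisition times in years since the first scene, and the unwrapped
# phase [rad] of one persistent scatterer (made-up placeholder values).
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
phase = np.array([0.0, -0.45, -0.92, -1.33, -1.81, -2.27])

# d = -lambda * phi / (4 * pi); the sign convention varies between
# processors, here increasing phase means motion away from the sensor.
displacement_mm = -WAVELENGTH * phase / (4 * np.pi) * 1000.0

# A linear fit gives the mean line-of-sight velocity in mm/year.
velocity, offset = np.polyfit(t, displacement_mm, 1)
print(f"LOS velocity: {velocity:.1f} mm/yr")  # ~20 mm/yr for these values
```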

    History Behind Rohingya Influx in Bangladesh and Application of Remote Sensing in Monitoring Land Use Change in Refugee Driven Area


    3D point cloud semantic augmentation: Instance segmentation of 360° panoramas by deep learning techniques

    Semantic augmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionised image segmentation and classification, its impact on point clouds is an active research field. In this paper, we propose an instance segmentation and augmentation method for 3D point clouds using deep learning architectures. We show the potential of an indirect approach using 2D images and a Mask R-CNN (Region-Based Convolutional Neural Network). Our method consists of four core steps. First, we project the point cloud onto panoramic 2D images using three types of projection: spherical, cylindrical, and cubic. Next, we homogenise the resulting images, correcting artefacts and empty pixels so that they are comparable to the images available in common training libraries. These images are then used as input to the Mask R-CNN network, designed for 2D instance segmentation. Finally, the obtained predictions are reprojected onto the point cloud to obtain the segmentation results. We link the results to a context-aware neural network to augment the semantics. Several tests were performed on different datasets to assess the adequacy of the method and its potential for generalisation. The developed algorithm uses only the attributes X, Y, Z, and a projection centre (virtual camera) position as inputs.
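    The first and last steps of this pipeline can be sketched compactly. The following Python code, assuming an equirectangular spherical projection and hypothetical function names (it is not the paper's implementation), projects a point cloud onto a panorama around a virtual camera and transfers a per-pixel mask back to per-point labels.

```python
# Hedged sketch (not the paper's implementation): spherical projection
# of a point cloud to an equirectangular panorama, plus back-projection
# of a 2D instance mask to per-point labels.
import numpy as np

def spherical_project(points, center, width=2048, height=1024):
    """Project XYZ points around a virtual camera onto a panorama.

    Returns integer pixel coordinates (u, v) and the range of each point.
    """
    p = points - center
    r = np.linalg.norm(p, axis=1)
    azimuth = np.arctan2(p[:, 1], p[:, 0])      # [-pi, pi]
    elevation = np.arcsin(p[:, 2] / r)          # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return u, v, r

def back_project(mask, u, v):
    """Transfer a per-pixel instance mask back to the projected points."""
    return mask[v, u]                           # one label per point

# Usage with random stand-in data; a real cloud would come from a scan,
# and the mask would come from Mask R-CNN rather than being all zeros.
points = np.random.uniform(-10.0, 10.0, size=(100_000, 3))
u, v, r = spherical_project(points, center=np.zeros(3))
mask = np.zeros((1024, 2048), dtype=np.int32)
labels = back_project(mask, u, v)
```

    A production version would also need a z-buffer (keeping only the nearest point per pixel) and the hole-filling and homogenisation step the abstract describes.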