
    A dual network for super-resolution and semantic segmentation of sentinel-2 imagery

    There is growing interest in the development of automated data processing workflows that provide reliable, high-spatial-resolution land cover maps. However, high-resolution remote sensing images are not always affordable. Taking advantage of the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model that generates high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: the Single Image Super-Resolution branch, which reconstructs a high-resolution version of the input image, and the Semantic Segmentation Super-Resolution branch, which predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing it and achieving an improvement of 5% in IoU and almost 10% in recall. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach.

    This work has been supported by the Spanish Research Agency (AEI) under project PID2020-117142GB-I00 of the call MCIN/AEI/10.13039/501100011033. L.S. acknowledges the BECAL (Becas Carlos Antonio López) scholarship for financial support.
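    The scaling factor of 2 in the super-resolution branch is typically realized with a learned upsampling layer; one common building block for this (not necessarily the one the authors use) is the pixel-shuffle, or depth-to-space, operation. A minimal NumPy sketch:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r), as done by
    sub-pixel upsampling layers in super-resolution networks."""
    c_out = x.shape[0] // (r * r)
    h, w = x.shape[1], x.shape[2]
    x = x.reshape(c_out, r, r, h, w)   # split the channel dim into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (C, H, r, W, r)
    return x.reshape(c_out, h * r, w * r)

lowres = np.arange(4 * 2 * 2).reshape(4, 2, 2)  # 4 channels, 2x2 spatial
highres = pixel_shuffle(lowres, 2)              # 1 channel, 4x4 spatial
```

    Each group of r*r input channels is redistributed into an r-by-r spatial block, so resolution doubles (for r = 2) without interpolation.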

    Land cover and forest segmentation using deep neural networks

    Abstract. Land Use and Land Cover (LULC) information is important for a variety of applications, notably those related to forestry. The segmentation of remotely sensed images has attracted considerable research attention. However, it is no easy task, with challenges including the complexity of satellite images, the difficulty of obtaining them, and the lack of ready-made datasets. Classifying into multiple classes requires elaborate methods such as Deep Learning (DL), and Deep Neural Networks (DNNs) are a promising candidate for the task. However, DNNs require a huge amount of training data, including Ground Truth (GT) data. In this thesis, a DL pixel-based approach backed by state-of-the-art semantic segmentation methods is followed to tackle the problem of LULC mapping. The DNN used is based on the DeepLabv3 network with an encoder-decoder architecture. To address the lack of data, imagery from the Sentinel-2 satellite, whose data is provided for free by Copernicus, was used with GT mapping from Corine Land Cover (CLC), also provided by Copernicus and modified by Tyke to a higher resolution. From the multispectral Sentinel-2 images, the Red, Green, Blue (RGB) and Near Infrared (NIR) channels were extracted, the fourth channel being extremely useful for the detection of vegetation. A DNN based on ResNet-50 achieved good accuracy, measured with the Mean Intersection over Union (MIoU) metric, reaching 0.53 MIoU. It was then possible to transfer the learning to data from the Pleiades-1 satellite, which offers much better, Very High Resolution (VHR) imagery. The results were excellent, especially when compared with training directly on that data, reaching an accuracy of 0.98 and 0.85 MIoU.
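    The MIoU figures quoted above average the per-class intersection-over-union across the classes that actually occur. A minimal NumPy version of the metric (illustrative, not the thesis code):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean Intersection over Union over classes that occur in either
    the predicted or the ground-truth segmentation map."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]])
gt   = np.array([[0, 1], [1, 1]])
score = mean_iou(pred, gt, n_classes=2)  # class 0: 1/2, class 1: 2/3
```

    Averaging per class rather than per pixel keeps rare classes from being drowned out by dominant ones, which matters for imbalanced land cover maps.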

    Deep learning for semantic segmentation of remote sensing imaging

    At present, thousands of satellites orbiting our planet are constantly gathering information. Key to the automated processing and analysis of this remotely sensed data, deep learning techniques are settling in as the main methodology of choice for this task. The present study provides an end-to-end deep learning algorithm that solves the remote sensing task of land cover land use classification, which is used in numerous applications such as monitoring environmental changes or urban planning. Specifically, we propose to use the DeepLabV3+ architecture to semantically segment images captured by the Sentinel-2 satellite, producing meaningful land cover land use mappings.
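    Rendering the per-pixel class predictions as a land cover map is typically a palette lookup. A short sketch with a hypothetical three-class palette (the class names and colors are illustrative, not from the study):

```python
import numpy as np

# Hypothetical palette: 0 = water (blue), 1 = forest (green), 2 = urban (gray)
PALETTE = np.array([[0, 0, 255], [34, 139, 34], [128, 128, 128]], dtype=np.uint8)

def colorize(mask, palette=PALETTE):
    """Map an (H, W) integer class mask to an (H, W, 3) RGB image
    by indexing the palette with the class ids."""
    return palette[mask]

mask = np.array([[0, 1], [2, 1]])  # toy 2x2 segmentation output
rgb = colorize(mask)               # 2x2x3 RGB land cover map
```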

    SEN12MS -- A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion

    The availability of curated large-scale training data is a crucial factor for the development of well-generalizing deep learning methods for the extraction of geoinformation from multi-sensor remote sensing imagery. While a number of datasets have already been published by the community, most of them suffer from rather strong limitations, e.g. regarding spatial coverage, diversity, or simply the number of available samples. Exploiting the freely available data acquired by the Sentinel satellites of the Copernicus program implemented by the European Space Agency, as well as the cloud computing facilities of Google Earth Engine, we provide a dataset consisting of 180,662 triplets of dual-pol synthetic aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches, and MODIS land cover maps. With all patches fully georeferenced at a 10 m ground sampling distance and covering all inhabited continents during all meteorological seasons, we expect the dataset to support the community in developing sophisticated deep learning-based approaches for common tasks such as scene classification or semantic segmentation for land cover mapping.

    Comment: accepted for publication in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (online from September 2019).
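    Each sample in the dataset is a triplet of co-registered patches (Sentinel-1 SAR, Sentinel-2 multispectral, MODIS land cover), so a data loader mostly needs to pair the three modalities per patch. A small helper sketching that pairing, using a hypothetical file layout for illustration (the real dataset's naming scheme may differ):

```python
from pathlib import Path

def triplet_paths(root, scene_id, patch_id):
    """Return the three file paths of one SAR / optical / land-cover triplet.
    The directory and file naming here is illustrative, not the dataset's
    actual layout."""
    root = Path(root)
    return {
        "s1": root / "s1" / f"{scene_id}_s1_p{patch_id}.tif",
        "s2": root / "s2" / f"{scene_id}_s2_p{patch_id}.tif",
        "lc": root / "lc" / f"{scene_id}_lc_p{patch_id}.tif",
    }

paths = triplet_paths("SEN12MS", "ROI0001", 42)
```

    Keeping the pairing logic in one place ensures the SAR, optical, and label patches fed to a fusion model always refer to the same georeferenced footprint.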