
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
    Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing

    Towards a 20m global building map from Sentinel-1 SAR Data

    This study introduces a technique for automatically mapping built-up areas using synthetic aperture radar (SAR) backscattering intensity and interferometric multi-temporal coherence generated from Sentinel-1 data in the framework of the Copernicus program. The underlying hypothesis is that, in SAR images, built-up areas exhibit very high backscattering values that are coherent in time. Several particular characteristics of the Sentinel-1 mission are put to good use, such as its short revisit time, the availability of dual-polarized data, and its small orbital tube. The newly developed algorithm is based on an adaptive parametric thresholding that first identifies pixels with high backscattering values in both the VV and VH polarimetric channels. The interferometric SAR coherence is then used to reduce false alarms caused by land cover classes (other than buildings) that exhibit high backscattering values that are not coherent in time (e.g., certain types of vegetated areas). The algorithm was tested on Sentinel-1 Interferometric Wide Swath data from five test sites located in semiarid and arid regions of the Mediterranean and Northern Africa. The resulting building maps were compared with the Global Urban Footprint (GUF) derived from TerraSAR-X mission data, and, on average, a 92% agreement was obtained.
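The two-stage idea described above, adaptive backscatter thresholding followed by a coherence check, can be sketched in a few lines. This is a minimal illustrative numpy version, not the paper's algorithm: the 95th-percentile thresholds and the 0.5 coherence cutoff are assumed values, and the synthetic arrays stand in for calibrated VV/VH backscatter (in dB) and multi-temporal coherence.

```python
import numpy as np

def map_buildings(vv_db, vh_db, coherence, pctl=95, coh_min=0.5):
    """Flag pixels that are bright in BOTH polarimetric channels, then
    keep only those that are also temporally coherent. The percentile
    and coherence cutoffs are illustrative, not the paper's values."""
    # Stage 1: adaptive (image-derived) backscatter thresholds per channel
    thr_vv = np.percentile(vv_db, pctl)
    thr_vh = np.percentile(vh_db, pctl)
    bright = (vv_db >= thr_vv) & (vh_db >= thr_vh)
    # Stage 2: suppress bright-but-incoherent false alarms (e.g. vegetation)
    return bright & (coherence >= coh_min)

# Synthetic 100x100 scene: low-backscatter background plus one bright,
# temporally coherent block playing the role of a built-up area.
rng = np.random.default_rng(0)
vv = rng.normal(-15.0, 2.0, (100, 100))
vh = rng.normal(-20.0, 2.0, (100, 100))
coh = rng.uniform(0.0, 0.3, (100, 100))
vv[40:50, 40:50] += 20.0
vh[40:50, 40:50] += 20.0
coh[40:50, 40:50] = 0.8
mask = map_buildings(vv, vh, coh)
```

The coherence test in the second stage is what removes the isolated background pixels that happen to clear both backscatter thresholds by chance.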

    Semi-supervised Convolutional Neural Networks for Flood Mapping using Multi-modal Remote Sensing Data

    When floods hit populated areas, quick detection of the flooded areas is crucial for the initial response by local governments, residents, and volunteers. Space-borne polarimetric synthetic aperture radar (PolSAR) is an authoritative data source for flood mapping since it can be acquired immediately after a disaster, even at night or in cloudy weather. Conventionally, a great deal of domain-specific heuristic knowledge has been applied to PolSAR flood mapping, but its performance still suffers from confusing pixels caused by irregular reflections of radar waves. Optical images are another data source that can be used to detect flooded areas due to their high spectral correlation with open water surfaces. However, they are often affected by day, night, or severe weather conditions (i.e., cloud cover). This paper presents a convolutional neural network (CNN) based multimodal approach utilizing the advantages of both PolSAR and optical images for flood mapping. First, reference training data are retrieved from optical images by manual annotation. Since clouds may appear in the optical image, only areas with a clear view of flooded or non-flooded ground are annotated. Then, a semi-supervised polarimetric-features-aided CNN is utilized for flood mapping using PolSAR data. The proposed model not only handles the issue of learning with incomplete ground truth but also leverages a large portion of unlabelled pixels for learning. Moreover, our model takes advantage of expert knowledge on scattering interpretation by incorporating polarimetric features as the input. Experimental results are given for the flood event that occurred in Sendai, Japan, on 12 March 2011. The experiments show that our framework can map flooded areas with high accuracy (F1 = 96.12) and outperforms conventional flood mapping methods.
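The "incomplete ground truth" aspect, training only on pixels that had a clear view in the optical reference, can be illustrated with a masked loss. The sketch below substitutes a per-pixel logistic model for the paper's polarimetric-features-aided CNN; the three features, the 40% cloud fraction, and the learning rate are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Pixels obscured by cloud get label -1 and are masked out of the loss,
# so the model learns only from pixels with a clear flooded / dry label.
rng = np.random.default_rng(1)
n = 2000
features = rng.normal(size=(n, 3))              # stand-in polarimetric features
true_w = np.array([2.0, -1.0, 0.5])
labels = (features @ true_w > 0).astype(float)  # 1 = flooded, 0 = dry
labels[rng.random(n) < 0.4] = -1                # ~40% cloud-obscured: unlabeled

mask = labels != -1                             # train on labeled pixels only
w = np.zeros(3)
for _ in range(200):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(features @ w)))   # per-pixel flood probability
    # masked cross-entropy gradient: unlabeled pixels contribute nothing
    grad = features[mask].T @ (p[mask] - labels[mask]) / mask.sum()
    w -= 1.0 * grad

acc = ((features @ w > 0) == (labels == 1))[mask].mean()
```

The same masking pattern carries over to a CNN loss: annotated cloud-free pixels drive the gradient, while cloud-covered pixels are simply excluded rather than given a guessed label.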

    Polarimetric Synthetic Aperture Radar

    This open access book focuses on the practical application of electromagnetic polarimetry principles in Earth remote sensing, with an educational purpose. In the last decade, the operation of fully polarimetric synthetic aperture radars such as the Japanese ALOS/PALSAR, the Canadian RADARSAT-2, and the German TerraSAR-X, together with easy data access for scientific use, has further developed research and data applications at L-, C-, and X-band. As a consequence, the wider distribution of polarimetric data sets across the remote sensing community has boosted activity and development in polarimetric SAR applications, also in view of future missions. Numerous experiments with real data from spaceborne platforms are shown, with the aim of giving an up-to-date and complete treatment of the unique benefits of fully polarimetric synthetic aperture radar data in five different domains: forest, agriculture, cryosphere, urban, and oceans.

    Flood mapping in vegetated areas using an unsupervised clustering approach on Sentinel-1 and -2 imagery

    The European Space Agency's Sentinel-1 constellation provides timely and freely available dual-polarized C-band synthetic aperture radar (SAR) imagery. The launch of these and other SAR sensors has boosted the field of SAR-based flood mapping. However, flood mapping in vegetated areas remains a topic under investigation, as backscatter is the result of a complex mixture of backscattering mechanisms and strongly depends on the wave and vegetation characteristics. In this paper, we present an unsupervised object-based clustering framework capable of mapping flooding in the presence and absence of flooded vegetation based only on freely and globally available data. Based on a SAR image pair, the region of interest is segmented into objects, which are converted to a SAR-optical feature space and clustered using K-means. These clusters are then classified based on automatically determined thresholds, and the resulting classification is refined by means of several region-growing post-processing steps. The final outcome discriminates between dry land, permanent water, open flooding, and flooded vegetation. Forested areas, which might hide flooding, are indicated as well. The framework is demonstrated on four case studies, two of which contain flooded vegetation. For the optimal parameter combination, three-class F1 scores between 0.76 and 0.91 are obtained, depending on the case, and the pixel- and object-based thresholding benchmarks are outperformed. Furthermore, this framework allows easy integration of additional data sources when these become available.
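The clustering step, image objects represented as points in a SAR-optical feature space and grouped with K-means, can be sketched as follows. The two features (a SAR backscatter value in dB and an optical water index) and the three synthetic object populations are assumptions chosen for illustration; the paper's full pipeline additionally includes segmentation, automatic thresholding of the clusters, and region-growing refinement.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm with a greedy farthest-point initialisation."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])            # farthest point from chosen set
    centers = np.array(centers)
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dist.argmin(axis=1)             # nearest-center assignment
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign, centers

# Three synthetic object populations in a (backscatter dB, water index) space:
rng = np.random.default_rng(2)
dry   = rng.normal([-8.0, -0.3], 0.5, (100, 2))   # dry land
water = rng.normal([-18.0, 0.4], 0.5, (100, 2))   # permanent / open water
fveg  = rng.normal([-4.0, 0.1], 0.5, (100, 2))    # flooded vegetation
X = np.vstack([dry, water, fveg])
assign, centers = kmeans(X, k=3)
```

Because K-means only groups objects, a labeling step (the automatically determined thresholds mentioned in the abstract) is still needed afterwards to decide which cluster is water, which is dry land, and which is flooded vegetation.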

    Sentinel-1 InSAR coherence for land cover mapping: a comparison of multiple feature-based classifiers

    This article investigates and demonstrates the suitability of Sentinel-1 interferometric coherence for land cover and vegetation mapping. In addition, this study analyzes the performance of this feature along with polarization and intensity products under different classification strategies and algorithms. Seven different classification workflows were evaluated, covering pixel- and object-based analyses, unsupervised and supervised classification, different machine-learning classifiers, and the various effects of distinct input features in the SAR domain: interferometric coherence, backscattered intensities, and polarization. All classifications followed the CORINE land cover nomenclature. Three different study areas in Europe were selected during 2015 and 2016 campaigns to maximize the diversity of land cover. Overall accuracies (OA) ranging from 70% to 90% were achieved depending on the study area and methodology, considering between 9 and 15 classes. The best results were achieved in the rather flat area of the Doñana National Park wetlands in Spain (OA 90%), but even the challenging alpine terrain around the city of Merano in northern Italy (OA 77%) yielded promising results. The overall potential of Sentinel-1 interferometric coherence for land cover mapping was evaluated as very good. In all cases, coherence-based results provided higher accuracies than intensity-based strategies, considering the 12-day temporal sampling of the Sentinel-1A stack. Coherence and intensity prove to be complementary observables, increasing the overall accuracies in a combined strategy. The accuracy is expected to increase further when Sentinel-1A/B stacks, i.e., six-day sampling, are considered.