
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Should we embrace deep learning as the key to everything, or should we resist a 'black-box' solution? Opinions in the remote sensing community are divided. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization.
    Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
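
    Of the listed challenges, (vi) transfer learning is concrete enough to sketch. Below is a minimal illustration of the standard fine-tuning recipe, assuming an ImageNet-pretrained ResNet-18 backbone; the survey does not prescribe an architecture, and the frozen-backbone strategy and class count here are illustrative only.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained backbone on a small
# remote sensing scene-classification set. The ResNet-18 choice and the
# frozen-feature strategy are assumptions for illustration.
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_rs_classes: int, freeze_backbone: bool = True):
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        for p in net.parameters():
            p.requires_grad = False          # keep ImageNet features fixed
    # Replace the 1000-way ImageNet head with a new trainable RS head.
    net.fc = nn.Linear(net.fc.in_features, num_rs_classes)
    return net
```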

    A framework of rapid regional tsunami damage recognition from post-event TerraSAR-X imagery using deep neural networks

    Near real-time building damage mapping is an indispensable prerequisite for governments to make decisions for disaster relief. With high-resolution synthetic aperture radar (SAR) systems such as TerraSAR-X, the provision of such products in a fast and effective way becomes possible. In this letter, a deep learning-based framework for rapid regional tsunami damage recognition using post-event SAR imagery is proposed. To perform such rapid damage mapping, a tile-based image splitting analysis is employed to generate the data set. Next, a selection algorithm based on the SqueezeNet network is developed to swiftly distinguish between built-up (BU) and non-built-up regions. Finally, a recognition algorithm based on a modified wide residual network is developed to classify the BU regions into washed-away, collapsed, and slightly damaged regions. Experiments performed on TerraSAR-X data from the 2011 Tohoku earthquake and tsunami in Japan show a BU region extraction accuracy of 80.4% and a damage-level recognition accuracy of 74.8%. Our framework takes around 2 h to train on a new region, and only a few minutes for prediction.
    This work was supported in part by JST CREST, Japan, under Grant JPMJCR1411 and in part by the China Scholarship Council. (JPMJCR1411 - JST CREST, Japan; China Scholarship Council)
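
    A minimal sketch of the two-stage tile classification described above, using stock torchvision backbones as stand-ins for the paper's networks. The tile shape, the replicated SAR channel, and the head replacements are assumptions, not details from the paper.

```python
# Hedged sketch of the two-stage pipeline: stage 1 screens built-up (BU)
# tiles, stage 2 assigns a damage level to BU tiles only. Single-channel
# SAR tiles are assumed to be replicated to 3 channels for these backbones.
import torch
import torch.nn as nn
from torchvision import models

NUM_DAMAGE_CLASSES = 3  # washed away / collapsed / slightly damaged (per the abstract)

def make_stage1():
    """Built-up vs. non-built-up screener (binary)."""
    net = models.squeezenet1_1(weights=None)
    # SqueezeNet's classifier head is a 1x1 conv; swap it for 2 classes.
    net.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)
    return net

def make_stage2():
    """Damage-level classifier on BU tiles (3 classes)."""
    net = models.wide_resnet50_2(weights=None)  # stand-in for the paper's modified WRN
    net.fc = nn.Linear(net.fc.in_features, NUM_DAMAGE_CLASSES)
    return net

@torch.no_grad()
def classify_tiles(tiles, stage1, stage2):
    """tiles: (N, 3, H, W) float tensor of SAR image tiles."""
    stage1.eval(); stage2.eval()
    built_up = stage1(tiles).argmax(dim=1) == 1     # stage 1: BU screening
    labels = torch.full((tiles.shape[0],), -1)      # -1 marks non-built-up tiles
    if built_up.any():
        labels[built_up] = stage2(tiles[built_up]).argmax(dim=1)  # stage 2
    return labels
```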

    Two-Phase Object-Based Deep Learning for Multi-Temporal SAR Image Change Detection

    Change detection is one of the fundamental applications of synthetic aperture radar (SAR) images. However, speckle noise present in SAR images has a negative effect on change detection, leading to frequent false alarms in the mapping products. In this research, a novel two-phase object-based deep learning approach is proposed for multi-temporal SAR image change detection. Compared with traditional methods, the proposed approach brings two main innovations. One is to classify all pixels into three categories rather than two: unchanged pixels, changed pixels caused by strong speckle (false changes), and changed pixels formed by real terrain variation (real changes). The other is to group neighbouring pixels into superpixel objects so as to exploit local spatial context. The methodology comprises two phases: (1) generate objects using the simple linear iterative clustering (SLIC) algorithm, and discriminate these objects into changed and unchanged classes using fuzzy c-means (FCM) clustering and a deep PCANet; the output of this phase is the set of changed and unchanged superpixels. (2) Apply deep learning to the pixel sets over the changed superpixels only, obtained in the first phase, to discriminate real changes from false changes. SLIC is employed again to derive new superpixels in the second phase, and low-rank and sparse decomposition is applied to them to significantly suppress speckle noise. A further FCM clustering step is then applied to these new superpixels, and a new PCANet is trained to classify the two kinds of changed superpixels and produce the final change maps. Numerical experiments demonstrate that, compared with benchmark methods, the proposed approach can distinguish real changes from false changes effectively, with significantly reduced false alarm rates, achieving up to 99.71% change detection accuracy on multi-temporal SAR imagery.
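
    The first phase (SLIC superpixels plus FCM clustering of a difference image) can be sketched compactly. The log-ratio difference operator, the two-cluster setting, and the hand-rolled FCM below are common choices assumed for illustration; the paper's PCANet refinement is omitted.

```python
# Hedged sketch of Phase 1: SLIC superpixels on the log-ratio image,
# then fuzzy c-means over mean superpixel intensities to separate
# changed from unchanged objects.
import numpy as np
from skimage.segmentation import slic

def fcm(x, c=2, m=2.0, iters=100):
    """Minimal fuzzy c-means on 1-D features x (n,); returns memberships (n, c)."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=len(x))     # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)      # membership-weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))           # standard FCM update
        u /= u.sum(axis=1, keepdims=True)          # rows sum to 1
    return u, centers

def phase1_change_objects(img_t1, img_t2, n_segments=2000):
    """Return a boolean change mask from two co-registered SAR intensity images."""
    log_ratio = np.abs(np.log((img_t2 + 1e-6) / (img_t1 + 1e-6)))
    segments = slic(log_ratio, n_segments=n_segments, compactness=0.1,
                    channel_axis=None)             # superpixel objects
    ids = np.unique(segments)
    means = np.array([log_ratio[segments == i].mean() for i in ids])
    u, centers = fcm(means, c=2)
    changed_cluster = np.argmax(centers)           # higher log-ratio = changed
    changed_ids = ids[u[:, changed_cluster] > 0.5]
    return np.isin(segments, changed_ids)
```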

    Coastal Aquaculture Extraction Using GF-3 Fully Polarimetric SAR Imagery: A Framework Integrating UNet++ with Marker-Controlled Watershed Segmentation

    Coastal aquaculture monitoring is vital for sustainable offshore aquaculture management. However, the dense distribution and varied sizes of aquaculture ponds make it challenging to extract their boundaries accurately. In this study, we develop a novel combined framework that integrates UNet++ with a marker-controlled watershed segmentation strategy to facilitate aquaculture boundary extraction from fully polarimetric GaoFen-3 SAR imagery. First, four polarimetric decomposition algorithms were applied to extract 13 polarimetric scattering features. Together with nine other polarisation and texture features, a total of 22 polarimetric features were extracted, among which four were selected according to the separability index. Subsequently, to reduce the “adhesion” phenomenon and separate adjacent and even adhering ponds into individual aquaculture units, two UNet++ subnetworks were used to construct the marker and foreground functions, the results of which were then fed into the marker-controlled watershed algorithm to obtain refined aquaculture results. A multiclass segmentation strategy that divides the intermediate markers into three categories (aquaculture, background and dikes) was applied to the marker function. In addition, a boundary patch refinement post-processing strategy was applied to the two subnetworks to extract and repair the complex, error-prone boundaries of the aquaculture ponds, followed by a morphological operation for label augmentation. An experimental investigation of individual aquaculture extraction in the Yancheng Coastal Wetlands indicated that the crucial features for aquacultures are Shannon entropy (SE), the intensity component of SE (SE_I) and the corresponding mean texture features (Mean_SE and Mean_SE_I). When the optimal features were introduced, our proposed method outperformed standard UNet++ in aquaculture extraction, achieving improvements of 1.8%, 3.2%, 21.7% and 12.1% in F1, IoU, MR and insF1, respectively. The experimental results indicate that the proposed method can effectively handle the adhesion of adjacent objects and unclear boundaries, capturing clear and refined aquaculture boundaries.
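
    The marker-controlled watershed step itself is straightforward once the two subnetworks have produced their maps. Below is a sketch assuming hypothetical marker_prob and fg_prob probability outputs and illustrative thresholds; the paper's multiclass marker strategy and boundary patch refinement are not reproduced here.

```python
# Hedged sketch: split adhering ponds with a marker-controlled watershed.
# marker_prob: per-pixel probability of confident pond interior (marker net).
# fg_prob:     per-pixel probability of pond foreground (foreground net).
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_adhering_ponds(marker_prob, fg_prob,
                         marker_thresh=0.9, fg_thresh=0.5):
    markers, _ = ndi.label(marker_prob > marker_thresh)  # one seed per pond interior
    foreground = fg_prob > fg_thresh                     # pond-vs-background mask
    # Flood the inverted foreground probability from the seeds; watershed
    # lines fall on the low-confidence dike pixels between adjacent ponds.
    labels = watershed(-fg_prob, markers=markers, mask=foreground)
    return labels                                        # one integer id per pond
```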

    Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges

    Deep learning, a dominant technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has entered a new era. We present a comprehensive review of four important aspects of SIE: algorithms, datasets, applications, and future trends. Our review covers research published from 2016 to the present, with a specific focus on deep learning-based approaches in the last five years. We divide the related algorithms into three categories: classical image segmentation approaches, machine learning-based approaches, and deep learning-based methods. We review the accessible sea ice datasets, including SAR-based datasets, optical-based datasets and others. The applications are presented in four aspects: climate research, navigation, geographic information systems (GIS) production and others. The review also provides insightful observations and inspiring future research directions.
    Comment: 24 pages, 6 figures

    Change Detection Techniques with Synthetic Aperture Radar Images: Experiments with Random Forests and Sentinel-1 Observations

    This work aims to clarify the potential of incoherent and coherent change detection (CD) approaches for detecting and monitoring ground surface changes using sequences of synthetic aperture radar (SAR) images. Nowadays, the growing availability of remotely sensed data collected by the twin Sentinel-1A/B sensors of the European Union (EU) Copernicus constellation allows fast mapping of damage after a disastrous event using radar data. In this research, we address the role of SAR (amplitude) backscattered signal variations for CD analyses when a natural (e.g., a fire or a flash flood) or a human-induced disastrous event occurs. We then consider the additional information that can be recovered by comparing interferometric coherence maps derived from pairs of SAR images collected around the date of the main disastrous event. This work is mainly concerned with investigating the capability of different coherent/incoherent change detection indices (CDIs), and their mutual interactions, for the rapid mapping of "changed" areas. In this context, artificial intelligence (AI) algorithms have been demonstrated to be beneficial for handling the different information coming from coherent/incoherent CDIs in a unique corpus. Specifically, we use CDIs that synthetically describe ground surface changes associated with a disaster event (i.e., the pre-, cross-, and post-disaster phases), based on the generation of sigma-nought and InSAR coherence maps. We then train a random forest (RF) to produce CD maps and study the impact of the different layers representing the available CDIs on the final binary (changed/unchanged) decision. The proposed strategy was effective for quickly assessing damage using SAR data and can be applied in several contexts. Experiments were conducted to monitor the effects of wildfires during the 2021 summer season in Italy, considering two case studies in Sardinia and Sicily. A further experiment was carried out on the coastal city of Houston, Texas, USA, which was affected by a large flood in 2017, demonstrating the validity of the proposed integrated method for fast mapping of flooded zones using SAR data.
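
    The RF step reduces to stacking the per-pixel CDI layers as features and fitting a standard classifier. A sketch follows, assuming the CDI stack (e.g., a sigma-nought log-ratio plus pre-/cross-event coherence layers) and a binary reference map are already computed; the split ratio and forest size are illustrative.

```python
# Hedged sketch: train a random forest on stacked coherent/incoherent CDIs
# and inspect which layer drives the changed/unchanged decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_cd_forest(cdi_stack, labels):
    """cdi_stack: (H, W, F) stack of CDI layers;
    labels: (H, W) binary changed/unchanged reference map."""
    X = cdi_stack.reshape(-1, cdi_stack.shape[-1])   # one sample per pixel
    y = labels.ravel()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    rf.fit(X_tr, y_tr)
    print("held-out accuracy:", rf.score(X_te, y_te))
    # Per-layer importances quantify each CDI's impact on the binary decision.
    print("feature importances:", rf.feature_importances_)
    return rf
```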

    Semi-supervised Convolutional Neural Networks for Flood Mapping using Multi-modal Remote Sensing Data

    When floods hit populated areas, quick detection of the flooded areas is crucial for the initial response by local governments, residents, and volunteers. Space-borne polarimetric synthetic aperture radar (PolSAR) is an authoritative data source for flood mapping, since it can be acquired immediately after a disaster, even at night or in cloudy weather. Conventionally, a great deal of domain-specific heuristic knowledge has been applied to PolSAR flood mapping, but its performance still suffers from confusing pixels caused by irregular reflections of radar waves. Optical images are another data source that can be used to detect flooded areas, owing to their high spectral correlation with the open water surface; however, they are strongly affected by acquisition time and severe weather conditions (i.e., cloud cover). This paper presents a convolutional neural network (CNN)-based multimodal approach that exploits the advantages of both PolSAR and optical images for flood mapping. First, reference training data are retrieved from the optical images by manual annotation. Since clouds may appear in the optical image, only areas with a clear flooded or non-flooded view are annotated. Then, a semi-supervised, polarimetric-features-aided CNN is used for flood mapping with the PolSAR data. The proposed model can not only handle learning with incomplete ground truth but also leverage the large portion of unlabelled pixels for learning. Moreover, our model takes advantage of expert knowledge on scattering interpretation by incorporating polarimetric features as input. Experimental results are given for the flood event that occurred in Sendai, Japan, on 12 March 2011. The experiments show that our framework can map flooded areas with high accuracy (F1 = 96.12) and outperforms conventional flood mapping methods.
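
    The "incomplete ground truth" idea can be sketched as a masked loss: cloud-obscured pixels carry an ignore label and contribute no supervised gradient, while an entropy penalty on those same pixels is one simple way (assumed here, not necessarily the paper's mechanism) to exploit the unlabelled majority.

```python
# Hedged sketch: semi-supervised segmentation loss with partial labels.
# IGNORE marks pixels with no clear flooded/non-flooded view (e.g., cloud).
import torch
import torch.nn.functional as F

IGNORE = 255  # illustrative label value for unlabelled pixels

def semisup_loss(logits, target, unlabeled_weight=0.1):
    """logits: (N, 2, H, W) flood/non-flood scores;
    target: (N, H, W) long tensor with values in {0, 1, IGNORE}."""
    # Supervised term: ignored pixels contribute no gradient.
    sup = F.cross_entropy(logits, target, ignore_index=IGNORE)
    # Unsupervised term: push predictions on unlabelled pixels to be confident.
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # per-pixel entropy
    mask = target == IGNORE
    unl = entropy[mask].mean() if mask.any() else logits.new_zeros(())
    return sup + unlabeled_weight * unl
```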