Semi-supervised Convolutional Neural Networks for Flood Mapping using Multi-modal Remote Sensing Data

Abstract

When floods hit populated areas, quick detection of flooded areas is crucial for the initial response by local governments, residents, and volunteers. Space-borne polarimetric synthetic aperture radar (PolSAR) is an authoritative data source for flood mapping since it can be acquired immediately after a disaster, even at night or in cloudy weather. Conventionally, a great deal of domain-specific heuristic knowledge has been applied to PolSAR flood mapping, but its performance still suffers from confusing pixels caused by irregular reflections of radar waves. Optical images are another data source that can be used to detect flooded areas due to their high spectral correlation with open water surfaces. However, they are often affected by the time of day and severe weather conditions (e.g., cloud cover). This paper presents a convolutional neural network (CNN) based multimodal approach that exploits the advantages of both PolSAR and optical images for flood mapping. First, reference training data are retrieved from optical images by manual annotation. Since clouds may appear in the optical image, only areas with a clear view of flooded or non-flooded surfaces are annotated. Then, a semi-supervised, polarimetric-features-aided CNN is used for flood mapping from PolSAR data. The proposed model can not only handle learning with incomplete ground truth but also leverage the large portion of unlabelled pixels for learning. Moreover, our model takes advantage of expert knowledge on scattering interpretation by incorporating polarimetric features as input. Experimental results are given for the flood event that occurred in Sendai, Japan, on 12 March 2011. The experiments show that our framework can map flooded areas with high accuracy (F1 = 96.12) and outperforms conventional flood mapping methods.
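The abstract does not spell out the exact semi-supervised objective, so the following is only a minimal sketch of one common way to learn from incomplete ground truth of this kind: a supervised cross-entropy term masked to annotated pixels, plus an entropy-minimization term on the unlabelled (e.g., cloud-covered) pixels. The sentinel `UNLABELED` and the weight `unlabeled_weight` are hypothetical names introduced here for illustration, not from the paper.

```python
import torch
import torch.nn.functional as F

UNLABELED = -1  # hypothetical sentinel for pixels without annotation (e.g., under cloud)

def semi_supervised_loss(logits, labels, unlabeled_weight=0.1):
    """Sketch of a semi-supervised pixel-classification loss.

    logits: (B, C, H, W) raw class scores from the CNN
    labels: (B, H, W) long tensor; UNLABELED marks pixels lacking ground truth
    """
    # Supervised term: per-pixel cross-entropy, averaged over labelled pixels only.
    per_pixel = F.cross_entropy(logits, labels.clamp(min=0), reduction="none")
    labeled_mask = (labels != UNLABELED).float()
    sup = (per_pixel * labeled_mask).sum() / labeled_mask.sum().clamp(min=1)

    # Unsupervised term: push predictions on unlabelled pixels toward
    # confident (low-entropy) class assignments.
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * F.log_softmax(logits, dim=1)).sum(dim=1)
    unlabeled_mask = 1.0 - labeled_mask
    unsup = (entropy * unlabeled_mask).sum() / unlabeled_mask.sum().clamp(min=1)

    return sup + unlabeled_weight * unsup
```

Under these assumptions, the network still receives a gradient signal from every pixel in the PolSAR scene, which is how a large portion of unlabelled pixels can contribute to learning.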
