Multi-Modal Deep Learning for Multi-Temporal Urban Mapping With a Partly Missing Optical Modality
This paper proposes a novel multi-temporal urban mapping approach using
multi-modal satellite data from the Sentinel-1 Synthetic Aperture Radar (SAR)
and Sentinel-2 MultiSpectral Instrument (MSI) missions. In particular, it
focuses on the problem of a partly missing optical modality due to clouds. The
proposed model utilizes two networks to extract features from each modality
separately. In addition, a reconstruction network is utilized to approximate
the optical features based on the SAR data in case of a missing optical
modality. Our experiments on a multi-temporal urban mapping dataset with
Sentinel-1 SAR and Sentinel-2 MSI data demonstrate that the proposed method
outperforms a multi-modal approach that uses zero values as a replacement for
missing optical data, as well as a uni-modal SAR-based approach. Therefore, the
proposed method is effective in exploiting multi-modal data, if available, but
it also retains its effectiveness in case the optical modality is missing.
Comment: 4 pages, 2 figures, accepted for publication in the IGARSS 2023 Proceedings
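The fallback described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual architecture: `fuse_features` and the linear-map "reconstruction network" are hypothetical names, assuming features are simple arrays and fusion is concatenation.

```python
import numpy as np

def fuse_features(sar_feat, opt_feat, reconstruct):
    """If the optical features are missing (e.g. clouds), approximate
    them from the SAR features with a reconstruction function, then
    fuse both modalities by concatenation. Illustrative only."""
    if opt_feat is None:
        opt_feat = reconstruct(sar_feat)   # SAR-to-optical feature approximation
    return np.concatenate([sar_feat, opt_feat], axis=-1)

# Toy "reconstruction network": a fixed linear map stands in for the
# learned network in this sketch.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
reconstruct = lambda f: f @ W

sar = rng.standard_normal((4, 8))
opt = rng.standard_normal((4, 8))

fused_full = fuse_features(sar, opt, reconstruct)   # both modalities present
fused_miss = fuse_features(sar, None, reconstruct)  # optical modality missing
```

Either way the downstream network receives a fused feature tensor of the same shape, which is what lets the model retain effectiveness when the optical modality drops out.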
A CNN regression model to estimate buildings height maps using Sentinel-1 SAR and Sentinel-2 MSI time series
Accurate estimation of building heights is essential for urban planning,
infrastructure management, and environmental analysis. In this study, we
propose a supervised Multimodal Building Height Regression Network (MBHR-Net)
for estimating building heights at 10m spatial resolution using Sentinel-1 (S1)
and Sentinel-2 (S2) satellite time series. S1 provides Synthetic Aperture Radar
(SAR) data that offers valuable information on building structures, while S2
provides multispectral data that is sensitive to different land cover types,
vegetation phenology, and building shadows. Our MBHR-Net aims to extract
meaningful features from the S1 and S2 images to learn complex spatio-temporal
relationships between image patterns and building heights. The model is trained
and tested in 10 cities in the Netherlands. Root Mean Squared Error (RMSE),
Intersection over Union (IoU), and R-squared (R2) score metrics are used to
evaluate the performance of the model. The preliminary results (3.73 m RMSE,
0.95 IoU, 0.61 R2) demonstrate the effectiveness of our deep learning model in
accurately estimating building heights, showcasing its potential for urban
planning, environmental impact analysis, and other related applications.
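The three evaluation metrics named above have standard definitions; a minimal sketch follows (the function names are mine, and `iou` here is the pixel-wise set overlap between binarized building masks, which is one common reading of the abstract's IoU score):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error between height maps."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred):
    """R-squared: 1 minus residual sum of squares over total sum of squares."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0
```

RMSE penalizes large height errors quadratically, R2 reports variance explained, and IoU captures how well the predicted building footprint overlaps the reference.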
Attentive Dual Stream Siamese U-net for Flood Detection on Multi-temporal Sentinel-1 Data
Due to climate and land-use change, natural disasters such as flooding have
been increasing in recent years. Timely and reliable flood detection and
mapping can help emergency response and disaster management. In this work, we
propose a flood detection network using bi-temporal SAR acquisitions. The
proposed segmentation network has an encoder-decoder architecture with two
Siamese encoders for pre and post-flood images. The network's feature maps are
fused and enhanced using attention blocks to achieve more accurate detection of
the flooded areas. Our proposed network is evaluated on the publicly available
Sen1Floods11 benchmark dataset. The network outperformed the existing
state-of-the-art (uni-temporal) flood detection method by 6% IoU. The
experiments highlight that the combination of bi-temporal SAR data with an
effective network architecture achieves more accurate flood detection than
uni-temporal methods.
Comment: Accepted in IGARSS202
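The attention-gated fusion of the two Siamese encoder outputs can be sketched as follows. This is a hypothetical simplification, assuming channels-last feature arrays of identical shape and a change-sensitive sigmoid gate; the paper's actual attention blocks are learned layers, not this fixed rule:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attentive_fuse(pre_feat, post_feat):
    """Gate each feature by how much it changed between the pre- and
    post-flood acquisitions, then blend the two encoder outputs.
    A stand-in for the learned attention blocks, for illustration."""
    gate = sigmoid(post_feat - pre_feat)           # emphasise changed areas
    return gate * post_feat + (1.0 - gate) * pre_feat
```

Where the two acquisitions agree the gate sits near 0.5 and the fusion is a plain average; where the post-flood response dominates, the fused features lean toward the post-flood encoder, which is the intuition behind attending to change.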
Investigating Imbalances Between SAR and Optical Utilization for Multi-Modal Urban Mapping
Accurate urban maps provide essential information to support sustainable
urban development. Recent urban mapping methods use multi-modal deep neural
networks to fuse Synthetic Aperture Radar (SAR) and optical data. However,
multi-modal networks may rely on just one modality due to the greedy nature of
learning. In turn, the imbalanced utilization of modalities can negatively
affect the generalization ability of a network. In this paper, we investigate
the utilization of SAR and optical data for urban mapping. To that end, a
dual-branch network architecture using intermediate fusion modules to share
information between the uni-modal branches is utilized. A cut-off mechanism in
the fusion modules enables the stopping of information flow between the
branches, which is used to estimate the network's dependence on SAR and optical
data. While our experiments on the SEN12 Global Urban Mapping dataset show that
good performance can be achieved with conventional SAR-optical data fusion (F1
score = 0.682 ± 0.014), we also observed a clear under-utilization of
optical data. Therefore, future work is required to investigate whether a more
balanced utilization of SAR and optical data can lead to performance
improvements.
Comment: 4 pages, 3 figures, accepted for publication in the JURSE 2023 Proceedings
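The cut-off mechanism can be illustrated with a toy intermediate-fusion step. This sketch is an assumption about the general idea, not the paper's implementation: `fusion_module` and its averaging of branch features are invented for illustration; the key point is that zeroing one branch's contribution stops information flow from that modality, so comparing performance with and without the cut-off probes the network's dependence on it.

```python
import numpy as np

def fusion_module(sar_feat, opt_feat, cut_sar=False, cut_opt=False):
    """Exchange information between the uni-modal branches; a cut-off
    flag zeroes one modality's contribution to the shared signal."""
    sar_in = np.zeros_like(sar_feat) if cut_sar else sar_feat
    opt_in = np.zeros_like(opt_feat) if cut_opt else opt_feat
    shared = 0.5 * (sar_in + opt_in)       # information shared across branches
    return sar_feat + shared, opt_feat + shared
```

If cutting off the optical branch barely changes the output, the network was hardly using optical data, which is exactly the under-utilization the experiments measure.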
Deep attentive fusion network for flood detection on uni-temporal Sentinel-1 data
Floods are occurring across the globe, and due to climate change, flood events are expected to increase in the coming years. This situation urges more focus on efficient flood monitoring and on detecting impacted areas. In this study, we propose two segmentation networks for flood detection on uni-temporal Sentinel-1 Synthetic Aperture Radar data. The first network is “Attentive U-Net”. It takes VV, VH, and the ratio VV/VH as input. The network uses spatial and channel-wise attention to enhance the feature maps, which helps in learning better segmentations. “Attentive U-Net” yields 67% Intersection over Union (IoU) on the Sen1Floods11 dataset, which is 3% better than the benchmark IoU. The second proposed network is a dual-stream “Fusion network”, where we fuse global low-resolution elevation data and permanent water masks with Sentinel-1 (VV, VH) data. Compared to the previous benchmark on the Sen1Floods11 dataset, our fusion network gave a 4.5% better IoU score. Quantitatively, the performance improvement of both proposed methods is considerable, and the comparison with the benchmark method demonstrates the potential of our flood detection networks. The results are further validated by qualitative analysis, in which we demonstrate that the addition of low-resolution elevation data and a permanent water mask enhances the flood detection results. Through ablation experiments and analysis, we also demonstrate the effectiveness of various design choices in the proposed networks. Our code is available on GitHub at https://github.com/RituYadav92/UNI_TEMP_FLOOD_DETECTION for reuse.
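Preparing the three-band input named for "Attentive U-Net" (VV, VH, and their ratio) is straightforward; a minimal sketch, with `make_input` and the `eps` guard being my own illustrative choices rather than the released code:

```python
import numpy as np

def make_input(vv, vh, eps=1e-6):
    """Stack the VV and VH backscatter bands with their ratio VV/VH
    into a channels-last input array. The small eps guards against
    division by zero; the exact preprocessing in the repository may differ."""
    ratio = vv / (vh + eps)
    return np.stack([vv, vh, ratio], axis=-1)
```

The ratio band is a cheap derived feature that often helps separate open water (low VV/VH contrast behaves differently) from land in SAR scenes, which is why it is fed alongside the raw polarizations.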