Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery
This work has been accepted by IEEE TGRS for publication. The majority of
optical observations acquired via spaceborne Earth imaging are affected by
clouds. While there is ample prior work on reconstructing cloud-covered
information, previous studies are often confined to narrowly defined
regions of interest, raising the question of whether an approach can generalize
to a diverse set of observations acquired at variable cloud coverage or in
different regions and seasons. We target the challenge of generalization by
curating a large novel data set for training new cloud removal approaches and
by evaluating on two recently proposed performance metrics of image quality and
diversity. Our data set is the first publicly available to contain a global
sample of co-registered radar and optical observations, cloudy as well as
cloud-free. Based on the observation that cloud coverage varies widely between
clear skies and complete coverage, we propose a novel model that can handle
either extreme, and we evaluate its performance on our proposed data set. Finally,
we demonstrate the superiority of training models on real over synthetic data,
underlining the need for a carefully curated data set of real observations. To
facilitate future research, our data set is made available online.
The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion
While deep learning techniques have an increasing impact on many technical
fields, gathering sufficient amounts of training data is a challenging problem
in remote sensing. In particular, this holds for applications involving data
from multiple sensors with heterogeneous characteristics. One example is
the fusion of synthetic aperture radar (SAR) data and optical imagery. With
this paper, we publish the SEN1-2 dataset to foster deep learning research in
SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image
patches, collected from across the globe and throughout all meteorological
seasons. Besides a detailed description of the dataset, we show exemplary
results for several possible applications, such as SAR image colorization,
SAR-optical image matching, and creation of artificial optical images from SAR
input data. Since SEN1-2 is the first large open dataset of this kind, we
believe it will support further developments in the field of deep learning for
remote sensing as well as multi-sensor data fusion.
Comment: accepted for publication in the ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences (online from October 2018)
Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images
With the increasing availability of optical and synthetic aperture radar
(SAR) images thanks to the Sentinel constellation, and the explosion of deep
learning, new methods have emerged in recent years to tackle the reconstruction
of optical images that are impacted by clouds. In this paper, we focus on the
evaluation of convolutional neural networks that use jointly SAR and optical
images to retrieve the missing contents in one single polluted optical image.
We propose a simple framework that eases the creation of datasets for the
training of deep nets targeting optical image reconstruction, and for the
validation of machine-learning-based or deterministic approaches. These methods
differ considerably in their input image constraints, and comparing them is
a task not yet addressed in the literature. We show how space
partitioning data structures help to query samples in terms of cloud coverage,
relative acquisition date, pixel validity and relative proximity between SAR
and optical images. We generate several datasets to compare the reconstructed
images from networks that use a single pair of SAR and optical images, versus
networks that use multiple pairs, and a traditional deterministic approach
performing interpolation in the temporal domain.
Comment: 17 pages
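The sample-selection idea described above (querying candidate SAR/optical patch pairs by cloud coverage, pixel validity, and acquisition-date proximity) can be sketched as a simple predicate-based filter. The field names and thresholds below are illustrative assumptions, not the paper's actual framework:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Sample:
    """One candidate SAR/optical patch pair (fields are illustrative)."""
    cloud_cover: float      # fraction of cloudy pixels in the optical patch
    valid_fraction: float   # fraction of valid (non-nodata) pixels
    sar_date: date          # SAR acquisition date
    opt_date: date          # optical acquisition date

def query(samples, max_cloud=0.5, min_valid=0.95, max_gap_days=3):
    """Keep pairs whose cloud coverage, validity, and SAR/optical
    acquisition gap satisfy the dataset constraints."""
    return [
        s for s in samples
        if s.cloud_cover <= max_cloud
        and s.valid_fraction >= min_valid
        and abs((s.sar_date - s.opt_date).days) <= max_gap_days
    ]

pairs = [
    Sample(0.10, 0.99, date(2020, 6, 1), date(2020, 6, 2)),  # kept
    Sample(0.80, 0.99, date(2020, 6, 1), date(2020, 6, 2)),  # too cloudy
    Sample(0.10, 0.99, date(2020, 6, 1), date(2020, 6, 9)),  # gap too large
]
print(len(query(pairs)))  # → 1
```

In a real implementation these predicates would be answered by a space-partitioning index rather than a linear scan, but the selection logic is the same.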
Learning a Joint Embedding of Multiple Satellite Sensors: A Case Study for Lake Ice Monitoring
Fusing satellite imagery acquired with different sensors has been a
long-standing challenge of Earth observation, particularly across different
modalities such as optical and Synthetic Aperture Radar (SAR) images. Here, we
explore the joint analysis of imagery from different sensors in the light of
representation learning: we propose to learn a joint embedding of multiple
satellite sensors within a deep neural network. Our application problem is the
monitoring of lake ice on Alpine lakes. To reach the temporal resolution
requirement of the Swiss Global Climate Observing System (GCOS) office, we
combine three image sources: Sentinel-1 SAR (S1-SAR), Terra MODIS, and
Suomi-NPP VIIRS. The large gaps between the optical and SAR domains and between
the sensor resolutions make this a challenging instance of the sensor fusion
problem. Our approach can be classified as a late fusion that is learned in a
data-driven manner. The proposed network architecture has separate encoding
branches for each image sensor, which feed into a single latent embedding,
i.e., a common feature representation shared by all inputs, such that
subsequent processing steps deliver comparable output irrespective of which
sort of input image was used. By fusing satellite data, we map lake ice at a
temporal resolution of < 1.5 days. The network produces spatially explicit lake
ice maps with pixel-wise accuracies > 91% (respectively, mIoU scores > 60%) and
generalises well across different lakes and winters. Moreover, it sets a new
state-of-the-art for determining the important ice-on and ice-off dates for the
target lakes, in many cases meeting the GCOS requirement.
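The late-fusion design described above (a separate encoder branch per sensor, all mapping into one shared latent space) can be sketched with toy linear encoders. The input dimensions and random weights are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 16  # size of the shared latent embedding

# One toy linear "encoder" per sensor; each has its own input size
# (different channel counts / patch resolutions after flattening).
branches = {
    "s1_sar": rng.normal(size=(2 * 8 * 8, embed_dim)),  # 2-channel SAR patch
    "modis":  rng.normal(size=(7 * 4 * 4, embed_dim)),  # 7-band MODIS patch
    "viirs":  rng.normal(size=(5 * 4 * 4, embed_dim)),  # 5-band VIIRS patch
}

def encode(sensor, patch):
    """Map a flattened patch from any sensor into the shared embedding,
    so downstream classification is sensor-agnostic."""
    z = patch.reshape(-1) @ branches[sensor]
    return z / np.linalg.norm(z)  # unit-normalise the shared representation

z_sar = encode("s1_sar", rng.normal(size=(2, 8, 8)))
z_mod = encode("modis", rng.normal(size=(7, 4, 4)))
print(z_sar.shape == z_mod.shape)  # both live in the same 16-d space
```

The point of the shared space is that a single downstream classifier can consume `z` regardless of which sensor produced the input, which is what makes the sub-1.5-day temporal resolution possible.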
Multi-temporal Sentinel-1 and -2 Data Fusion for Optical Image Simulation
In this paper, we present optical image simulation from synthetic
aperture radar (SAR) data using deep learning based methods. Two models, i.e.,
optical image simulation directly from SAR data and from multi-temporal
SAR-optical data, are proposed to test the possibilities. The deep learning
based methods that we chose to achieve the models are a convolutional neural
network (CNN) with a residual architecture and a conditional generative
adversarial network (cGAN). We validate our models using the Sentinel-1 and -2
datasets. The experiments demonstrate that the model with multi-temporal
SAR-optical data can successfully simulate the optical image, whereas the
model with SAR data alone as input fails. The optical image simulation
results indicate the possibility of SAR-optical information blending for
subsequent applications such as large-scale cloud removal and optical data
temporal super-resolution. We also investigate the sensitivity of the proposed
models against the training samples, and reveal possible future directions.
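The residual architecture mentioned above rests on a simple identity-skip idea: each block adds a learned correction to its input rather than replacing it, which makes deep stacks easier to train. A minimal sketch with a toy 1-D convolution and illustrative weights, not the paper's actual network:

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1-D convolution (toy stand-in for a conv layer)."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(w)] @ w for i in range(len(x))])

def residual_block(x, w1, w2):
    """y = x + F(x): the skip connection lets the block learn a
    correction to its input instead of a full re-mapping."""
    h = np.maximum(conv1d(x, w1), 0.0)  # conv + ReLU
    return x + conv1d(h, w2)

x = np.linspace(-1.0, 1.0, 8)
w1 = np.array([0.0, 1.0, 0.0])          # identity kernel
y = residual_block(x, w1, np.zeros(3))  # zero second conv
print(np.allclose(y, x))  # block reduces to the identity mapping
```

With the second convolution initialised near zero, the block starts out as an identity, so each layer only has to learn the residual detail it adds on top of its input.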
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing the DL.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing