Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery
This work has been accepted by IEEE TGRS for publication. The majority of
optical observations acquired via spaceborne earth imagery are affected by
clouds. While there is much prior work on reconstructing cloud-covered
information, previous studies are oftentimes confined to narrowly defined
regions of interest, raising the question of whether an approach can generalize
to a diverse set of observations acquired at variable cloud coverage or in
different regions and seasons. We target the challenge of generalization by
curating a large novel data set for training new cloud removal approaches and
evaluate on two recently proposed performance metrics of image quality and
diversity. Our data set is the first publicly available to contain a global
sample of co-registered radar and optical observations, cloudy as well as
cloud-free. Based on the observation that cloud coverage varies widely between
clear skies and absolute coverage, we propose a novel model that can deal with
either extreme and evaluate its performance on our proposed data set. Finally,
we demonstrate the superiority of training models on real over synthetic data,
underlining the need for a carefully curated data set of real observations. To
facilitate future research, our data set is made available online.
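The spread from clear skies to full overcast that motivates the model above can be quantified with a simple per-scene coverage statistic. The helper below is a hypothetical illustration (not the paper's code), assuming a per-pixel cloud-probability map such as the one provided with Sentinel-2 Level-2A products:

```python
import numpy as np

def cloud_coverage(cloud_prob, threshold=0.5):
    """Return the fraction of pixels whose cloud probability meets or
    exceeds `threshold` (0.0 = clear scene, 1.0 = fully overcast)."""
    return float((np.asarray(cloud_prob) >= threshold).mean())

# Toy 4x4 probability map: the top half is confidently cloudy.
prob = np.zeros((4, 4))
prob[:2, :] = 0.9
print(cloud_coverage(prob))  # 0.5
```

Sampling training scenes across the full range of this statistic is one way to expose a model to both extremes.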
Multi-temporal Sentinel-1 and -2 Data Fusion for Optical Image Simulation
In this paper, we present optical image simulation from synthetic aperture
radar (SAR) data using deep learning based methods. Two models, i.e., optical
image simulation directly from the SAR data and from multi-temporal
SAR-optical data, are proposed to test the possibilities. The deep learning
based methods that we chose to achieve the models are a convolutional neural
network (CNN) with a residual architecture and a conditional generative
adversarial network (cGAN). We validate our models using the Sentinel-1 and -2
datasets. The experiments demonstrate that the model with multi-temporal
SAR-optical data can successfully simulate the optical image, whereas the
model with SAR data alone as input fails. The optical image simulation
results indicate the possibility of SAR-optical information blending for
subsequent applications such as large-scale cloud removal and optical data
temporal super-resolution. We also investigate the sensitivity of the proposed
models to the training samples, and reveal possible future directions.
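The cGAN mentioned above is trained with the usual adversarial objective. As a minimal sketch, assuming hypothetical discriminator outputs (probabilities that an optical image is real given the SAR input), the two loss terms can be written in plain NumPy:

```python
import numpy as np

def cgan_losses(d_real, d_fake, eps=1e-8):
    """Standard GAN losses computed from discriminator outputs.

    d_real: discriminator probabilities on real optical images,
    d_fake: discriminator probabilities on optical images the
            generator simulated from SAR input.
    """
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))  # generator wants d_fake -> 1
    return d_loss, g_loss

# A discriminator that is fooled half the time:
d, g = cgan_losses(np.array([0.5]), np.array([0.5]))
print(round(float(d), 3), round(float(g), 3))  # 1.386 0.693
```

In practice this adversarial term is usually combined with a pixel-wise reconstruction loss against the target optical image.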
An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images
Due to the reduction of technological costs and the increase in satellite
launches, satellite images are becoming more popular and easier to obtain.
Besides serving benevolent purposes, satellite data can also be used for
malicious reasons such as misinformation. As a matter of fact, satellite images
can be easily manipulated relying on general image editing tools. Moreover,
with the surge of Deep Neural Networks (DNNs) that can generate realistic
synthetic imagery belonging to various domains, additional threats related to
the diffusion of synthetically generated satellite images are emerging. In this
paper, we review the State of the Art (SOTA) on the generation and manipulation
of satellite images. In particular, we focus on both the generation of
synthetic satellite imagery from scratch, and the semantic manipulation of
satellite images by means of image-transfer technologies, including the
transformation of images obtained from one type of sensor to another one. We
also describe forensic detection techniques that have been researched so far to
classify and detect synthetic image forgeries. While we focus mostly on
forensic techniques explicitly tailored to the detection of AI-generated
synthetic content, we also review some methods designed for general splicing
detection, which can in principle also be used to spot AI-manipulated images.
Comment: 25 pages, 17 figures, 5 tables, APSIPA 202
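One classical cue exploited by the general splicing detectors mentioned above is the local noise residual: a region pasted from another source often carries noise statistics that differ from the host image. The following is a minimal hypothetical sketch of such a residual, not a method from the survey:

```python
import numpy as np

def noise_residual(img):
    """High-pass residual: each pixel minus its 3x3 local mean.
    Spliced regions tend to show a residual variance that is
    inconsistent with the rest of the image."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    return img - local_mean

# A perfectly flat image has a zero residual everywhere.
print(np.allclose(noise_residual(np.ones((5, 5))), 0.0))  # True
```

A detector would then compare residual statistics between candidate regions rather than inspect the residual directly.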
The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion
While deep learning techniques have an increasing impact on many technical
fields, gathering sufficient amounts of training data is a challenging problem
in remote sensing. In particular, this holds for applications involving data
from multiple sensors with heterogeneous characteristics. One example of this
is the fusion of synthetic aperture radar (SAR) data and optical imagery. With
this paper, we publish the SEN1-2 dataset to foster deep learning research in
SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image
patches, collected from across the globe and throughout all meteorological
seasons. Besides a detailed description of the dataset, we show exemplary
results for several possible applications, such as SAR image colorization,
SAR-optical image matching, and creation of artificial optical images from SAR
input data. Since SEN1-2 is the first large open dataset of this kind, we
believe it will support further developments in the field of deep learning for
remote sensing as well as multi-sensor data fusion.
Comment: accepted for publication in the ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences (online from October 2018)
A Benchmarking Protocol for SAR Colorization: From Regression to Deep Learning Approaches
Synthetic aperture radar (SAR) images are widely used in remote sensing.
Interpreting SAR images can be challenging due to their intrinsic speckle noise
and grayscale nature. To address this issue, SAR colorization has emerged as a
research direction to colorize grayscale SAR images while preserving the
original spatial and radiometric information. However, this
research field is still in its early stages, and many limitations can be
highlighted. In this paper, we propose a full research line for supervised
learning-based approaches to SAR colorization. Our approach includes a protocol
for generating synthetic color SAR images, several baselines, and an effective
method based on the conditional generative adversarial network (cGAN) for SAR
colorization. We also propose numerical assessment metrics for the problem at
hand. To our knowledge, this is the first attempt to propose a research line
for SAR colorization that includes a protocol, a benchmark, and a complete
performance evaluation. Our extensive tests demonstrate the effectiveness of
our proposed cGAN-based network for SAR colorization. The code will be made
publicly available.
Comment: 16 pages, 16 figures, 6 tables
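Among the numerical assessment metrics one could apply in such a benchmark, peak signal-to-noise ratio (PSNR) against the reference color SAR image is a standard choice; the paper's exact metrics may differ, so the following is only an illustrative sketch:

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference color SAR
    image and a colorization result (higher is better)."""
    mse = np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2)
    if mse == 0.0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)

# An estimate that is uniformly off by 0.1 on a [0, 1] scale:
ref = np.zeros((8, 8, 3))
est = np.full((8, 8, 3), 0.1)
print(round(psnr(ref, est), 1))  # 20.0
```

Because PSNR ignores perceptual color fidelity, such a protocol would typically pair it with structural or color-specific measures.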
Cloud Removal in Sentinel-2 Imagery using a Deep Residual Neural Network and SAR-Optical Data Fusion
Optical remote sensing imagery is at the core of many Earth observation activities. The regular, consistent and global-scale nature of the satellite data is exploited in many applications, such as cropland monitoring, climate change assessment, land-cover and land-use classification, and disaster assessment. However, one main problem severely affects the temporal and spatial availability of surface observations, namely cloud cover. The task of removing clouds from optical images has been the subject of study for decades. The advent of the Big Data era in satellite remote sensing opens new possibilities for tackling the problem using powerful data-driven deep learning methods. In this paper, a deep residual neural network architecture is designed to remove clouds from multispectral Sentinel-2 imagery. SAR-optical data fusion is used to exploit the synergistic properties of the two imaging systems to guide the image reconstruction. Additionally, a novel cloud-adaptive loss is proposed to maximize the retention of original information. The network is trained and tested on a globally sampled dataset comprising real cloudy and cloud-free images. The proposed setup allows the removal of even optically thick clouds by reconstructing an optical representation of the underlying land surface structure.
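A cloud-adaptive loss of the kind described above can be sketched as a per-pixel reconstruction error whose weight depends on a cloud mask, so that information already visible in the cloud-free parts of the input is retained. The weighting scheme and values below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def cloud_adaptive_loss(pred, target, cloud_mask, w_clear=1.0, w_cloudy=0.5):
    """Weighted L1 reconstruction loss.

    cloud_mask: boolean array, True where the input pixel is cloudy.
    Clear pixels get a higher weight so the network learns to copy
    information that is already observable; the weight values here
    are made up for illustration.
    """
    err = np.abs(np.asarray(pred) - np.asarray(target))
    weights = np.where(cloud_mask, w_cloudy, w_clear)
    return float(np.mean(weights * err))

# Perfect reconstruction costs nothing regardless of the mask.
mask = np.zeros((4, 4), dtype=bool)
print(cloud_adaptive_loss(np.ones((4, 4)), np.ones((4, 4)), mask))  # 0.0
```

The mask itself would come from a cloud detector applied to the cloudy input image.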
Manipulation and generation of synthetic satellite images using deep learning models
Generation and manipulation of digital images based on deep learning (DL) are receiving increasing attention for both benign and malevolent uses. As the importance of satellite imagery is increasing, DL has started being used also for the generation of synthetic satellite images. However, the direct use of techniques developed for computer vision applications is not possible, due to the different nature of satellite images. The goal of our work is to describe a number of methods to generate manipulated and synthetic satellite images. To be specific, we focus on two different types of manipulations: full image modification and local splicing. In the former case, we rely on generative adversarial networks commonly used for style transfer applications, adapting them to implement two different kinds of transfer: (i) land cover transfer, aiming at modifying the image content from vegetation to barren and vice versa, and (ii) season transfer, aiming at modifying the image content from winter to summer and vice versa. With regard to local splicing, we present two different architectures. The first one uses an image generative pretrained transformer and is trained on pixel sequences in order to predict pixels in semantically consistent regions identified using watershed segmentation. The second technique uses a vision transformer operating on image patches rather than on a pixel-by-pixel basis. We use the trained vision transformer to generate synthetic image segments and splice them into a selected region of the to-be-manipulated image. All the proposed methods generate highly realistic synthetic satellite images. Among the possible applications of the proposed techniques, we mention the generation of proper datasets for the evaluation and training of tools for the analysis of satellite images. (c) The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License.
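The final local-splicing step, in which a generated segment replaces part of the host image, reduces to a masked paste. Below is a minimal hypothetical sketch; the actual pipelines splice transformer-generated content into watershed-derived regions rather than plain rectangles:

```python
import numpy as np

def splice(host, patch, top, left):
    """Overwrite a rectangular region of `host` with `patch`,
    e.g. a synthetic segment produced by a generative model.
    Returns a new image; the host is left untouched."""
    out = np.asarray(host).copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Paste a bright 2x2 synthetic segment into a dark 6x6 host image.
host = np.zeros((6, 6))
forged = splice(host, np.ones((2, 2)), 2, 2)
print(int(forged.sum()))  # 4
```

Real splicing pipelines additionally blend the pasted boundary to suppress the seam artifacts that forensic detectors look for.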