43 research outputs found
An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images
Due to decreasing technological costs and the increasing number of satellite
launches, satellite images are becoming more popular and easier to obtain.
Besides serving benevolent purposes, satellite data can also be used for
malicious reasons such as misinformation. Indeed, satellite images can be
easily manipulated using general-purpose image editing tools. Moreover,
with the surge of Deep Neural Networks (DNNs) that can generate realistic
synthetic imagery belonging to various domains, additional threats related to
the diffusion of synthetically generated satellite images are emerging. In this
paper, we review the State of the Art (SOTA) on the generation and manipulation
of satellite images. In particular, we focus on both the generation of
synthetic satellite imagery from scratch, and the semantic manipulation of
satellite images by means of image-transfer technologies, including the
transformation of images obtained from one type of sensor to another. We
also describe forensic detection techniques that have been researched so far to
classify and detect synthetic image forgeries. While we focus mostly on
forensic techniques explicitly tailored to the detection of AI-generated
synthetic contents, we also review some methods designed for general splicing
detection, which can in principle also be used to spot AI-manipulated images. Comment: 25 pages, 17 figures, 5 tables, APSIPA 202
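Many of the forensic detectors surveyed start from low-level noise statistics rather than scene content. As a minimal illustrative sketch (not any specific method from the survey), a high-pass residual can be extracted to suppress image content and expose the sensor or generator "fingerprint" that a learned classifier would then analyse:

```python
import numpy as np

def noise_residual(img):
    """High-pass residual of a grayscale image.

    Suppressing scene content with a Laplacian-style filter exposes
    low-level noise patterns, a common first step in forensic detection
    of spliced or AI-generated imagery (illustrative sketch only).
    """
    k = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * k)
    return out

def residual_energy(img):
    # A single scalar feature; real detectors feed much richer residual
    # statistics into a trained classifier.
    return float(np.mean(noise_residual(img) ** 2))
```

A perfectly flat image has zero residual energy, while any high-frequency pattern (natural sensor noise or a GAN artefact) raises it; detectors exploit the fact that these statistics differ between cameras and generators.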
Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery
This work has been accepted by IEEE TGRS for publication. The majority of
optical observations acquired via spaceborne earth imagery are affected by
clouds. While there is numerous prior work on reconstructing cloud-covered
information, previous studies are oftentimes confined to narrowly-defined
regions of interest, raising the question of whether an approach can generalize
to a diverse set of observations acquired at variable cloud coverage or in
different regions and seasons. We target the challenge of generalization by
curating a large novel data set for training new cloud removal approaches and
evaluate on two recently proposed performance metrics of image quality and
diversity. Our data set is the first publicly available to contain a global
sample of co-registered radar and optical observations, cloudy as well as
cloud-free. Based on the observation that cloud coverage varies widely between
clear skies and absolute coverage, we propose a novel model that can deal with
either extreme and evaluate its performance on our proposed data set. Finally,
we demonstrate the superiority of training models on real over synthetic data,
underlining the need for a carefully curated data set of real observations. To
facilitate future research, our data set is made available online.
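Co-registered radar observations are useful here because SAR penetrates clouds, so the radar channels carry information about exactly the pixels the optical sensor loses. A hedged sketch of the usual input-preparation step (channel counts are illustrative assumptions, not necessarily the paper's exact configuration):

```python
import numpy as np

def fuse_sar_optical(sar, optical):
    """Stack co-registered SAR and optical patches along the channel axis.

    sar:     (2, H, W)  e.g. Sentinel-1 VV/VH backscatter
    optical: (13, H, W) e.g. Sentinel-2 spectral bands (possibly cloudy)
    Returns a (15, H, W) array that a cloud-removal network can consume.
    Channel counts are assumptions for illustration.
    """
    if sar.shape[1:] != optical.shape[1:]:
        raise ValueError("patches must be co-registered to the same grid")
    return np.concatenate([sar, optical], axis=0)
```

Because the fusion happens at the input, the network itself can remain a standard image-to-image architecture; the co-registration requirement is what makes a data set like this one valuable.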
Cloud Removal in Sentinel-2 Imagery using a Deep Residual Neural Network and SAR-Optical Data Fusion
Optical remote sensing imagery is at the core of many Earth observation activities. The regular, consistent and global-scale nature of satellite data is exploited in many applications, such as cropland monitoring, climate change assessment, land-cover and land-use classification, and disaster assessment. However, one main problem severely affects the temporal and spatial availability of surface observations, namely cloud cover. The task of removing clouds from optical images has been the subject of study for decades. The advent of the Big Data era in satellite remote sensing opens new possibilities for tackling the problem using powerful data-driven deep learning methods. In this paper, a deep residual neural network architecture is designed to remove clouds from multispectral Sentinel-2 imagery. SAR-optical data fusion is used to exploit the synergistic properties of the two imaging systems to guide the image reconstruction. Additionally, a novel cloud-adaptive loss is proposed to maximize the retention of original information. The network is trained and tested on a globally sampled dataset comprising real cloudy and cloud-free images. The proposed setup allows the removal of even optically thick clouds by reconstructing an optical representation of the underlying land surface structure.
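The idea behind a cloud-adaptive loss is to treat cloudy and cloud-free pixels differently: reconstruct under the clouds, but preserve the original observation where the view is clear. The following numpy sketch is an illustrative guess at that structure, not the paper's exact formulation:

```python
import numpy as np

def cloud_adaptive_l1(pred, cloudy_input, clear_target, cloud_mask):
    """Toy cloud-adaptive L1 loss (illustrative, not the paper's exact loss).

    cloud_mask == 1 marks cloud-covered pixels. Cloudy pixels are pulled
    toward the cloud-free target image; clear pixels are pulled toward the
    original observation, maximising retention of untouched information.
    """
    cloudy_term = np.abs(pred - clear_target) * cloud_mask
    clear_term = np.abs(pred - cloudy_input) * (1.0 - cloud_mask)
    return float((cloudy_term + clear_term).mean())
```

A prediction that copies the input in clear regions and the target under the mask incurs zero loss, which is exactly the behaviour the loss is meant to encourage.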
The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion
While deep learning techniques have an increasing impact on many technical
fields, gathering sufficient amounts of training data is a challenging problem
in remote sensing. In particular, this holds for applications involving data
from multiple sensors with heterogeneous characteristics. One example of this
is the fusion of synthetic aperture radar (SAR) data and optical imagery. With
this paper, we publish the SEN1-2 dataset to foster deep learning research in
SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image
patches, collected from across the globe and throughout all meteorological
seasons. Besides a detailed description of the dataset, we show exemplary
results for several possible applications, such as SAR image colorization,
SAR-optical image matching, and creation of artificial optical images from SAR
input data. Since SEN1-2 is the first large open dataset of this kind, we
believe it will support further developments in the field of deep learning for
remote sensing as well as multi-sensor data fusion.Comment: accepted for publication in the ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences (online from October 2018
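When training on a patch dataset like SEN1-2, a common pitfall is splitting train/validation at the patch level: patches cut from the same scene overlap spatially, so a random split leaks information. A small hedged sketch of a scene-level split (the record layout is a hypothetical example, not the dataset's actual file format):

```python
def split_by_scene(pairs, val_scenes):
    """Split (scene_id, sar_patch, optical_patch) records at scene level.

    Holding out whole scenes, rather than random patches, prevents
    spatially overlapping patches from appearing on both sides of the
    split. The record layout here is a hypothetical example.
    """
    val_scenes = set(val_scenes)
    train = [p for p in pairs if p[0] not in val_scenes]
    val = [p for p in pairs if p[0] in val_scenes]
    return train, val
```

Since SEN1-2 spans the globe and all meteorological seasons, holding out entire scenes (or seasons) also gives a more honest estimate of generalization than a patch-level shuffle.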
Deep internal learning for inpainting of cloud-affected regions in satellite imagery
Cloud cover remains a significant limitation to a broad range of applications relying on optical remote sensing imagery, including crop identification/yield prediction, climate monitoring, and land cover classification. A common approach to cloud removal treats the problem as an inpainting task and imputes optical data in the cloud-affected regions, either by mosaicking historical data or by making use of sensing modalities not impacted by cloud obstruction, such as SAR. Recently, deep learning approaches have been explored in these applications; however, the majority of reported solutions rely on external learning practices, i.e., models trained on fixed datasets. Although these models perform well within the context of a particular dataset, a significant risk of spatial and temporal overfitting exists when applied in different locations or at different times. Here, cloud removal was implemented within an internal learning regime through an inpainting technique based on the deep image prior. The approach was evaluated on both a synthetic dataset with an exact ground truth and real samples. The ability to inpaint the cloud-affected regions for varying weather conditions across a whole year with no prior training was demonstrated, and the performance of the approach was characterised.
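The distinguishing feature of internal learning is that the optimisation uses only the single observed image, with a data term restricted to the reliable (cloud-free) pixels. The deep image prior does this by fitting a CNN; the toy below conveys the same idea with a plain smoothness prior and gradient descent on the image itself (an illustrative sketch, not the paper's method):

```python
import numpy as np

def inpaint_internal(obs, valid, iters=500, lr=0.05, lam=1.0):
    """Toy internal-learning inpainting (not the deep image prior itself).

    obs:   observed image, unreliable where clouds sit
    valid: mask, 1 on cloud-free pixels, 0 on cloud-affected ones
    Minimises a data term on valid pixels plus a smoothness prior by
    gradient descent on the image itself, using no external training
    data -- mirroring the internal-learning idea in spirit.
    """
    # Initialise cloudy pixels with the mean of the clear ones.
    x = np.where(valid == 1, obs, obs[valid == 1].mean()).astype(float)
    for _ in range(iters):
        data_grad = 2.0 * valid * (x - obs)
        # Discrete Laplacian (periodic boundaries via np.roll).
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        x -= lr * (data_grad - 2.0 * lam * lap)
    return x
```

The point of the abstract survives in the toy: nothing here was trained on a fixed external dataset, so there is no dataset to overfit to; the price is that all structure must be recovered from the single image at hand.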
Using Generative Adversarial Networks for Extraction of InSAR Signals from Large-Scale Sentinel-1 Interferograms by Improving Tropospheric Noise Correction
Spatiotemporal variations of pressure, temperature and water vapour content in the atmosphere lead to significant delays in interferometric synthetic aperture radar (InSAR) measurements of ground deformation. One of the key challenges in increasing the accuracy of ground deformation measurements using InSAR is producing robust estimates of the tropospheric delay. Tropospheric models like ERA-Interim can be used to estimate the total tropospheric delay in interferograms over remote areas. The problem with using the ERA-Interim model for interferogram correction is that some residuals remain in the interferograms after the correction, which can be attributed mainly to the turbulent troposphere. In this study, we propose a Generative Adversarial Network (GAN) based approach to mitigate the phase delay caused by the troposphere. In this method, we implement a noise-to-noise model, in which the network is trained only with interferograms corrupted by tropospheric noise. We applied the technique to 116 large-scale, 800 km long interferograms formed from Sentinel-1 acquisitions over descending track 108 over Iran, covering the period from 25 October 2014 to 2 November 2017. Our approach reduces the root mean square of the interferogram phase values by 64% compared to the original interferograms, and by 55% in comparison to the corresponding ERA-Interim corrected versions.
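The 64% and 55% figures are RMS reductions of the residual phase relative to a reference interferogram. A small sketch of how such a metric is computed (function names are hypothetical, not from the paper):

```python
import numpy as np

def rms(phase):
    """Root mean square of interferogram phase values."""
    return float(np.sqrt(np.mean(np.square(phase))))

def rms_reduction_pct(reference, corrected):
    """Percentage RMS reduction of a corrected interferogram
    relative to a reference (e.g. the uncorrected original, or an
    ERA-Interim corrected version)."""
    return 100.0 * (1.0 - rms(corrected) / rms(reference))
```

Note the metric depends on the chosen reference, which is why the abstract reports two numbers: one against the original interferograms and one against the ERA-Interim corrected versions.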