Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X
Contrary to optical images, Synthetic Aperture Radar (SAR) images occupy a
different part of the electromagnetic spectrum, one to which the human visual
system is not accustomed. Thus, with more and more SAR applications, the demand
for enhanced, high-quality SAR images has increased considerably. However,
high-quality SAR images entail high costs due to the limitations of current SAR
devices and their image processing resources. To improve the quality of SAR
images and to reduce the costs of their generation, we propose a Dialectical
Generative Adversarial Network (Dialectical GAN) to generate high-quality SAR
images. This method is based on the analysis of hierarchical SAR information
and the "dialectical" structure of GAN frameworks. As a demonstration, a
typical example will be shown where a low-resolution SAR image (e.g., a
Sentinel-1 image) with large ground coverage is translated into a
high-resolution SAR image (e.g., a TerraSAR-X image). Three traditional
algorithms are compared, and a new algorithm is proposed based on a network
framework by combining conditional WGAN-GP (Wasserstein Generative Adversarial
Network - Gradient Penalty) loss functions and Spatial Gram matrices under the
rule of dialectics. Experimental results show that the SAR image translation
works very well when we compare the results of our proposed method with the
selected traditional methods.
Comment: 22 pages, 15 figures
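The abstract above combines a conditional WGAN-GP loss with "Spatial Gram matrices" for the translation network. As a rough illustration of the Gram-matrix half only, here is a minimal numpy sketch of a plain Gram-matrix style term; the spatially-aware variant and loss weights used in the paper are not given in the abstract, and the function names here are hypothetical:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (C, H, W):
    channel-by-channel inner products, normalized by spatial size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (h * w)         # (C, C) second-order statistics

def gram_style_loss(feat_a, feat_b):
    """Squared Frobenius distance between Gram matrices, a common
    style/texture term in image-translation objectives."""
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.sum((ga - gb) ** 2))

# identical feature maps give zero style loss
x = np.random.rand(4, 8, 8)
```

In practice this term would be computed on intermediate network activations, not raw pixels, and added to the adversarial WGAN-GP objective.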
An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images
Due to the reduction of technological costs and the increase of satellites
launches, satellite images are becoming more popular and easier to obtain.
Besides serving benevolent purposes, satellite data can also be used for
malicious reasons such as misinformation. As a matter of fact, satellite images
can be easily manipulated relying on general image editing tools. Moreover,
with the surge of Deep Neural Networks (DNNs) that can generate realistic
synthetic imagery belonging to various domains, additional threats related to
the diffusion of synthetically generated satellite images are emerging. In this
paper, we review the State of the Art (SOTA) on the generation and manipulation
of satellite images. In particular, we focus on both the generation of
synthetic satellite imagery from scratch, and the semantic manipulation of
satellite images by means of image-transfer technologies, including the
transformation of images obtained from one type of sensor to another one. We
also describe forensic detection techniques that have been researched so far to
classify and detect synthetic image forgeries. While we focus mostly on
forensic techniques explicitly tailored to the detection of AI-generated
synthetic contents, we also review some methods designed for general splicing
detection, which can in principle also be used to spot AI-manipulated images.
Comment: 25 pages, 17 figures, 5 tables, APSIPA 202
A Benchmarking Protocol for SAR Colorization: From Regression to Deep Learning Approaches
Synthetic aperture radar (SAR) images are widely used in remote sensing.
Interpreting SAR images can be challenging due to their intrinsic speckle noise
and grayscale nature. To address this issue, SAR colorization has emerged as a
research direction that colorizes grayscale SAR images while preserving the
original spatial and radiometric information. However, this
research field is still in its early stages, and many limitations can be
highlighted. In this paper, we propose a full research line for supervised
learning-based approaches to SAR colorization. Our approach includes a protocol
for generating synthetic color SAR images, several baselines, and an effective
method based on the conditional generative adversarial network (cGAN) for SAR
colorization. We also propose numerical assessment metrics for the problem at
hand. To our knowledge, this is the first attempt to propose a research line
for SAR colorization that includes a protocol, a benchmark, and a complete
performance evaluation. Our extensive tests demonstrate the effectiveness of
our proposed cGAN-based network for SAR colorization. The code will be made
publicly available.
Comment: 16 pages, 16 figures, 6 tables
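The abstract above mentions proposed numerical assessment metrics for SAR colorization without listing them. As a hedged example of one standard candidate for such an evaluation (not necessarily the metric the paper uses), a minimal PSNR implementation in numpy:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference color image
    and a colorized estimate, with pixel values in [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))

# toy example: constant error of 0.1 per pixel gives 20 dB
ref = np.zeros((8, 8, 3))
est = np.full((8, 8, 3), 0.1)
```

A full benchmark would pair such a fidelity metric with perceptual and colorfulness measures, since PSNR alone rewards overly smooth colorizations.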
Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure
The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often used because they favor the comparison of local intensity distributions across the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) in order to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective for multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures moves the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility of processing a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration than state-of-the-art approaches.
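Once the two images are in a common domain, the registration step above reduces to optimizing a simple ℓ2 area-based measure over transformation parameters. As a toy sketch under simplifying assumptions (same-domain patches, integer translations only, exhaustive search standing in for the paper's COBYLA optimizer):

```python
import numpy as np

def l2_dissimilarity(a, b):
    """Area-based l2 measure between two co-sized patches."""
    return float(np.sum((a - b) ** 2))

def best_translation(fixed, moving, max_shift=3):
    """Exhaustive search over integer translations minimizing the l2
    measure; a simple stand-in for constrained optimization (COBYLA)."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = l2_dissimilarity(fixed, shifted)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# recover a known synthetic misalignment
rng = np.random.default_rng(0)
img = rng.random((16, 16))
shifted_img = np.roll(np.roll(img, -2, axis=0), 1, axis=1)
```

The point of the cGAN translation step is precisely that this cheap ℓ2 comparison becomes meaningful across sensors once both images look like the same modality.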
SAR-to-Optical Image Translation via Thermodynamics-inspired Network
Synthetic aperture radar (SAR) is prevalent in the remote sensing field but
is difficult for the human visual system to interpret. Recently, SAR-to-optical
(S2O) image conversion methods have provided a prospective solution for
interpretation. However, since there is a huge domain gap between optical and
SAR images, existing methods suffer from low image quality and geometric
distortion in the produced optical images. Motivated by the analogy between
pixels during the S2O image translation and molecules in a heat field,
Thermodynamics-inspired Network for SAR-to-Optical Image Translation (S2O-TDN)
is proposed in this paper. Specifically, we design a Third-order Finite
Difference (TFD) residual structure in light of the TFD equation of
thermodynamics, which allows us to efficiently extract inter-domain invariant
features and facilitate the learning of the nonlinear translation mapping. In
addition, we exploit the first law of thermodynamics (FLT) to devise an
FLT-guided branch that promotes the state transition of the feature values from
the unstable diffusion state to the stable one, aiming to regularize the
feature diffusion and preserve image structures during S2O image translation.
S2O-TDN follows an explicit design principle derived from thermodynamic theory
and enjoys the advantage of explainability. Experiments on the public SEN1-2
dataset show the advantages of the proposed S2O-TDN over current methods,
yielding finer textures and higher quantitative results.
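The TFD residual structure above is built on the third-order finite difference equation from thermodynamics. The abstract does not specify the network block itself, but the underlying difference operator it draws on can be sketched on a 1-D signal as follows (function name hypothetical):

```python
import numpy as np

def third_order_fd(x):
    """Third-order forward finite difference along a 1-D signal:
    d3[i] = x[i+3] - 3*x[i+2] + 3*x[i+1] - x[i]."""
    return x[3:] - 3 * x[2:-1] + 3 * x[1:-2] - x[:-3]

# sanity check: a cubic polynomial has a constant third difference
t = np.arange(8.0)
d3 = third_order_fd(t ** 3)   # constant 6 for f(t) = t^3
```

In the paper's setting, an analogous operator acts on feature maps inside a residual block; the 1-D version here only illustrates the numerics.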
Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery
This work has been accepted by IEEE TGRS for publication. The majority of
optical observations acquired via spaceborne earth imagery are affected by
clouds. While there is ample prior work on reconstructing cloud-covered
information, previous studies are often confined to narrowly defined
regions of interest, raising the question of whether an approach can generalize
to a diverse set of observations acquired at variable cloud coverage or in
different regions and seasons. We target the challenge of generalization by
curating a large novel data set for training new cloud removal approaches and
evaluate on two recently proposed performance metrics of image quality and
diversity. Our data set is the first publicly available to contain a global
sample of co-registered radar and optical observations, cloudy as well as
cloud-free. Based on the observation that cloud coverage varies widely between
clear skies and absolute coverage, we propose a novel model that can deal with
either extreme and evaluate its performance on our proposed data set. Finally,
we demonstrate the superiority of training models on real over synthetic data,
underlining the need for a carefully curated data set of real observations. To
facilitate future research, our data set is made available online.
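The fusion idea above pairs co-registered radar and optical observations so that cloud-covered optical pixels can be reconstructed from SAR. As a deliberately simplified toy sketch of mask-guided fusion (not the paper's learned model, and with hypothetical function names): keep the optical pixel where the sky is clear, fall back to a SAR-derived estimate under clouds.

```python
import numpy as np

def fuse_cloudy_optical(optical, sar_prediction, cloud_mask):
    """Toy mask-guided fusion: trust the optical pixel where the cloud
    mask is clear (0), use the SAR-based estimate where cloudy (1)."""
    return np.where(cloud_mask.astype(bool), sar_prediction, optical)

# toy data: top half of the scene is cloudy
optical = np.ones((4, 4))
sar_pred = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:2] = 1
fused = fuse_cloudy_optical(optical, sar_pred, mask)
```

A learned model generalizing across the coverage extremes effectively replaces both the hard mask and the SAR-to-optical estimate with a single end-to-end mapping.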