Deep Image Translation With an Affinity-Based Change Prior for Unsupervised Multimodal Change Detection
© 2021 IEEE. Image translation with convolutional neural networks has recently been used as an approach to multimodal change detection. Existing approaches train the networks by exploiting supervised information about the change areas, which, however, is not always available. A main challenge in the unsupervised problem setting is to prevent change pixels from affecting the learning of the translation function. We propose two new network architectures trained with loss functions weighted by priors that reduce the impact of change pixels on the learning objective. The change prior is derived in an unsupervised fashion from relational pixel information captured by domain-specific affinity matrices. Specifically, we use the vertex degrees associated with an absolute affinity difference matrix and demonstrate their utility in combination with cycle consistency and adversarial training. The proposed neural networks are compared with state-of-the-art algorithms. Experiments conducted on three real data sets show the effectiveness of our methodology.
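The affinity-based change prior can be sketched as follows. The Gaussian intensity kernel, the `sigma` value, and the full pairwise affinity computation are illustrative assumptions for small images; the paper's exact kernel and patch scheme are not reproduced here.

```python
import numpy as np

def change_prior(x, y, sigma=0.1):
    """Unsupervised change prior from domain-specific affinity matrices.

    x, y: flattened single-band images (n pixels) from the two modalities.
    A Gaussian kernel on intensities is a hypothetical choice of affinity.
    """
    # Pairwise affinities within each domain (n x n; small n assumed).
    ax = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    ay = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))
    # Vertex degrees of the absolute affinity difference matrix: a pixel
    # whose relations to the rest of the image differ strongly between
    # the two domains is likely a change pixel.
    d = np.abs(ax - ay).sum(axis=1)
    return d / d.max()  # normalized prior in [0, 1]
```

Pixels with the largest prior can then be down-weighted in the translation loss so that changed areas do not drive the learning objective.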
SAR2EO: A High-resolution Image Translation Framework with Denoising Enhancement
Synthetic Aperture Radar (SAR) to electro-optical (EO) image translation is a
fundamental task in remote sensing that can enrich the dataset by fusing
information from different sources. Recently, many methods have been proposed
to tackle this task, but they still struggle to complete the conversion
from low-resolution images to high-resolution images. Thus, we propose a
framework, SAR2EO, aiming at addressing this challenge. Firstly, to generate
high-quality EO images, we adopt the coarse-to-fine generator, multi-scale
discriminators, and improved adversarial loss in the pix2pixHD model to
increase the synthesis quality. Secondly, we introduce a denoising module to
remove the noise in SAR images, which helps to suppress the noise while
preserving the structural information of the images. To validate the
effectiveness of the proposed framework, we conduct experiments on the dataset
of the Multi-modal Aerial View Imagery Challenge (MAVIC), which consists of
large-scale SAR and EO image pairs. The experimental results demonstrate the
superiority of our proposed framework, with which we won first place in the
MAVIC challenge held at CVPR PBVS 2023.
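The abstract does not detail the denoising module itself; as a hedged stand-in, a classical adaptive Lee speckle filter illustrates the stated goal of suppressing SAR noise while preserving structural information (the constant `noise_var` estimate is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, noise_var=0.05):
    """Classical Lee speckle filter: a simple stand-in for a learned
    denoising module. Smooths homogeneous regions while preserving
    edges and structure."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    # Adaptive weight: close to 1 on structure (high local variance),
    # close to 0 on flat, noise-dominated areas.
    w = var / (var + noise_var)
    return mean + w * (img - mean)
```

On homogeneous areas the output collapses toward the local mean, while high-variance edges pass through largely unchanged.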
Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery
This work has been accepted by IEEE TGRS for publication. The majority of
optical observations acquired via spaceborne earth imagery are affected by
clouds. While there is numerous prior work on reconstructing cloud-covered
information, previous studies are oftentimes confined to narrowly-defined
regions of interest, raising the question of whether an approach can generalize
to a diverse set of observations acquired at variable cloud coverage or in
different regions and seasons. We target the challenge of generalization by
curating a large novel data set for training new cloud removal approaches and
evaluate on two recently proposed performance metrics of image quality and
diversity. Our data set is the first publicly available one to contain a global
sample of co-registered radar and optical observations, cloudy as well as
cloud-free. Based on the observation that cloud coverage varies widely between
clear skies and complete coverage, we propose a novel model that can handle
either extreme and evaluate its performance on our proposed data set. Finally,
we demonstrate the superiority of training models on real over synthetic data,
underlining the need for a carefully curated data set of real observations. To
facilitate future research, our data set is made available online.
A Benchmarking Protocol for SAR Colorization: From Regression to Deep Learning Approaches
Synthetic aperture radar (SAR) images are widely used in remote sensing.
Interpreting SAR images can be challenging due to their intrinsic speckle noise
and grayscale nature. To address this issue, SAR colorization has emerged as a
research direction to colorize grayscale SAR images while preserving the
original spatial and radiometric information. However, this
research field is still in its early stages, and many limitations can be
highlighted. In this paper, we propose a full research line for supervised
learning-based approaches to SAR colorization. Our approach includes a protocol
for generating synthetic color SAR images, several baselines, and an effective
method based on the conditional generative adversarial network (cGAN) for SAR
colorization. We also propose numerical assessment metrics for the problem at
hand. To our knowledge, this is the first attempt to propose a research line
for SAR colorization that includes a protocol, a benchmark, and a complete
performance evaluation. Our extensive tests demonstrate the effectiveness of
our proposed cGAN-based network for SAR colorization. The code will be made
publicly available. (16 pages, 16 figures, 6 tables)
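The abstract does not list the proposed assessment metrics; as one plausible full-reference component of such a benchmark, a PSNR computation between a colorized output and its synthetic color SAR reference is sketched below (the choice of PSNR is an assumption, not the paper's metric suite):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio, a standard full-reference metric
    for judging how closely a colorized image matches its reference."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```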
Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure
The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often favored because they compare local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective when applied to multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures allows moving the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility to process a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration compared to state-of-the-art approaches.
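The area-based step can be sketched with SciPy: COBYLA searches for a translation that minimizes the ℓ2 dissimilarity between the images (equivalently, maximizes the similarity). The translation-only motion model and the `max_shift` bound are simplifying assumptions; the paper's transformation model may differ.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def register_l2(fixed, moving, max_shift=8.0):
    """Estimate a 2-D translation aligning `moving` to `fixed` by
    minimizing the l2 dissimilarity with COBYLA (translation-only,
    for illustration)."""
    def cost(t):
        warped = nd_shift(moving, t, order=1, mode="nearest")
        return np.sum((fixed - warped) ** 2)

    # Constrain the search to a plausible shift range.
    cons = [{"type": "ineq", "fun": lambda t: max_shift - np.abs(t).max()}]
    res = minimize(cost, x0=np.zeros(2), method="COBYLA", constraints=cons)
    return res.x  # estimated (row, col) shift
```

Because the two inputs are assumed to already live in a common domain after the cGAN translation step, this simple intensity-based cost suffices where a direct optical-to-SAR comparison would fail.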