SRDA-Net: Super-Resolution Domain Adaptation Networks for Semantic Segmentation
Recently, Unsupervised Domain Adaptation was proposed to address the domain
shift problem in semantic segmentation, but it may perform poorly when the
source and target domains have different resolutions. In this work, we
design a novel end-to-end semantic segmentation network, the Super-Resolution
Domain Adaptation Network (SRDA-Net), which completes super-resolution and
domain adaptation simultaneously. This property exactly meets the
requirements of semantic segmentation for remote sensing images, which
usually involve various resolutions. SRDA-Net comprises three deep neural
networks: a Super-Resolution and Segmentation (SRS) model that focuses on
recovering the high-resolution image and predicting the segmentation map; a
pixel-level domain classifier (PDC) that tries to distinguish which domain an
image comes from; and an output-space domain classifier (ODC) that
discriminates which domain a pixel label distribution comes from. PDC and ODC
serve as the discriminators, and SRS is treated as the generator. Through
adversarial learning, SRS tries to align the source and target domains in
pixel-level visual appearance and in output space. Experiments are conducted
on two remote sensing datasets with different resolutions. SRDA-Net performs
favorably against the state-of-the-art
methods in terms of accuracy and visual quality. Code and models are available
at https://github.com/tangzhenjie/SRDA-Net
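The adversarial setup described in the abstract — discriminators (PDC, ODC) learning to tell domains apart while the generator (SRS) learns to fool them — can be sketched as a pair of loss functions. This is a minimal illustration with NumPy, not the authors' implementation; the function names and toy probabilities are hypothetical.

```python
import numpy as np

def discriminator_loss(d_source, d_target):
    """Binary cross-entropy for a domain classifier (e.g. PDC or ODC):
    source-domain predictions should approach 1, target-domain 0."""
    eps = 1e-8
    loss_src = -np.mean(np.log(d_source + eps))        # label 1 for source
    loss_tgt = -np.mean(np.log(1.0 - d_target + eps))  # label 0 for target
    return loss_src + loss_tgt

def generator_adversarial_loss(d_target):
    """The generator (SRS in the paper) tries to fool the discriminator:
    target-domain outputs should be classified as source (label 1)."""
    eps = 1e-8
    return -np.mean(np.log(d_target + eps))

# Toy domain-classifier outputs (probabilities of "source domain")
d_src = np.array([0.9, 0.8, 0.95])   # discriminator is confident on source
d_tgt = np.array([0.2, 0.1, 0.3])    # discriminator is confident on target
print(discriminator_loss(d_src, d_tgt))   # small: discriminator is winning
print(generator_adversarial_loss(d_tgt))  # large: generator must adapt further
```

Minimizing the generator loss pushes the target-domain predictions toward the source domain, which is the pixel-level and output-space alignment the abstract describes.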
A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery
Semantic segmentation (classification) of Earth Observation imagery is a
crucial task in remote sensing. This paper presents a comprehensive review of
technical factors to consider when designing neural networks for this purpose.
The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural
Networks (RNNs), Generative Adversarial Networks (GANs), and transformer
models, discussing prominent design patterns for these ANN families and their
implications for semantic segmentation. Common pre-processing techniques for
ensuring optimal data preparation are also covered. These include methods for
image normalization and chipping, as well as strategies for addressing data
imbalance in training samples, and techniques for overcoming limited data,
including augmentation techniques, transfer learning, and domain adaptation. By
encompassing both the technical aspects of neural network design and the
data-related considerations, this review provides researchers and practitioners
with a comprehensive and up-to-date understanding of the factors involved in
designing effective neural networks for semantic segmentation of Earth
Observation imagery. Comment: 145 pages with 32 figures.
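Two of the pre-processing steps the review covers, image chipping and normalization, can be sketched as follows. This is a generic illustration under assumed parameters (chip size, stride, band count), not a recipe from the review itself.

```python
import numpy as np

def chip_image(image, chip_size, stride):
    """Split a large scene (H, W, C) into chips for training.
    Chips that would run past the image border are skipped for simplicity."""
    h, w = image.shape[:2]
    chips = []
    for y in range(0, h - chip_size + 1, stride):
        for x in range(0, w - chip_size + 1, stride):
            chips.append(image[y:y + chip_size, x:x + chip_size])
    return np.stack(chips)

def normalize(chips):
    """Global min-max normalization to [0, 1], computed per band --
    one common choice for Earth Observation data preparation."""
    lo = chips.min(axis=(0, 1, 2), keepdims=True)
    hi = chips.max(axis=(0, 1, 2), keepdims=True)
    return (chips - lo) / (hi - lo + 1e-8)

# A hypothetical 4-band scene with 12-bit radiometric range
scene = np.random.randint(0, 4096, size=(256, 256, 4)).astype(np.float32)
chips = normalize(chip_image(scene, chip_size=64, stride=64))
print(chips.shape)  # (16, 64, 64, 4)
```

With a stride equal to the chip size the chips tile the scene without overlap; a smaller stride yields overlapping chips, which is a simple form of the data augmentation the review also discusses.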
Domain Adaptation for Satellite-Borne Hyperspectral Cloud Detection
The advent of satellite-borne machine learning hardware accelerators has
enabled the on-board processing of payload data using machine learning
techniques such as convolutional neural networks (CNN). A notable example is
using a CNN to detect the presence of clouds in hyperspectral data captured on
Earth observation (EO) missions, whereby only clear sky data is downlinked to
conserve bandwidth. However, prior to deployment, new missions that employ new
sensors will not have enough representative datasets to train a CNN model,
while a model trained solely on data from previous missions will underperform
when deployed to process the data on the new missions. This underperformance
stems from the domain gap, i.e., differences in the underlying distributions of
the data generated by the different sensors in previous and future missions. In
this paper, we address the domain gap problem in the context of on-board
hyperspectral cloud detection. Our main contributions lie in formulating new
domain adaptation tasks that are motivated by a concrete EO mission, developing
a novel algorithm for bandwidth-efficient supervised domain adaptation, and
demonstrating test-time adaptation algorithms on space deployable neural
network accelerators. Our contributions enable minimal data transmission to be
invoked (e.g., only 1% of the weights in ResNet50) to achieve domain
adaptation, thereby allowing more sophisticated CNN models to be deployed and
updated on satellites without being hampered by domain gap and bandwidth
limitations.
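The bandwidth argument — updating only about 1% of a ResNet50's weights — can be made concrete by restricting adaptation to a small, designated parameter subset (for instance, normalization-layer affine parameters). The inventory below is hypothetical and only illustrates the accounting, not the paper's actual algorithm.

```python
# Hypothetical parameter inventory for a ResNet-like model: name -> weight count.
# Only a small designated subset (here the batch-norm affine parameters) would
# be fine-tuned on the ground and uplinked, keeping the transmission tiny.
params = {
    "conv1.weight": 9408,
    "layer1.conv.weight": 221184,
    "layer1.bn.gamma": 256,
    "layer1.bn.beta": 256,
    "layer2.conv.weight": 884736,
    "layer2.bn.gamma": 512,
    "layer2.bn.beta": 512,
    "fc.weight": 2048,
}

def adaptable_fraction(params, keyword="bn"):
    """Fraction of weights that would be retrained and transmitted."""
    adaptable = sum(n for name, n in params.items() if keyword in name)
    total = sum(params.values())
    return adaptable / total

print(f"{adaptable_fraction(params):.4f}")  # a small fraction of all weights
```

Only the selected parameters need to cross the downlink/uplink, so the model on orbit can be refreshed without retransmitting the full network.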
Application of Convolutional Neural Network in the Segmentation and Classification of High-Resolution Remote Sensing Images
Numerous convolutional neural networks increase classification accuracy for remote sensing scene images at the expense of the model's space and time complexity. This causes the model to run slowly and prevents a trade-off between model accuracy and running time. The loss of deep features as the network gets deeper makes it impossible to retrieve the key aspects with a simple double-branch structure, which hinders the classification of remote sensing scene photos.
Remote Sensing Object Detection Meets Deep Learning: A Meta-review of Challenges and Advances
Remote sensing object detection (RSOD), one of the most fundamental and
challenging tasks in the remote sensing field, has received longstanding
attention. In recent years, deep learning techniques have demonstrated robust
feature representation capabilities and led to a big leap in the development of
RSOD techniques. In this era of rapid technical evolution, this review aims to
present a comprehensive review of the recent achievements in deep learning
based RSOD methods. More than 300 papers are covered in this review. We
identify five main challenges in RSOD, including multi-scale object detection,
rotated object detection, weak object detection, tiny object detection, and
object detection with limited supervision, and systematically review the
corresponding methods developed in a hierarchical division manner. We also
review the widely used benchmark datasets and evaluation metrics within the
field of RSOD, as well as the application scenarios for RSOD. Future research
directions are provided for further promoting the research in RSOD. Comment: Accepted by
IEEE Geoscience and Remote Sensing Magazine. More than 300 papers relevant to the RSOD field were reviewed in this survey.
Aggregated Deep Local Features for Remote Sensing Image Retrieval
Remote Sensing Image Retrieval remains a challenging topic due to the special
nature of Remote Sensing Imagery. Such images contain various different
semantic objects, which clearly complicates the retrieval task. In this paper,
we present an image retrieval pipeline that uses attentive, local convolutional
features and aggregates them using the Vector of Locally Aggregated Descriptors
(VLAD) to produce a global descriptor. We study various system parameters such
as the multiplicative and additive attention mechanisms and descriptor
dimensionality. We propose a query expansion method that requires no external
inputs. Experiments demonstrate that even without training, the local
convolutional features and global representation outperform other systems.
After system tuning, we can achieve state-of-the-art or competitive results.
Furthermore, we observe that our query expansion method increases overall
system performance by about 3%, using only the top-three retrieved images.
Finally, we show how dimensionality reduction produces compact descriptors with
increased retrieval performance and fast retrieval computation times, e.g. 50%
faster than the current systems. Comment: Published in Remote Sensing. The first two authors have equal
contribution.
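The aggregation step named in the abstract, VLAD, sums the residuals of local descriptors against a visual-word codebook and normalizes the result into a single global descriptor. Below is a minimal NumPy sketch of standard VLAD with signed-square-root and L2 normalization; the feature dimensions and codebook are made up for illustration, and the paper's attention mechanisms are not modeled here.

```python
import numpy as np

def vlad(descriptors, centroids):
    """Vector of Locally Aggregated Descriptors: for each cluster, sum the
    residuals of its assigned local descriptors, then flatten and normalize."""
    # Assign each local descriptor to its nearest centroid (visual word)
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    k, d = centroids.shape
    agg = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assign == i]
        if len(members):
            agg[i] = (members - centroids[i]).sum(axis=0)
    v = agg.ravel()
    # Signed square-root (power) normalization followed by L2 normalization
    v = np.sign(v) * np.sqrt(np.abs(v))
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
local_feats = rng.normal(size=(100, 8))  # e.g. attentive local CNN features
codebook = rng.normal(size=(4, 8))       # K=4 visual words (hypothetical)
g = vlad(local_feats, codebook)
print(g.shape)  # (32,) global descriptor, ready for nearest-neighbor retrieval
```

Retrieval then reduces to comparing these fixed-length global descriptors, e.g. by cosine similarity, regardless of how many local features each image produced.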
UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation
Change detection (CD) by comparing two bi-temporal images is a crucial task
in remote sensing. With the advantages of requiring no cumbersome labeled
change information, unsupervised CD has attracted extensive attention in the
community. However, existing unsupervised CD approaches rarely consider the
seasonal and style differences incurred by the illumination and atmospheric
conditions in multi-temporal images. To this end, we propose a change detection
with domain shift setting for remote sensing images. Furthermore, we present a
novel unsupervised CD method using a light-weight transformer, called
UCDFormer. Specifically, a transformer-driven image translation composed of a
light-weight transformer and a domain-specific affinity weight is first
proposed to mitigate domain shift between two images with real-time efficiency.
After image translation, we can generate the difference map between the
translated before-event image and the original after-event image. Then, a novel
reliable pixel extraction module is proposed to select significantly
changed/unchanged pixel positions by fusing the pseudo change maps of fuzzy
c-means clustering and adaptive threshold. Finally, a binary change map is
obtained based on these selected pixel pairs and a binary classifier.
Experimental results on different unsupervised CD tasks with seasonal and style
changes demonstrate the effectiveness of the proposed UCDFormer. For example,
compared with several other related methods, UCDFormer improves performance on
the Kappa coefficient by more than 12%. In addition, UCDFormer achieves
excellent performance for earthquake-induced landslide detection when
considering large-scale applications. The code is available at
\url{https://github.com/zhu-xlab/UCDFormer} Comment: 16 pages, 7 figures, IEEE Transactions on Geoscience and Remote
Sensing.
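The back end of the pipeline — a difference map between the translated before-event image and the after-event image, followed by reliable-pixel selection — can be sketched as below. Note the simplification: the paper fuses fuzzy c-means pseudo change maps with an adaptive threshold, while this sketch uses a simple mean-plus-k-sigma threshold only, as a hedged stand-in.

```python
import numpy as np

def difference_map(before, after):
    """Per-pixel change magnitude between the (translated) before-event
    image and the original after-event image."""
    return np.linalg.norm(after.astype(float) - before.astype(float), axis=-1)

def binary_change_map(diff, k=1.0):
    """Simplified stand-in for the reliable-pixel step: mark pixels whose
    change magnitude exceeds an adaptive threshold of mean + k * std."""
    t = diff.mean() + k * diff.std()
    return diff > t

before = np.zeros((8, 8, 3))
after = before.copy()
after[2:4, 2:4] = 10.0  # a small synthetic "changed" patch
cm = binary_change_map(difference_map(before, after))
print(int(cm.sum()))  # 4 changed pixels detected
```

In the full method, the reliably changed/unchanged pixel pairs selected this way are then used to train the binary classifier that produces the final change map.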