Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery
Change detection is one of the central problems in earth observation and was
extensively investigated over recent decades. In this paper, we propose a novel
recurrent convolutional neural network (ReCNN) architecture, which is trained
to learn a joint spectral-spatial-temporal feature representation in a unified
framework for change detection in multispectral images. To this end, we bring
together a convolutional neural network (CNN) and a recurrent neural network
(RNN) into one end-to-end network. The former is able to generate rich
spectral-spatial feature representations, while the latter effectively analyzes
temporal dependency in bi-temporal images. In comparison with previous
approaches to change detection, the proposed network architecture possesses
three distinctive properties: 1) It is end-to-end trainable, in contrast to
most existing methods whose components are separately trained or computed; 2)
it naturally harnesses spatial information, which has been proven to be
beneficial to the change detection task; 3) it is capable of adaptively
learning the temporal dependency between multitemporal images, unlike most
algorithms, which use fairly simple operations such as image differencing or
stacking. As far as
we know, this is the first time that a recurrent convolutional network
architecture has been proposed for multitemporal remote sensing image analysis.
The proposed network is validated on real multispectral data sets. Both visual
and quantitative analyses of the experimental results demonstrate the
competitive performance of the proposed model.
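The joint CNN+RNN idea described above can be illustrated with a minimal sketch: a shared convolutional extractor produces a spectral-spatial feature vector for each acquisition date, and a simple recurrent unit then models the temporal dependency between the two dates. All shapes, kernel sizes, and weights below are hypothetical toy choices, not the authors' ReCNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(img, kernels):
    """Valid 2-D convolution of an (H, W, C) patch with K kernels,
    followed by ReLU and global average pooling -> (K,) feature vector."""
    H, W, C = img.shape
    K, kh, kw, _ = kernels.shape
    out = np.zeros(K)
    for k in range(K):
        acc = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(acc.shape[0]):
            for j in range(acc.shape[1]):
                acc[i, j] = np.sum(img[i:i+kh, j:j+kw, :] * kernels[k])
        out[k] = np.maximum(acc, 0).mean()  # ReLU + global average pool
    return out

def rnn_step(h, x, Wh, Wx):
    """One tanh recurrent step: h' = tanh(Wh h + Wx x)."""
    return np.tanh(Wh @ h + Wx @ x)

# Toy bi-temporal 5-band patches (a multispectral pixel neighbourhood).
t1 = rng.random((9, 9, 5))
t2 = rng.random((9, 9, 5))

K, D = 8, 8
kernels = rng.standard_normal((K, 3, 3, 5)) * 0.1
Wh = rng.standard_normal((D, D)) * 0.1
Wx = rng.standard_normal((D, K)) * 0.1

# CNN branch: weights shared across dates (spectral-spatial features).
f1, f2 = conv_features(t1, kernels), conv_features(t2, kernels)

# RNN branch: feed the two dates in temporal order.
h = np.zeros(D)
h = rnn_step(h, f1, Wh, Wx)
h = rnn_step(h, f2, Wh, Wx)

# A linear classification head on the final state h would then yield
# the change / no-change decision; here we just inspect the state.
print(h.shape)
```

Because the whole pipeline is differentiable, both the convolutional kernels and the recurrent weights can be trained end to end, which is the first distinctive property the abstract highlights.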
Self-supervised Multisensor Change Detection
Most change detection methods assume that pre-change and post-change images
are acquired by the same sensor. However, in many real-life scenarios, e.g.,
natural disasters, it is more practical to use the latest available images
before and after the incident, which may be acquired using
different sensors. In particular, we are interested in the combination of the
images acquired by optical and Synthetic Aperture Radar (SAR) sensors. SAR
images appear vastly different from the optical images even when capturing the
same scene. Adding to this, change detection methods are often constrained to
use only the target image pair, no labeled data, and no additional unlabeled data.
Such constraints limit the scope of traditional supervised machine learning and
unsupervised generative approaches for multi-sensor change detection. Recent
rapid development of self-supervised learning methods has shown that some of
them can work with only a few images. Motivated by this, in this work we
propose a method for multi-sensor change detection that uses only the unlabeled
target bi-temporal images, training a network in a self-supervised fashion
via deep clustering and contrastive learning. The
proposed method is evaluated on four multi-modal bi-temporal scenes showing
change, and the benefits of our self-supervised approach are demonstrated.
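The two self-supervised ingredients named in this abstract can be sketched in isolation: a contrastive (InfoNCE-style) loss that aligns co-located optical and SAR features, and plain k-means clustering to produce pseudo-labels in the deep-clustering style. This is an illustrative sketch, not the paper's code; the feature dimension, temperature, and encoder outputs are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE loss: each optical feature z_a[i] should match its
    co-located SAR feature z_b[i] against all other positions."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                    # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))               # positives on the diagonal

def kmeans_labels(z, k=2, iters=10):
    """Plain k-means to obtain cluster pseudo-labels (deep-clustering style);
    for change detection, k=2 would correspond to change / no-change."""
    centers = z[rng.choice(len(z), k, replace=False)]
    for _ in range(iters):
        dists = ((z[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = z[labels == c].mean(axis=0)
    return labels

# Toy per-pixel features from assumed optical and SAR encoders;
# the SAR features are a noisy version of the optical ones.
N, D = 64, 16
z_opt = rng.standard_normal((N, D))
z_sar = z_opt + 0.1 * rng.standard_normal((N, D))

loss = info_nce(z_opt, z_sar)
labels = kmeans_labels(z_opt)
print(labels.shape)
```

Minimizing the contrastive loss pulls features of the same location across sensors together, so that after training, locations whose optical and SAR features still disagree can be flagged as changed.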