313 research outputs found
Dynamical spectral unmixing of multitemporal hyperspectral images
In this paper, we consider the problem of unmixing a time series of
hyperspectral images. We propose a dynamical model based on linear mixing
processes at each time instant. The spectral signatures and fractional
abundances of the pure materials in the scene are seen as latent variables, and
assumed to follow a general dynamical structure. Based on a simplified version
of this model, we derive an efficient spectral unmixing algorithm to estimate
the latent variables by performing alternating minimizations. The performance
of the proposed approach is demonstrated on synthetic and real multitemporal
hyperspectral images.
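A minimal sketch of the alternating-minimization idea behind linear unmixing: fixing the endmember matrix M, solve a least-squares problem for the abundances A, then swap roles, with simple nonnegativity and sum-to-one projections. All sizes and variable names are illustrative; this is not the authors' algorithm, which additionally models temporal dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear mixture Y ~= M A: 50 bands, 3 endmembers, 100 pixels.
bands, n_end, n_pix = 50, 3, 100
M_true = rng.random((bands, n_end))
A_true = rng.dirichlet(np.ones(n_end), size=n_pix).T   # columns sum to one
Y = M_true @ A_true + 0.01 * rng.standard_normal((bands, n_pix))

# Alternating minimization: fix M, solve for A; fix A, solve for M.
M = rng.random((bands, n_end))
for _ in range(200):
    A = np.clip(np.linalg.lstsq(M, Y, rcond=None)[0], 0, None)  # abundance step
    A /= A.sum(axis=0, keepdims=True) + 1e-12                   # sum-to-one
    M = np.clip(np.linalg.lstsq(A.T, Y.T, rcond=None)[0].T, 0, None)  # endmember step

residual = np.linalg.norm(Y - M @ A) / np.linalg.norm(Y)
print(f"relative reconstruction error: {residual:.3f}")
```

The projections here (clipping and renormalization) are a crude stand-in for properly constrained least-squares solvers, but they convey how the two latent variables are estimated in turn.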
GETNET: A General End-to-end Two-dimensional CNN Framework for Hyperspectral Image Change Detection
Change detection (CD) is an important application of remote sensing, which provides timely information about changes on the large-scale Earth surface. With the emergence of hyperspectral imagery, CD technology has advanced considerably, as hyperspectral data with high spectral resolution can detect finer changes than traditional multispectral imagery. Nevertheless,
the high dimensionality of hyperspectral data makes it difficult to apply traditional CD algorithms. In addition, endmember abundance information at the subpixel level is often not fully utilized. To better handle the high-dimensionality problem and exploit abundance information, this paper presents a General
End-to-end Two-dimensional CNN (GETNET) framework for hyperspectral image
change detection (HSI-CD). The main contributions of this work are threefold:
1) A mixed-affinity matrix that integrates subpixel representation is introduced
to mine more cross-channel gradient features and fuse multi-source information;
2) A 2-D CNN is designed to learn discriminative features effectively from
multi-source data at a higher level and enhance the generalization ability of
the proposed CD algorithm; 3) A new HSI-CD data set is designed for the
objective comparison of different methods. Experimental results on real hyperspectral data sets demonstrate that the proposed method outperforms most state-of-the-art approaches.
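The abstract does not spell out how the mixed-affinity matrix is built, but one plausible reading is that per-pixel spectral and abundance features from the two dates are combined into a 2-D matrix whose entries pair channels across dates, which a 2-D CNN can then convolve over. The sketch below uses an outer product of concatenated features; all shapes and names are assumptions for illustration, not GETNET's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-pixel inputs: spectral vectors at two dates plus subpixel
# abundance vectors (illustrative sizes).
n_bands, n_end = 8, 3
spec_t1, spec_t2 = rng.random(n_bands), rng.random(n_bands)
abund_t1, abund_t2 = rng.dirichlet(np.ones(n_end)), rng.dirichlet(np.ones(n_end))

# Concatenate spectral + abundance features for each date, then take the
# outer product: every entry of the resulting 2-D matrix pairs one channel
# from date 1 with one from date 2, exposing cross-channel relations that a
# downstream 2-D CNN can learn from.
f1 = np.concatenate([spec_t1, abund_t1])
f2 = np.concatenate([spec_t2, abund_t2])
affinity = np.outer(f1, f2)   # shape (n_bands + n_end, n_bands + n_end)

print(affinity.shape)  # (11, 11)
```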
Kalman Filtering and Expectation Maximization for Multitemporal Spectral Unmixing
The recent evolution of hyperspectral imaging technology and the proliferation of emerging applications press for the processing of multiple temporal hyperspectral images. In this work, we propose a novel
spectral unmixing (SU) strategy using physically motivated parametric endmember
representations to account for temporal spectral variability. By representing
the multitemporal mixing process using a state-space formulation, we are able
to exploit the Bayesian filtering machinery to estimate the endmember
variability coefficients. Moreover, by assuming that the temporal variability
of the abundances is small over short intervals, an efficient implementation of
the expectation maximization (EM) algorithm is employed to estimate the
abundances and the other model parameters. Simulation results indicate that the
proposed strategy outperforms state-of-the-art multitemporal SU algorithms.
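The Bayesian filtering machinery referred to above reduces, in the linear-Gaussian case, to the standard Kalman predict/update recursion for a state-space model x_t = F x_{t-1} + w_t, y_t = H x_t + v_t. A generic sketch (dimensions and matrices are illustrative, not the paper's endmember model):

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One Kalman filter predict/update step for a linear-Gaussian model."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(2)
n_state, n_obs = 4, 6
F = np.eye(n_state)                       # static state for this toy run
H = rng.random((n_obs, n_state))
Q, R = 1e-4 * np.eye(n_state), 0.01 * np.eye(n_obs)

x, P = np.zeros(n_state), np.eye(n_state)
x_true = rng.random(n_state)
for _ in range(50):
    y = H @ x_true + 0.1 * rng.standard_normal(n_obs)  # noisy observation
    x, P = kalman_step(x, P, y, F, H, Q, R)

print(np.round(x, 2))
```

In the paper's setting the state would collect endmember variability coefficients, while an EM loop around the filter estimates the abundances and remaining model parameters.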
Online Graph-Based Change Point Detection in Multiband Image Sequences
The automatic detection of changes or anomalies between multispectral and
hyperspectral images collected at different time instants is an active and
challenging research topic. To effectively perform change-point detection in
multitemporal images, it is important to devise techniques that are
computationally efficient for processing large datasets, and that do not
require knowledge about the nature of the changes. In this paper, we introduce
a novel online framework for detecting changes in multitemporal remote sensing
images. Treating neighboring spectra as adjacent vertices in a graph, the algorithm focuses on anomalies that concurrently activate groups of vertices
corresponding to compact, well-connected and spectrally homogeneous image
regions. It fully benefits from recent advances in graph signal processing to
exploit the characteristics of the data that lie on irregular supports.
Moreover, the graph is estimated directly from the images using superpixel
decomposition algorithms. The learning algorithm is scalable in the sense that
it is efficient and spatially distributed. Experiments illustrate the detection
and localization performance of the method.
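The intuition of favoring compact, well-connected groups of activated vertices over isolated noisy pixels can be illustrated with a toy example: compute a per-vertex change score between two multiband images, then average it over a 4-neighbor grid graph before thresholding. This is only a simple stand-in for the graph signal processing machinery; sizes, thresholds, and the grid graph are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multiband image pair on an 8x8 grid with 5 bands.
h, w, b = 8, 8, 5
img1 = rng.random((h, w, b))
img2 = img1 + 0.02 * rng.standard_normal((h, w, b))
img2[2:5, 2:5] += 0.5          # a compact changed region

# Per-vertex change score: L2 distance between the two spectra.
score = np.linalg.norm(img2 - img1, axis=2)

# Graph smoothing over 4-neighbor adjacency: averaging each vertex's score
# with its neighbors rewards compact, well-connected changed regions and
# suppresses isolated noisy activations.
padded = np.pad(score, 1, mode="edge")
smoothed = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] + score) / 5.0

detected = smoothed > 0.5 * smoothed.max()
print(detected.astype(int))
```

In the paper the graph comes from superpixel decomposition rather than a pixel grid, and detection is online rather than a one-shot threshold.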
Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks
Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the
analysis of hyperspectral image sequences. It reveals the dynamical evolution
of the materials (endmembers) and of their proportions (abundances) in a given
scene. However, adequately accounting for the spatial and temporal variability
of the endmembers in MTHU is challenging, and has not been fully addressed so
far in unsupervised frameworks. In this work, we propose an unsupervised MTHU
algorithm based on variational recurrent neural networks. First, a stochastic
model is proposed to represent the dynamical evolution of the endmembers and their abundances, as well as the mixing process. Moreover, a new model
based on a low-dimensional parametrization is used to represent spatial and
temporal endmember variability, significantly reducing the number of variables
to be estimated. We propose to formulate MTHU as a Bayesian inference problem.
However, this problem does not admit an analytical solution due to the nonlinearity and non-Gaussianity of the model. Thus, we propose a
solution based on deep variational inference, in which the posterior
distribution of the estimated abundances and endmembers is represented by using
a combination of recurrent neural networks and a physically motivated model.
The parameters of the model are learned using stochastic backpropagation.
Experimental results show that the proposed method outperforms state-of-the-art MTHU algorithms.
Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. There are a huge number of research works
dedicated to multisource and multitemporal data fusion, but the methods for the
fusion of different modalities have expanded in different paths according to
each research community. This paper brings together the advances of multisource
and multitemporal data fusion approaches with respect to different research
communities and provides a thorough and discipline-specific starting point for
researchers at different levels (i.e., students, researchers, and senior
researchers) willing to conduct novel investigations on this challenging topic
by supplying sufficient detail and references.
Non-local tensor completion for multitemporal remotely sensed images inpainting
Remotely sensed images may contain some missing areas because of poor weather
conditions and sensor failure. Information of those areas may play an important
role in the interpretation of multitemporal remotely sensed data. The paper
aims at reconstructing the missing information by a non-local low-rank tensor
completion method (NL-LRTC). First, nonlocal correlations in the spatial domain
are taken into account by searching and grouping similar image patches in a
large search window. Then, low-rankness of the identified fourth-order tensor groups
is promoted to consider their correlations in spatial, spectral, and temporal
domains, while reconstructing the underlying patterns. Experimental results on
simulated and real data demonstrate that the proposed method is effective both
qualitatively and quantitatively. In addition, the proposed method is
computationally efficient compared to other patch-based methods such as the recently proposed PM-MTGSR method.
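The low-rank completion step can be illustrated with a simplified sketch: iterate singular-value thresholding on a group of similar patches while re-imposing the observed entries. NL-LRTC works on fourth-order tensor groups; here the idea is flattened to a matrix to keep the example short, and all sizes and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy low-rank "patch group" matrix with missing entries.
m, n, r = 40, 30, 3
X_true = rng.random((m, r)) @ rng.random((r, n))
mask = rng.random((m, n)) < 0.6          # 60% of entries observed
Y = np.where(mask, X_true, 0.0)

# Iterative singular-value thresholding with a data-consistency step:
# shrink the spectrum to promote low rank, then restore known entries.
X = Y.copy()
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U @ np.diag(np.maximum(s - 0.1, 0.0)) @ Vt   # shrink singular values
    X[mask] = Y[mask]                                 # re-impose observations

err = np.linalg.norm((X - X_true)[~mask]) / np.linalg.norm(X_true[~mask])
print(f"relative error on missing entries: {err:.3f}")
```

The tensor version additionally exploits spectral and temporal correlations by unfolding the patch groups along several modes rather than a single matricization.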