Classification of Hyperspectral and LiDAR Data Using Coupled CNNs
In this paper, we propose an efficient and effective framework to fuse
hyperspectral and Light Detection And Ranging (LiDAR) data using two coupled
convolutional neural networks (CNNs). One CNN is designed to learn
spectral-spatial features from hyperspectral data, and the other one is used to
capture the elevation information from LiDAR data. Both of them consist of
three convolutional layers, and the last two convolutional layers are coupled
together via a parameter sharing strategy. In the fusion phase, feature-level
and decision-level fusion methods are used simultaneously to fully integrate
these heterogeneous features. For the feature-level fusion, three
different fusion strategies are evaluated, including the concatenation
strategy, the maximization strategy, and the summation strategy. For the
decision-level fusion, a weighted summation strategy is adopted, where the
weights are determined by the classification accuracy of each output. The
proposed model is evaluated on an urban data set acquired over Houston, USA,
and a rural one captured over Trento, Italy. On the Houston data, our model can
achieve a new record overall accuracy of 96.03%. On the Trento data, it
achieves an overall accuracy of 99.12%. These results demonstrate the
effectiveness of the proposed model.
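The three feature-level fusion strategies and the accuracy-weighted decision fusion described above can be sketched in a few lines of NumPy. The array sizes, feature values, and accuracy numbers below are illustrative assumptions, not the paper's actual network or results:

```python
import numpy as np

# Hypothetical feature vectors from the two CNN branches (sizes are
# illustrative; the paper's coupled CNNs are not reproduced here).
rng = np.random.default_rng(0)
f_hsi = rng.standard_normal((4, 16))    # batch of 4, 16-dim HSI features
f_lidar = rng.standard_normal((4, 16))  # matching LiDAR features

# Feature-level fusion: the three strategies compared in the paper.
fused_concat = np.concatenate([f_hsi, f_lidar], axis=1)  # concatenation
fused_max = np.maximum(f_hsi, f_lidar)                   # maximization
fused_sum = f_hsi + f_lidar                              # summation

# Decision-level fusion: weighted sum of per-branch class probabilities,
# with weights proportional to each output's classification accuracy.
def weighted_decision_fusion(probs, accuracies):
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    return sum(wi * pi for wi, pi in zip(w, probs))

# Toy two-class probabilities and accuracies (hypothetical values).
p_hsi = np.array([[0.7, 0.3], [0.2, 0.8]])
p_lidar = np.array([[0.6, 0.4], [0.4, 0.6]])
p_fused = weighted_decision_fusion([p_hsi, p_lidar], accuracies=[0.9, 0.6])
```

Note that concatenation doubles the feature dimension while maximization and summation preserve it, which is one practical trade-off when the fused features feed a shared classifier.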
Spatial-Spectral Manifold Embedding of Hyperspectral Data
In recent years, hyperspectral imaging, also known as imaging spectroscopy,
has attracted increasing interest in the geoscience and remote sensing
community. Hyperspectral imagery is characterized by very rich spectral
information, which enables the materials of interest lying on the surface of
the Earth to be recognized more easily. However, the high spectral dimension
inevitably brings drawbacks, such as expensive data storage and transmission
and information redundancy. Therefore, to reduce the
spectral dimensionality effectively and learn more discriminative spectral
low-dimensional embedding, in this paper we propose a novel hyperspectral
embedding approach by simultaneously considering spatial and spectral
information, called spatial-spectral manifold embedding (SSME). Beyond the
pixel-wise spectral embedding approaches, SSME models the spatial and spectral
information jointly in a patch-based fashion. SSME not only learns the spectral
embedding by using the adjacency matrix obtained by similarity measurement
between spectral signatures, but also models the spatial neighbours of a target
pixel in the hyperspectral scene by sharing the same weights (or edges) in the
embedding learning process. Classification is explored as a potential
application for quantitatively evaluating the performance of the learned
embedding representations. Extensive experiments conducted on widely used hyperspectral
datasets demonstrate the superiority and effectiveness of the proposed SSME as
compared to several state-of-the-art embedding methods.
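The joint spatial-spectral graph idea can be sketched with a classical Laplacian-eigenmaps-style embedding in NumPy. The Gaussian spectral kernel, the unit spatial edge weight, and the toy cube size are assumptions for illustration, not SSME's exact formulation:

```python
import numpy as np

# Toy hyperspectral cube: 5x5 pixels, 10 bands (illustrative sizes).
rng = np.random.default_rng(1)
H, W_img, B = 5, 5, 10
cube = rng.random((H, W_img, B))
X = cube.reshape(-1, B)                      # pixels as rows
n = X.shape[0]

# Spectral adjacency: Gaussian kernel on pairwise spectral distances.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())

# Spatial term: 4-neighbours in the image grid share an edge weight,
# loosely mirroring SSME's joint spatial-spectral graph.
for r in range(H):
    for c in range(W_img):
        i = r * W_img + c
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < H and cc < W_img:
                j = rr * W_img + cc
                W[i, j] = W[j, i] = 1.0

# Laplacian eigenmaps: embed with the eigenvectors of L = D - W that
# belong to the smallest nonzero eigenvalues.
D = np.diag(W.sum(1))
L = D - W
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:3]   # 2-D embedding; column 0 is the trivial vector
```

The resulting rows of `embedding` are low-dimensional coordinates in which spectrally and spatially similar pixels are placed close together, which is the property a downstream classifier then exploits.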
Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning
Extensive attention has been paid to enhancing the spatial resolution of
hyperspectral (HS) images with the aid of multispectral (MS) images in remote
sensing. However, the ability to fuse HS and MS images remains limited,
particularly in large-scale scenes, due to the limited acquisition of HS
images. Alternatively, we super-resolve MS images in the spectral domain by
means of partially overlapped HS images, yielding a novel and promising
topic: spectral superresolution (SSR) of MS imagery. This is a challenging and
less investigated task due to its high ill-posedness as an inverse imaging problem. To
this end, we develop a simple but effective method, called joint sparse and
low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning
low-rank HS-MS dictionary pairs from overlapped regions. J-SLoL infers and
recovers the unknown hyperspectral signals over a larger coverage by sparse
coding on the learned dictionary pair. Furthermore, we validate the SSR
performance on three HS-MS datasets (two for classification and one for
unmixing) in terms of reconstruction, classification, and unmixing by comparing
with several existing state-of-the-art baselines, showing the effectiveness and
superiority of the proposed J-SLoL algorithm. Moreover, the codes and
datasets will be made available at
https://github.com/danfenghong/IEEE_TGRS_J-SLoL, contributing to the RS
community.
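The coupled-dictionary principle behind J-SLoL (sparse codes estimated on the MS dictionary are reused with the HS dictionary to recover the missing spectral bands) can be sketched as follows. The synthetic dictionaries and the plain orthogonal-matching-pursuit coder are stand-ins for the paper's joint sparse and low-rank learning, chosen only to make the idea concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
bands_hs, bands_ms, atoms = 30, 4, 8

# Coupled dictionary pair. In J-SLoL these are learned jointly from the
# overlapped HS-MS region; here they are synthetic for illustration.
D_hs = rng.standard_normal((bands_hs, atoms))
D_ms = rng.standard_normal((bands_ms, atoms))

def omp(D, y, k):
    """Greedy sparse coding (orthogonal matching pursuit) with k atoms."""
    residual, support = y.copy(), []
    code = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    code[support] = coef
    return code

# An MS pixel generated by a 2-sparse code; estimate the code on the MS
# dictionary, then synthesize the HS counterpart with the HS dictionary.
true_code = np.zeros(atoms)
true_code[[1, 5]] = [0.8, -0.5]
y_ms = D_ms @ true_code
code = omp(D_ms, y_ms, k=2)
hs_estimate = D_hs @ code
```

The key point is that the code is shared between the two dictionaries: sparsity is enforced in the low-dimensional MS domain, and the high-dimensional HS spectrum is reconstructed from the same coefficients.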
Coupled Convolutional Neural Network with Adaptive Response Function Learning for Unsupervised Hyperspectral Super-Resolution
Due to the limitations of hyperspectral imaging systems, hyperspectral
imagery (HSI) often suffers from poor spatial resolution, thus hampering many
applications of the imagery. Hyperspectral super-resolution refers to fusing
an HSI with a multispectral image (MSI) to generate an image with both high spatial and high spectral
resolutions. Recently, several new methods have been proposed to solve this
fusion problem, and most of these methods assume that prior information about
the Point Spread Function (PSF) and Spectral Response Function (SRF) is known.
However, in practice, this information is often limited or unavailable. In this
work, we propose HyCoNet, an unsupervised deep learning-based fusion method
that solves the HSI-MSI fusion problem without prior PSF and SRF information.
HyCoNet consists of three coupled autoencoder nets in which the
HSI and MSI are unmixed into endmembers and abundances based on the linear
unmixing model. Two special convolutional layers are designed to act as a
bridge that coordinates with the three autoencoder nets, and the PSF and SRF
parameters are learned adaptively in the two convolution layers during the
training process. Furthermore, driven by the joint loss function, the proposed
method is straightforward and easily implemented in an end-to-end training
manner. The experiments performed in the study demonstrate that the proposed
method performs well and produces robust results for different datasets and
arbitrary PSFs and SRFs.
Novel intelligent spatiotemporal grid earthquake early-warning model
The integration analysis of multi-type geospatial information poses challenges to existing spatiotemporal data organization models and deep-learning-based analysis models. For earthquake early warning, this study proposes a novel intelligent spatiotemporal grid model based on GeoSOT (SGMG-EEW) for feature fusion of multi-type geospatial data. This model includes a seismic grid sample model (SGSM) and a spatiotemporal grid model based on a three-dimensional group convolution neural network (3DGCNN-SGM). The SGSM solves the problem that layers of different data types cannot form an ensemble with a consistent data structure, and it transforms the grid representation of data into grid samples for deep learning. The 3DGCNN-SGM is the first application of group convolution in the deep learning of multi-source geographic information data. It avoids direct superposition calculation of data between different layers, which may negatively affect the results of the deep learning analysis model. In this study, taking atmospheric temperature anomaly and historical earthquake precursory data from Japan as an example, an earthquake early-warning verification experiment was conducted based on the proposed SGMG-EEW. Five control groups were designed: atmospheric temperature anomaly data only, historical earthquake data only, a non-group convolution control group, a support vector machine control group, and a seismic statistical analysis control group. The results showed that the proposed SGSM is not only compatible with the expression of a single type of spatiotemporal data but can also support multiple types of spatiotemporal data, forming a deep-learning-oriented data structure. Compared with the traditional deep learning model, the proposed 3DGCNN-SGM is more suitable for the integration analysis of multiple types of spatiotemporal data.
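The grouped-convolution idea, in which channels from different data sources are convolved in separate groups rather than summed together, can be sketched in one dimension. The function and channel layout below are illustrative only, not the 3DGCNN-SGM architecture:

```python
import numpy as np

def grouped_conv1d(x, kernels, groups):
    """x: (channels, length); one kernel per group; 'valid' convolution.
    Channels are summed only within their own group, so the groups
    (e.g. different data sources) never mix."""
    c, _ = x.shape
    per = c // groups
    outs = []
    for g in range(groups):
        seg = x[g * per:(g + 1) * per]   # this group's channels
        k = kernels[g]
        out = sum(np.convolve(seg[i], k, mode="valid") for i in range(per))
        outs.append(out)
    return np.stack(outs)

rng = np.random.default_rng(4)
# Hypothetical layout: 2 temperature-anomaly channels + 2 seismic channels.
x = rng.random((4, 10))
kernels = [np.ones(3) / 3, np.ones(3) / 3]   # one smoothing kernel per group
y = grouped_conv1d(x, kernels, groups=2)     # shape (2, 8)
```

Because each output row depends only on its own group's input channels, perturbing the seismic channels leaves the temperature output untouched, which is exactly the cross-layer superposition the abstract says group convolution avoids.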