Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification
This paper proposes a novel deep learning framework named
bidirectional-convolutional long short-term memory (Bi-CLSTM) network to
automatically learn the spectral-spatial feature from hyperspectral images
(HSIs). In the network, the issue of spectral feature extraction is considered
as a sequence learning problem, and a recurrent connection operator across the
spectral domain is used to address it. Meanwhile, inspired by the widely used
convolutional neural network (CNN), a convolution operator across the spatial
domain is incorporated into the network to extract the spatial feature.
In addition, to sufficiently capture the spectral information, a bidirectional
recurrent connection is proposed. In the classification phase, the learned
features are concatenated into a vector and fed to a softmax classifier via a
fully-connected operator. To validate the effectiveness of the proposed
Bi-CLSTM framework, we compare it with several state-of-the-art methods,
including the CNN framework, on three widely used HSIs. The obtained results
show that Bi-CLSTM improves classification performance compared with the
other methods.
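The core idea above, treating a pixel's spectrum as a sequence and scanning it in both directions, can be sketched in plain Python. This is a toy Elman-style recurrence with shared scalar weights rather than the paper's convolutional LSTM cells; `w_in`, `w_rec`, and `hidden_size` are illustrative assumptions, not values from the paper.

```python
import math

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def recurrent_pass(spectrum, w_in, w_rec, hidden_size):
    """Run a simple Elman-style recurrence across spectral bands."""
    h = [0.0] * hidden_size
    for band in spectrum:  # one recurrent step per spectral band
        h = tanh_vec([w_in * band + w_rec * h_i for h_i in h])
    return h

def bidirectional_spectral_feature(spectrum, w_in=0.5, w_rec=0.3, hidden_size=4):
    """Concatenate a forward and a backward pass over the spectrum,
    mirroring the bidirectional recurrent connection described above."""
    fwd = recurrent_pass(spectrum, w_in, w_rec, hidden_size)
    bwd = recurrent_pass(list(reversed(spectrum)), w_in, w_rec, hidden_size)
    return fwd + bwd  # feature vector of length 2 * hidden_size

pixel_spectrum = [0.12, 0.34, 0.56, 0.41, 0.22]  # toy 5-band pixel
feat = bidirectional_spectral_feature(pixel_spectrum)
print(len(feat))  # 8
```

In the full model the concatenated feature would then pass through a fully-connected layer and a softmax classifier, as the abstract describes.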
Hybrid CNN Bi-LSTM neural network for Hyperspectral image classification
Hyperspectral images have drawn the attention of researchers because they
are complex to classify: the relation between the materials in a scene and
the spectral information recorded in an HSI is nonlinear. Deep learning
methods have shown superiority over traditional machine learning methods in
learning this nonlinearity. The use of a 3-D CNN along with a 2-D CNN has
shown great success in learning spatial and spectral features. However, such
models use a comparatively large number of parameters and are not effective
at learning inter-layer information. Hence, this paper proposes a neural
network combining a 3-D CNN, a 2-D CNN, and a Bi-LSTM. The performance of
this model has been tested on the Indian Pines (IP), University of Pavia
(PU), and Salinas Scene (SA) datasets, and the results are compared with
state-of-the-art deep learning-based models. The proposed model performed
best on all three datasets, achieving 99.83, 99.98, and 100 percent accuracy
on IP, PU, and SA respectively, while using only 30 percent of the trainable
parameters of the state-of-the-art model.
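To illustrate the 3-D convolution that such hybrid models rely on for joint spectral-spatial feature learning, here is a minimal valid-mode 3-D convolution in plain Python. The cube and kernel values are toy assumptions for the example, not taken from the paper.

```python
def conv3d_valid(cube, kernel):
    """Valid-mode 3-D convolution over a (bands, rows, cols) cube:
    the operation that lets a 3-D CNN learn spectral-spatial features."""
    B, R, C = len(cube), len(cube[0]), len(cube[0][0])
    kb, kr, kc = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for b in range(B - kb + 1):
        plane = []
        for r in range(R - kr + 1):
            row = []
            for c in range(C - kc + 1):
                s = 0.0  # dot product of the kernel with one sub-cube
                for db in range(kb):
                    for dr in range(kr):
                        for dc in range(kc):
                            s += cube[b + db][r + dr][c + dc] * kernel[db][dr][dc]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# toy 3x3x3 cube of ones, 2x2x2 kernel of ones -> every output is 8.0
cube = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
kernel = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]
out = conv3d_valid(cube, kernel)
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 2
```

A 2-D convolution is the special case with a kernel depth of one band, which is why stacking 2-D layers after 3-D layers reduces parameter count.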
CBANet: an end-to-end cross-band 2-D attention network for hyperspectral change detection in remote sensing
As a fundamental task in remote sensing observation of the Earth, change detection using hyperspectral images (HSIs) achieves high accuracy thanks to the combination of rich spectral and spatial information, especially for identifying land-cover variations in bi-temporal HSIs. Because they rely on the image difference, existing HSI change detection methods fail to preserve the spectral characteristics and suffer from high data dimensionality, making it extremely challenging for them to deal with changing areas of various sizes. To tackle these challenges, we propose a cross-band 2-D self-attention network (CBANet) for end-to-end HSI change detection. By embedding a cross-band feature extraction module into a 2-D spatial-spectral self-attention module, CBANet is highly capable of extracting the spectral difference of matching pixels while considering the correlation between adjacent pixels. CBANet offers three key advantages: 1) fewer parameters and high efficiency; 2) high efficacy in extracting representative spectral information from bi-temporal images; and 3) high stability and accuracy for identifying both sparse sporadic changing pixels and large changing areas whilst preserving the edges. Comprehensive experiments on three publicly available datasets have fully validated the efficacy and efficiency of the proposed methodology.
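As a rough illustration of the self-attention idea underlying CBANet (not its actual architecture), the sketch below applies scaled dot-product self-attention to a toy sequence of per-band spectral-difference features; all values are made up for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def self_attention(seq):
    """Scaled dot-product self-attention over a sequence of feature
    vectors. Here the sequence is the per-band spectral difference of a
    bi-temporal pixel pair, a toy stand-in for CBANet's 2-D
    spatial-spectral attention module."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)  # attention weights over all bands
        out.append([sum(wi * v[j] for wi, v in zip(w, seq)) for j in range(d)])
    return out

# toy spectral difference of a bi-temporal pixel: 4 bands x 3 features
diff = [[0.1, 0.0, 0.2], [0.9, 0.8, 0.7], [0.1, 0.1, 0.0], [0.8, 0.9, 0.6]]
attended = self_attention(diff)
print(len(attended), len(attended[0]))  # 4 3
```

Each output vector is a convex combination of the input band features, so strongly correlated bands reinforce each other while isolated noise is averaged down.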
Multi-scale diff-changed feature fusion network for hyperspectral image change detection
For hyperspectral image (HSI) change detection (CD), multi-scale features are usually used to construct detection models. However, existing studies only consider multi-scale features containing both changed and unchanged components, which makes it difficult to represent the subtle changes between bi-temporal HSIs at each scale. To address this problem, we propose a multi-scale diff-changed feature fusion network (MSDFFN) for HSI CD, which improves feature representation by learning the refined change components between bi-temporal HSIs at different scales. In this network, a temporal feature encoder-decoder sub-network, which combines a reduced inception module and a cross-layer attention module to highlight the significant features, is designed to extract the temporal features of HSIs. A bidirectional diff-changed feature representation module is proposed to learn the fine changed features of bi-temporal HSIs at various scales to enhance the discriminative performance for subtle changes. A multi-scale attention fusion module is developed to adaptively fuse the changed features of various scales. The proposed method can not only discover the subtle changes of bi-temporal HSIs but also improve the discriminating power for HSI CD. Experimental results on three HSI datasets show that MSDFFN outperforms several state-of-the-art methods.
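The multi-scale attention-fusion idea can be caricatured in a few lines: compute a change feature at several pooling scales, derive softmax weights from those features, and fuse them by a weighted sum. This is a loose analogue of MSDFFN's fusion module, not the published method; the scales and signals below are toy assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def scale_diff(t1, t2, k):
    """Mean absolute difference after average-pooling with window k:
    a toy 'changed feature' at one scale."""
    def pool(sig):
        return [sum(sig[i:i + k]) / k for i in range(0, len(sig) - k + 1, k)]
    p1, p2 = pool(t1), pool(t2)
    return sum(abs(a - b) for a, b in zip(p1, p2)) / len(p1)

def fused_change_score(t1, t2, scales=(1, 2, 4)):
    """Attention-style fusion: softmax weights over the per-scale change
    features, then a weighted sum of those features."""
    feats = [scale_diff(t1, t2, k) for k in scales]
    weights = softmax(feats)  # scales showing more change get more weight
    return sum(w * f for w, f in zip(weights, feats))

before = [0.2, 0.2, 0.3, 0.3, 0.2, 0.2, 0.3, 0.3]  # toy band profile at t1
after  = [0.2, 0.2, 0.9, 0.9, 0.2, 0.2, 0.3, 0.3]  # same pixel at t2
print(fused_change_score(before, after) > fused_change_score(before, before))
```

A real network learns the per-scale features and the attention weights jointly; here both are hand-computed purely to show the data flow.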
Hyperspectral Image Classification -- Traditional to Deep Models: A Survey for Future Prospects
Hyperspectral Imaging (HSI) has been extensively utilized in many real-life
applications because it benefits from the detailed spectral information
contained in each pixel. Notably, the complex characteristics of HSI data,
i.e., the nonlinear relation between the captured spectral information and
the corresponding object, make accurate classification challenging for
traditional methods. In the last few years, Deep Learning (DL) has been
substantiated as a powerful feature extractor that effectively addresses the
nonlinear problems that appear in a number of computer vision tasks. This
has prompted the deployment of DL for HSI classification (HSIC), which has
shown good performance. This survey presents a systematic overview of DL for
HSIC and compares state-of-the-art strategies in the field. First, we
summarize the main challenges that traditional machine learning faces in
HSIC and then show how DL addresses these problems. The survey breaks down
the state-of-the-art DL frameworks into spectral-feature, spatial-feature,
and joint spectral-spatial-feature approaches to systematically analyze
their achievements, as well as future research directions, for HSIC.
Moreover, we consider the fact that DL requires a large number of labeled
training examples, whereas acquiring such numbers for HSIC is challenging in
terms of time and cost. Therefore, this survey also discusses strategies to
improve the generalization performance of DL models, which can provide some
future guidelines.
Bidirectional recurrent imputation and abundance estimation of LULC classes with MODIS multispectral time series and geo-topographic and climatic data
Remotely sensed data are dominated by mixed Land Use and Land Cover (LULC)
types. Spectral unmixing (SU) is a key technique that disentangles mixed pixels
into constituent LULC types and their abundance fractions. While existing
studies on Deep Learning (DL) for SU typically focus on single time-step
hyperspectral (HS) or multispectral (MS) data, our work pioneers SU using MODIS
MS time series, addressing missing data with end-to-end DL models. Our approach
enhances a Long Short-Term Memory (LSTM)-based model by incorporating
geographic, topographic (geo-topographic), and climatic ancillary information.
Notably, our method eliminates the need for explicit endmember extraction,
instead learning the input-output relationship between mixed spectra and LULC
abundances through supervised learning. Experimental results demonstrate that
integrating spectral-temporal input data with geo-topographic and climatic
information significantly improves the estimation of LULC abundances in mixed
pixels. To facilitate this study, we curated a novel labeled dataset for
Andalusia (Spain) with monthly MODIS multispectral time series at 460m
resolution for 2013. Named Andalusia MultiSpectral MultiTemporal Unmixing
(Andalusia-MSMTU), this dataset provides pixel-level annotations of LULC
abundances along with ancillary information. The dataset
(https://zenodo.org/records/7752348) and code
(https://github.com/jrodriguezortega/MSMTU) are publicly available.
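The supervised input-output mapping described above, from a mixed spectrum plus ancillary features to LULC abundance fractions, can be sketched as a single linear layer with a softmax output, which enforces the non-negativity and sum-to-one constraints on abundances. The weights below are illustrative, not trained, and this sketch is far simpler than the paper's LSTM-based model.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def predict_abundances(spectrum, ancillary, weights, bias):
    """Map a mixed spectrum plus geo-topographic/climatic ancillary
    features to LULC abundance fractions. The softmax output guarantees
    non-negative fractions that sum to one; no explicit endmember
    extraction is needed."""
    x = spectrum + ancillary  # concatenate spectral and ancillary inputs
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

# toy setup: 4-band spectrum + 2 ancillary features -> 3 LULC classes
spectrum = [0.2, 0.4, 0.5, 0.3]
ancillary = [0.7, 0.1]  # e.g. normalized elevation and precipitation
weights = [[0.5, -0.2, 0.1, 0.3, 0.2, 0.0],    # hypothetical, untrained
           [-0.1, 0.4, 0.2, -0.3, 0.0, 0.5],
           [0.2, 0.1, -0.4, 0.2, -0.5, 0.1]]
bias = [0.0, 0.1, -0.1]
ab = predict_abundances(spectrum, ancillary, weights, bias)
print(len(ab), round(sum(ab), 6))  # 3 1.0
```

In the actual models the monthly MODIS time series would be fed step by step through the LSTM before this output layer; the softmax head is the part that makes the predictions valid abundance fractions.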