Capsule Networks for Hyperspectral Image Classification
Convolutional neural networks (CNNs) have recently exhibited excellent performance in hyperspectral image classification tasks. However, straightforward CNN-based network architectures still find obstacles when effectively exploiting the relationships between hyperspectral imaging (HSI) features in the spectral-spatial domain, which is a key factor in dealing with the high level of complexity present in remotely sensed HSI data. Although deeper architectures try to mitigate these limitations, they also face challenges with the convergence of the network parameters, which eventually limits classification performance under highly demanding scenarios. In this paper, we propose a new CNN architecture based on spectral-spatial capsule networks in order to achieve highly accurate classification of HSIs while significantly reducing the network design complexity. Specifically, based on Hinton's capsule networks, we develop a CNN model extension that redefines the concept of capsule units to become spectral-spatial units specialized in classifying remotely sensed HSI data. The proposed model is composed of several building blocks, called spectral-spatial capsules, which are able to learn HSI spectral-spatial features considering their corresponding spatial positions in the scene, their associated spectral signatures, and also their possible transformations. Our experiments, conducted using five well-known HSI data sets and several state-of-the-art classification methods, reveal that our HSI classification approach based on spectral-spatial capsules provides competitive advantages in terms of both classification accuracy and computational time.
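Capsule units of the kind this abstract builds on use the "squash" nonlinearity from Sabour and Hinton's dynamic-routing paper, which rescales a capsule's pose vector so that its length behaves like a probability. A minimal NumPy sketch of that generic nonlinearity (not the paper's actual spectral-spatial architecture):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity (Sabour et al., 2017):
    shrinks short vectors toward zero and long vectors toward unit
    length, so a capsule's length can be read as a probability."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# a batch of 3 capsules, each an 8-dimensional pose vector
caps = np.random.randn(3, 8)
v = squash(caps)
lengths = np.linalg.norm(v, axis=-1)
print(lengths)  # every length lies in [0, 1)
```

In a spectral-spatial capsule model, each such vector would encode a material's spectral signature and its spatial transformations, with the length expressing class presence.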
Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review
Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops along two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we want to target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.
DALF: An AI Enabled Adversarial Framework for Classification of Hyperspectral Images
Hyperspectral image classification is a very complex and challenging process. With deep neural networks such as Convolutional Neural Networks (CNNs) with explicit dimensionality reduction, the capability of the classifier is greatly increased; however, there remains the problem of insufficient training samples. In this paper, we overcome this problem by proposing an Artificial Intelligence (AI) based framework named Deep Adversarial Learning Framework (DALF) that exploits a deep autoencoder for dimensionality reduction and a Generative Adversarial Network (GAN) for generating new Hyperspectral Imaging (HSI) samples, which are verified by a discriminator in a non-cooperative game setting, alongside a classifier. A Convolutional Neural Network (CNN) is used for both the generator and the discriminator, while the classifier role is played by a Support Vector Machine (SVM) and a Neural Network (NN). We propose an algorithm named Generative Model based Hybrid Approach for HSI Classification (GMHA-HSIC), which drives the functionality of the proposed framework. The success of DALF in accurate classification depends largely on the synthesis and labelling of spectra on a regular basis. Synthetic samples, generated through an iterative process and verified by the discriminator, yield useful spectra. By training the GAN with the associated deep learning models, the framework improves classification performance. Our experimental results reveal that the proposed framework has the potential to improve the state of the art, besides providing an effective data augmentation strategy.
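The generate-then-verify loop described above can be illustrated with deliberately simplified stand-ins: `generator` and `discriminator` below are toy placeholders (a perturbed class mean and a z-score threshold), not the CNN generator and discriminator that DALF actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "real" spectra: 20 samples with 16 bands each
real = rng.normal(loc=0.5, scale=0.05, size=(20, 16))
mean, std = real.mean(axis=0), real.std(axis=0)

def generator(n):
    # placeholder generator: perturb the class mean
    # (DALF uses a trained CNN generator instead)
    return mean + rng.normal(scale=std, size=(n, 16))

def discriminator(x, tol=3.0):
    # placeholder verifier: accept samples whose per-band z-score
    # stays close to the real distribution
    z = np.abs((x - mean) / (std + 1e-8))
    return z.max(axis=1) < tol

candidates = generator(100)
accepted = candidates[discriminator(candidates)]
print(len(accepted), "of 100 synthetic spectra accepted for augmentation")
```

The accepted spectra would then be labelled and appended to the SVM/NN training set, which is the data-augmentation role the framework assigns to the GAN.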
SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers
Hyperspectral (HS) images are characterized by approximately contiguous spectral information, enabling the fine identification of materials by capturing subtle spectral discrepancies. Owing to their excellent local contextual modeling ability, convolutional neural networks (CNNs) have proven to be powerful feature extractors in HS image classification. However, CNNs fail to mine and represent the sequence attributes of spectral signatures well due to the limitations of their inherent network backbone. To solve this issue, we rethink HS image classification from a sequential perspective with transformers, and propose a novel backbone network called SpectralFormer. Beyond the band-wise representations of classic transformers, SpectralFormer is capable of learning spectrally local sequence information from neighboring bands of HS images, yielding group-wise spectral embeddings. More significantly, to reduce the possibility of losing valuable information in the layer-wise propagation process, we devise a cross-layer skip connection to convey memory-like components from shallow to deep layers by adaptively learning to fuse "soft" residuals across layers. It is worth noting that the proposed SpectralFormer is a highly flexible backbone network, applicable to both pixel- and patch-wise inputs. We evaluate the classification performance of the proposed SpectralFormer on three HS datasets through extensive experiments, showing its superiority over classic transformers and a significant improvement over state-of-the-art backbone networks. The code of this work will be available at https://github.com/danfenghong/IEEE_TGRS_SpectralFormer for the sake of reproducibility.
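The group-wise spectral embedding idea (tokenizing each band together with its neighbors rather than in isolation) can be sketched as follows; `groupwise_spectral_embed`, its `group_size`, and the random projection `W` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def groupwise_spectral_embed(spectrum, group_size=3, d_model=8, W=None):
    """Embed each band together with its spectral neighbors
    (zero-padded at the edges), producing one token per band.
    W stands in for a learned projection matrix."""
    b = len(spectrum)
    pad = group_size // 2
    padded = np.pad(spectrum, pad)
    # overlapping windows of neighboring bands: shape (b, group_size)
    groups = np.stack([padded[i:i + group_size] for i in range(b)])
    if W is None:
        W = rng.normal(size=(group_size, d_model))
    return groups @ W  # (b, d_model): one embedding per band

spectrum = rng.normal(size=(32,))          # a 32-band pixel vector
tokens = groupwise_spectral_embed(spectrum)
print(tokens.shape)  # (32, 8)
```

The resulting token sequence is what a transformer encoder would consume, with each token already carrying locally contextual spectral information instead of a single band value.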
Multiscale spatial-spectral convolutional network with image-based framework for hyperspectral imagery classification
Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years due to their detailed representation of features. However, most CNN-based HSI classification methods use image patches as classifier input, which limits the range of usable spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates both fused features from multiple receptive fields and multiscale spatial features at different levels. The fused features are extracted using a lightweight block called the multiple receptive field feature block (MRFF), which contains various types of dilated convolution. By fusing multiple receptive field features and multiscale spatial features, HyMSCN achieves a comprehensive feature representation for classification. Experimental results on three real hyperspectral images demonstrate the efficiency of the proposed framework. The proposed method also achieves superior performance for HSI classification.
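The dilated convolutions underlying such multi-receptive-field blocks can be illustrated in one dimension: a kernel of size k with dilation d covers d·(k−1)+1 input positions, so summing several dilation rates fuses several receptive fields. The helper below is a generic NumPy sketch, not the MRFF block itself:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Same'-padded 1-D dilated convolution (cross-correlation),
    the building block behind multi-receptive-field extraction."""
    k = len(kernel)
    rf = dilation * (k - 1) + 1          # effective receptive field
    pad = rf // 2
    xp = np.pad(x, pad)
    # each tap reads the input at a stride of `dilation`
    taps = [xp[i * dilation : i * dilation + len(x)] for i in range(k)]
    return sum(w * t for w, t in zip(kernel, taps))

x = np.arange(8, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])

for d in (1, 2, 3):
    rf = d * (len(kernel) - 1) + 1
    print(f"dilation={d}: receptive field of {rf} input positions")

# fuse features from several receptive fields, MRFF-style
fused = sum(dilated_conv1d(x, kernel, d) for d in (1, 2, 3))
print(fused)
```

Stacking or summing such branches gives each output position access to several neighborhood sizes at once, which is the "multiple receptive field" fusion the abstract describes, extended to 2-D in the actual network.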