11 research outputs found

    PCA-Domain Fused Singular Spectral Analysis for Fast and Noise-Robust Spectral-Spatial Feature Mining in Hyperspectral Classification

    Principal component analysis (PCA) and 2-D singular spectral analysis (2DSSA) are widely used for spectral- and spatial-domain feature extraction in hyperspectral images (HSIs). However, PCA on its own is of limited efficacy when no spatial information is incorporated, while 2DSSA can extract spatial information but at a high computational cost. We therefore propose in this letter a PCA-domain 2DSSA approach for spectral-spatial feature mining in HSI. Specifically, PCA and its variant, folded PCA (FPCA), are fused with 2DSSA, as FPCA can extract both global and local spectral features. By applying 2DSSA to only a small number of PCA components, the overall computational cost is significantly reduced while the discrimination ability of the features is preserved. In addition, with the effective fusion of spectral and spatial features, our approach works well on the uncorrected dataset, without removing the noisy and water-absorption bands, even with a small number of training samples. Experiments on two publicly available datasets fully validate the superiority of the proposed approach over several state-of-the-art methods and deep learning models.
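
    As an illustration of the pipeline described above, the Python sketch below applies PCA for spectral reduction and a simplified 2DSSA for spatial feature extraction on the leading components. The window size, rank, and number of retained components are illustrative assumptions, and the FPCA branch is omitted.

    import numpy as np
    from sklearn.decomposition import PCA

    def ssa2d(img, win=5, rank=3):
        # Simplified 2DSSA: embed every win x win patch of the image as one
        # column of a trajectory matrix, keep the top `rank` singular
        # components, and reconstruct by averaging the overlapping patches
        # (the 2-D analogue of diagonal averaging).
        H, W = img.shape
        h, w = H - win + 1, W - win + 1
        offsets = [(i, j) for i in range(win) for j in range(win)]
        traj = np.stack([img[i:i + h, j:j + w].ravel() for i, j in offsets])
        U, s, Vt = np.linalg.svd(traj, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        out, cnt = np.zeros((H, W)), np.zeros((H, W))
        for k, (i, j) in enumerate(offsets):
            out[i:i + h, j:j + w] += low[k].reshape(h, w)
            cnt[i:i + h, j:j + w] += 1
        return out / cnt

    # Toy cube standing in for an HSI; retaining 4 components is an assumption.
    cube = np.random.rand(64, 64, 200)
    H, W, B = cube.shape
    pcs = PCA(n_components=4).fit_transform(cube.reshape(-1, B)).reshape(H, W, 4)
    feats = np.stack([ssa2d(pcs[..., c]) for c in range(4)], axis=-1)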

    Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification

    The representation power of convolutional neural network (CNN) models for hyperspectral image (HSI) analysis is in practice limited by the available amount of labeled samples, which is often insufficient to sustain deep networks with many parameters. We propose a novel approach to boost the network representation power with a two-stream 2-D CNN architecture. The proposed method simultaneously extracts spectral features and local and global spatial features with two 2-D CNN networks, and makes use of channel correlations to identify the most informative features. Moreover, we propose a layer-specific regularization and a smooth normalization fusion scheme to adaptively learn the fusion weights for the spectral-spatial features from the two parallel streams. An important asset of our model is that feature extraction, fusion, and classification are trained simultaneously with the same cost function. Experimental results on several hyperspectral datasets demonstrate the efficacy of the proposed method compared with state-of-the-art methods in the field.
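
    A minimal PyTorch sketch of the two-stream idea follows: a 1x1-convolution stream for spectral features, a 3x3-convolution stream for spatial features, and a learned softmax weighting that fuses them before a shared classifier trained with a single loss. Layer widths and the fusion form are assumptions, not the paper's exact architecture, and the layer-specific regularization is omitted.

    import torch
    import torch.nn as nn

    class TwoStreamCNN(nn.Module):
        def __init__(self, bands, n_classes):
            super().__init__()
            # Spectral stream: 1x1 convolutions act on the band dimension only.
            self.spectral = nn.Sequential(
                nn.Conv2d(bands, 64, 1), nn.ReLU(),
                nn.Conv2d(64, 64, 1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            # Spatial stream: 3x3 convolutions capture the neighbourhood.
            self.spatial = nn.Sequential(
                nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            self.fusion_logits = nn.Parameter(torch.zeros(2))  # learned fusion weights
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                         # x: (N, bands, patch, patch)
            f_spec = self.spectral(x).flatten(1)
            f_spat = self.spatial(x).flatten(1)
            w = torch.softmax(self.fusion_logits, 0)  # smooth, normalized weights
            return self.classifier(w[0] * f_spec + w[1] * f_spat)

    # One cost function trains extraction, fusion, and classification together.
    model = TwoStreamCNN(bands=103, n_classes=9)
    logits = model(torch.randn(8, 103, 9, 9))
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 9, (8,)))
    loss.backward()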

    Spectral-Spatial Graph Reasoning Network for Hyperspectral Image Classification

    In this paper, we propose a spectral-spatial graph reasoning network (SSGRN) for hyperspectral image (HSI) classification. Concretely, the network contains two parts, a spatial graph reasoning subnetwork (SAGRN) and a spectral graph reasoning subnetwork (SEGRN), which capture the spatial and spectral graph contexts, respectively. Unlike previous approaches that implement superpixel segmentation on the original image or attempt to obtain category features under the guidance of the label image, we perform superpixel segmentation on intermediate features of the network to adaptively produce homogeneous regions and obtain effective descriptors. A similar idea is adopted in the spectral part, where the channels are aggregated to generate spectral descriptors for capturing spectral graph contexts. All graph reasoning in SAGRN and SEGRN is performed through graph convolution, and to guarantee the global perception ability of the proposed methods, all adjacency matrices used in graph reasoning are obtained with a non-local self-attention mechanism. Finally, by combining the extracted spatial and spectral graph contexts, the SSGRN achieves highly accurate classification. Extensive quantitative and qualitative experiments on three public HSI benchmarks demonstrate the competitiveness of the proposed methods compared with other state-of-the-art approaches.
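
    The sketch below illustrates one graph-reasoning step of the kind described: node descriptors (e.g., superpixel or channel descriptors) are related through a non-local self-attention adjacency matrix and then updated by a graph convolution. The construction of descriptors from intermediate features via superpixel segmentation is not reproduced, and all dimensions are illustrative.

    import torch
    import torch.nn as nn

    class GraphReasoning(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.query = nn.Linear(dim, dim, bias=False)
            self.key = nn.Linear(dim, dim, bias=False)
            self.gconv = nn.Linear(dim, dim)

        def forward(self, nodes):                    # nodes: (n_nodes, dim)
            # Non-local self-attention produces a dense adjacency matrix.
            affinity = self.query(nodes) @ self.key(nodes).t()
            adj = torch.softmax(affinity / nodes.shape[-1] ** 0.5, dim=-1)
            # Graph convolution: aggregate neighbours, then transform.
            return torch.relu(self.gconv(adj @ nodes))

    # Hypothetical descriptors: 50 superpixel regions with 128-D features.
    regions = torch.randn(50, 128)
    contexts = GraphReasoning(128)(regions)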

    Large kernel spectral and spatial attention networks for hyperspectral image classification.

    Long-range spectral and spatial dependencies have been widely demonstrated to be essential for hyperspectral image (HSI) classification. Owing to the superior ability of transformers to exploit long-range representations, transformer-based methods have exhibited enormous potential. However, existing transformer-based approaches still face two crucial issues that hinder further improvement of HSI classification: 1) treating HSI as 1-D sequences neglects its spatial properties, and 2) the dependence between spectral and spatial information is not fully considered. To tackle these problems, a large kernel spectral-spatial attention network (LKSSAN) is proposed to capture the long-range 3-D properties of HSI, inspired by the visual attention network (VAN). Specifically, a spectral-spatial attention module is first proposed to effectively exploit discriminative 3-D spectral-spatial features while keeping the 3-D structure of HSI. This module introduces large kernel attention (LKA) and a convolution feed-forward (CFF) block to flexibly emphasize, model, and exploit long-range 3-D feature dependencies at a lower computational cost. Finally, the features from the spectral-spatial attention module are fed into the classification module to optimize the 3-D spectral-spatial representation. To verify the effectiveness of the proposed classification method, experiments are conducted on four widely used HSI datasets and demonstrate that LKSSAN is indeed an effective way to extract long-range 3-D features from HSI.
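
    A minimal sketch of the large kernel attention building block is given below, following the VAN decomposition of a 21x21 receptive field into a 5x5 depth-wise convolution, a 7x7 depth-wise dilated convolution, and a 1x1 convolution; the paper's 3-D spectral-spatial arrangement and the CFF block are not reproduced here, and the feature-map sizes are assumptions.

    import torch
    import torch.nn as nn

    class LargeKernelAttention(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # 21x21 receptive field decomposed to keep the computational cost low.
            self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
            self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                        dilation=3, groups=channels)
            self.pw = nn.Conv2d(channels, channels, 1)

        def forward(self, x):                 # x: (N, C, H, W)
            attn = self.pw(self.dw_dilated(self.dw(x)))
            return x * attn                   # long-range attention re-weights x

    # Hypothetical feature map, e.g. spectrally reduced HSI patches.
    y = LargeKernelAttention(32)(torch.randn(4, 32, 15, 15))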

    A robust dynamic classifier selection approach for hyperspectral images with imprecise label information

    Supervised hyperspectral image (HSI) classification relies on accurate label information. However, it is not always possible to collect perfectly accurate labels for training samples. This motivates the development of classifiers that are sufficiently robust to reasonable amounts of error in the data labels. Despite its growing importance, this aspect has not yet been sufficiently studied in the literature. In this paper, we analyze the effect of erroneous sample labels on the probability distributions of the principal components of HSIs and thereby provide a statistical analysis of the resulting uncertainty in classifiers. Building on the theory of imprecise probabilities, we develop a novel robust dynamic classifier selection (R-DCS) model for data classification with erroneous labels. In particular, spectral and spatial features are extracted from the HSI to construct two individual classifiers for the dynamic selection. The proposed R-DCS model is based on the robustness of the classifiers’ predictions: the extent to which a classifier can be altered without changing its prediction. We provide three possible selection strategies for the proposed model with different computational complexities and apply them to three benchmark datasets. Experimental results demonstrate that the proposed model outperforms the individual classifiers it selects from and is more robust to label errors than widely adopted approaches.
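
    A minimal sketch of the dynamic-selection step follows: two probability-output classifiers are trained on spectral and spatial features, and for each test sample the prediction of the classifier with the larger top-two probability margin is kept. The margin is only a stand-in for the paper's imprecise-probability robustness measure, and the feature extraction and the three selection strategies are not reproduced; all data below are toy placeholders.

    import numpy as np
    from sklearn.svm import SVC

    def dcs_predict(clf_spec, clf_spat, X_spec, X_spat):
        # Per sample, keep the prediction of the more confident classifier.
        p_spec = clf_spec.predict_proba(X_spec)
        p_spat = clf_spat.predict_proba(X_spat)
        def margin(p):                        # top-1 minus top-2 probability
            s = np.sort(p, axis=1)
            return s[:, -1] - s[:, -2]
        pred_spec = clf_spec.classes_[p_spec.argmax(axis=1)]
        pred_spat = clf_spat.classes_[p_spat.argmax(axis=1)]
        return np.where(margin(p_spec) >= margin(p_spat), pred_spec, pred_spat)

    # Toy data standing in for spectral and spatial feature sets.
    rng = np.random.default_rng(0)
    X_spec, X_spat = rng.normal(size=(100, 30)), rng.normal(size=(100, 10))
    y = rng.integers(0, 3, 100)
    clf_spec = SVC(probability=True).fit(X_spec[:80], y[:80])
    clf_spat = SVC(probability=True).fit(X_spat[:80], y[:80])
    labels = dcs_predict(clf_spec, clf_spat, X_spec[80:], X_spat[80:])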

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new, stimulating problems in the spatial-spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can be combined with deep learning architectures to solve specific tasks in different application fields; on the other hand, it targets machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.