
    Multiscale spatial-spectral convolutional network with image-based framework for hyperspectral imagery classification.

    Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years due to their detailed feature representations. However, most CNN-based HSI classification methods use image patches as the classifier input, which limits the range of usable spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates fused multiple-receptive-field features with multiscale spatial features at different levels. The fused features are extracted by a lightweight block, the multiple receptive field feature block (MRFF), which contains several types of dilated convolution. By fusing multiple-receptive-field features and multiscale spatial features, HyMSCN obtains a comprehensive feature representation for classification. Experimental results on three real hyperspectral images demonstrate the efficiency of the proposed framework, and the proposed method also achieves superior HSI classification performance.
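
    The MRFF block is only described at a high level in this abstract. As a rough, hypothetical sketch of the underlying idea, the following PyTorch snippet fuses features from parallel dilated convolutions with different dilation rates; the class name, dilation rates, and fusion scheme are placeholders, not the authors' implementation.

        import torch
        import torch.nn as nn

        class MultiDilationBlock(nn.Module):
            """Parallel dilated convolutions fused by a 1x1 convolution
            (illustrative stand-in for the paper's MRFF block)."""
            def __init__(self, channels, dilations=(1, 2, 4)):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(channels, channels, kernel_size=3,
                                  padding=d, dilation=d, bias=False),
                        nn.BatchNorm2d(channels),
                        nn.ReLU(inplace=True),
                    )
                    for d in dilations
                ])
                # 1x1 convolution fuses the concatenated multi-receptive-field features
                self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

            def forward(self, x):
                feats = torch.cat([branch(x) for branch in self.branches], dim=1)
                return self.fuse(feats) + x  # residual connection keeps spatial detail

        # Feature maps for a whole image tile (no patch cropping),
        # e.g. after an initial spectral embedding of the HSI bands
        x = torch.randn(1, 64, 128, 128)
        block = MultiDilationBlock(64)
        print(block(x).shape)  # torch.Size([1, 64, 128, 128])

    Operating on whole feature maps rather than cropped patches is what lets an image-based framework of this kind reuse spatial context across the scene.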

    CBANet: an end-to-end cross band 2-D attention network for hyperspectral change detection in remote sensing.

    As a fundamental task in remote sensing observation of the earth, change detection using hyperspectral images (HSIs) offers high accuracy by combining rich spectral and spatial information, especially for identifying land-cover variations in bi-temporal HSIs. However, existing HSI change detection methods that rely on the image difference fail to preserve spectral characteristics and suffer from high data dimensionality, which makes it extremely challenging for them to handle changing areas of various sizes. To tackle these challenges, we propose a cross-band 2-D self-attention network (CBANet) for end-to-end HSI change detection. By embedding a cross-band feature extraction module into a 2-D spatial-spectral self-attention module, CBANet is highly capable of extracting the spectral difference of matching pixels while considering the correlation between adjacent pixels. CBANet offers three key advantages: 1) fewer parameters and high efficiency; 2) high efficacy in extracting representative spectral information from bi-temporal images; and 3) high stability and accuracy in identifying both sparse sporadic changed pixels and large changed areas while preserving edges. Comprehensive experiments on three publicly available datasets fully validate the efficacy and efficiency of the proposed methodology.
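
    CBANet itself is not specified in code here; the sketch below only illustrates the generic idea of self-attention across spectral bands applied to a bi-temporal difference cube. All names and shapes are assumptions, and the real CBANet additionally models correlations between adjacent pixels through its cross-band feature extraction module.

        import torch
        import torch.nn as nn

        class BandSelfAttention(nn.Module):
            """Self-attention across spectral bands at each spatial location
            (a simplified stand-in for the cross-band attention idea)."""
            def __init__(self, num_bands, dim=32):
                super().__init__()
                self.embed = nn.Linear(1, dim)                     # embed each band's scalar response
                self.pos = nn.Parameter(torch.zeros(num_bands, dim))  # learned band-position encoding
                self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
                self.head = nn.Linear(dim, 1)

            def forward(self, x):                                  # x: (B, C, H, W) bi-temporal difference
                b, c, h, w = x.shape
                tokens = x.permute(0, 2, 3, 1).reshape(b * h * w, c, 1)  # one token per band
                tokens = self.embed(tokens) + self.pos
                attended, _ = self.attn(tokens, tokens, tokens)
                out = self.head(attended).reshape(b, h, w, c).permute(0, 3, 1, 2)
                return out + x                                     # residual over the input difference

        # Toy bi-temporal pair: attention runs on their per-pixel spectral difference
        t1, t2 = torch.randn(1, 30, 64, 64), torch.randn(1, 30, 64, 64)
        model = BandSelfAttention(num_bands=30)
        print(model(t1 - t2).shape)  # torch.Size([1, 30, 64, 64])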

    Application of Convolutional Neural Network in the Segmentation and Classification of High-Resolution Remote Sensing Images

    Numerous convolutional neural networks increase the classification accuracy for remote sensing scene images at the expense of model space and time complexity. This makes the models run slowly and prevents a good trade-off between accuracy and running time. In addition, the loss of deep features as the network gets deeper makes it difficult to retrieve the key features with a simple double-branch structure, which is detrimental to classifying remote sensing scene images.

    Large kernel spectral and spatial attention networks for hyperspectral image classification.

    Currently, long-range spectral and spatial dependencies have been widely demonstrated to be essential for hyperspectral image (HSI) classification. Due to the transformer's superior ability to exploit long-range representations, transformer-based methods have exhibited enormous potential. However, existing transformer-based approaches still face two crucial issues that hinder further improvement of HSI classification performance: 1) treating an HSI as a 1-D sequence neglects its spatial properties, and 2) the dependence between spectral and spatial information is not fully considered. To tackle these problems, a large kernel spectral-spatial attention network (LKSSAN) is proposed to capture the long-range 3-D properties of HSI, inspired by the visual attention network (VAN). Specifically, a spectral-spatial attention module is proposed to effectively exploit discriminative 3-D spectral-spatial features while keeping the 3-D structure of the HSI. This module introduces large kernel attention (LKA) and a convolution feed-forward (CFF) layer to flexibly emphasize, model, and exploit long-range 3-D feature dependencies at lower computational cost. Finally, the features from the spectral-spatial attention module are fed into the classification module to optimize the 3-D spectral-spatial representation. To verify the effectiveness of the proposed classification method, experiments are conducted on four widely used HSI data sets. The experiments demonstrate that LKSSAN is indeed an effective way to extract long-range 3-D features of HSI.
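
    The large kernel attention (LKA) operator referenced above originates from the visual attention network (VAN), where a large receptive field is decomposed into a depthwise convolution, a depthwise dilated convolution, and a pointwise convolution whose output gates the input. The 2-D sketch below uses the VAN default kernel sizes; LKSSAN's spectral-spatial variant presumably adapts this to 3-D HSI cubes, so treat the code as an illustration rather than the paper's module.

        import torch
        import torch.nn as nn

        class LargeKernelAttention(nn.Module):
            """Large kernel attention in the VAN style: a large receptive field is
            approximated by depthwise conv + depthwise dilated conv + pointwise conv,
            and the result gates the input feature map."""
            def __init__(self, channels):
                super().__init__()
                self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
                self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                            dilation=3, groups=channels)
                self.pw = nn.Conv2d(channels, channels, 1)

            def forward(self, x):
                attn = self.pw(self.dw_dilated(self.dw(x)))
                return attn * x  # attention map acts as a multiplicative gate

        x = torch.randn(1, 64, 32, 32)
        print(LargeKernelAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])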

    Low-Light Hyperspectral Image Enhancement

    Due to the inadequate energy captured by the hyperspectral camera sensor under poor illumination, low-light hyperspectral images (HSIs) usually suffer from low visibility, spectral distortion, and various kinds of noise. A range of HSI restoration methods have been developed, yet their effectiveness in enhancing low-light HSIs is limited. This work focuses on the low-light HSI enhancement task, which aims to reveal the spatial-spectral information hidden in darkened areas. To facilitate the development of low-light HSI processing, we collect a low-light HSI (LHSI) dataset of both indoor and outdoor scenes. Based on Laplacian pyramid decomposition and reconstruction, we develop an end-to-end data-driven low-light HSI enhancement (HSIE) approach trained on the LHSI dataset. Observing that illumination is related to the low-frequency component of an HSI while textural details are closely correlated with the high-frequency component, the proposed HSIE has two branches. The illumination enhancement branch enlightens the low-frequency component at reduced resolution, while the high-frequency refinement branch refines the high-frequency component via a predicted mask. In addition, to improve information flow and boost performance, we introduce an effective channel attention block (CAB) with residual dense connections, which serves as the basic block of the illumination enhancement branch. Experimental results on the LHSI dataset demonstrate the effectiveness and efficiency of HSIE in both quantitative assessment measures and visual effects. Classification performance on the remote sensing Indian Pines dataset shows that downstream tasks benefit from the enhanced HSI. Datasets and code are available at https://github.com/guanguanboy/HSIE
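
    As a toy illustration of the Laplacian-pyramid split that HSIE builds on, the snippet below separates an HSI cube into a half-resolution low-frequency component and a full-resolution high-frequency residual, brightens the former, and reweights the latter with a mask before reconstruction. The gamma curve and the fixed mask are placeholders for the paper's learned illumination-enhancement and refinement branches, and a box filter stands in for a Gaussian blur.

        import torch
        import torch.nn.functional as F

        def laplacian_split(x):
            """One-level Laplacian decomposition: a blurred, downsampled low-frequency
            image plus a full-resolution high-frequency residual."""
            low = F.avg_pool2d(x, kernel_size=2)  # low-frequency component, half resolution
            up = F.interpolate(low, scale_factor=2, mode='bilinear', align_corners=False)
            high = x - up                         # high-frequency residual
            return low, high

        def laplacian_merge(low, high):
            up = F.interpolate(low, scale_factor=2, mode='bilinear', align_corners=False)
            return up + high

        # Hypothetical two-branch enhancement: brighten the low-frequency component,
        # reweight the high-frequency component with a (here fixed) mask.
        hsi = torch.rand(1, 31, 128, 128)   # 31-band low-light cube in [0, 1]
        low, high = laplacian_split(hsi)
        enhanced_low = low ** 0.5            # stand-in for the illumination enhancement branch
        mask = torch.ones_like(high)         # stand-in for the predicted refinement mask
        restored = laplacian_merge(enhanced_low, mask * high)
        print(restored.shape)                # torch.Size([1, 31, 128, 128])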

    Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images

    Current deep convolutional neural networks for very-high-resolution (VHR) remote-sensing image land-cover classification often suffer from two challenges. First, the feature maps extracted by network encoders based on vanilla convolution usually contain much redundant information, which easily causes misclassification of land cover; moreover, these encoders usually require a large number of parameters and high computational costs. Second, because remote-sensing images are complex and contain many objects with large scale variance, it is difficult to use popular feature fusion modules to improve the representation ability of networks. To address these issues, we propose a dynamic convolution self-attention network (DCSA-Net) for VHR remote-sensing image land-cover classification. The proposed network has two advantages. On the one hand, we design a lightweight dynamic convolution module (LDCM) using dynamic convolution and a self-attention mechanism. This module extracts more useful image features than vanilla convolution, avoiding the negative effect of useless feature maps on land-cover classification. On the other hand, we design a context information aggregation module (CIAM) with a ladder structure to enlarge the receptive field. This module aggregates multi-scale contextual information from feature maps of different resolutions using dense connections. Experimental results show that the proposed DCSA-Net is superior to state-of-the-art networks, with higher land-cover classification accuracy, fewer parameters, and lower computational cost. The source code is publicly available.
    Funding: supported in part by the National Natural Science Foundation of China (Program No. 61871259, 62271296, 61861024), in part by the Natural Science Basic Research Program of Shaanxi (Program No. 2021JC-47), in part by the Key Research and Development Program of Shaanxi (Program No. 2022GY-436, 2021ZDLGY08-07), in part by the Natural Science Basic Research Program of Shaanxi (Program No. 2022JQ-634, 2022JQ-018), and in part by the Shaanxi Joint Laboratory of Artificial Intelligence (No. 2020SS-03).
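
    The LDCM is described only in prose; the sketch below shows the generic dynamic-convolution mechanism it builds on, where several candidate kernels are mixed per input sample by routing weights predicted from globally pooled features. Class and parameter names are hypothetical, and the actual module also incorporates a self-attention branch.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DynamicConv2d(nn.Module):
            """Dynamic convolution: K candidate kernels are mixed per sample by
            attention weights predicted from globally pooled features."""
            def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
                super().__init__()
                self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
                self.weight = nn.Parameter(0.02 * torch.randn(num_kernels, out_ch, in_ch, k, k))
                self.router = nn.Linear(in_ch, num_kernels)

            def forward(self, x):
                b, c, h, w = x.shape
                scores = F.softmax(self.router(x.mean(dim=(2, 3))), dim=-1)    # (B, K) routing weights
                kernels = torch.einsum('bn,noihw->boihw', scores, self.weight)  # per-sample kernels
                # Grouped convolution applies each sample's own kernel in a single call
                out = F.conv2d(x.reshape(1, b * c, h, w),
                               kernels.reshape(b * self.out_ch, c, self.k, self.k),
                               padding=self.k // 2, groups=b)
                return out.reshape(b, self.out_ch, h, w)

        x = torch.randn(2, 16, 64, 64)
        print(DynamicConv2d(16, 32)(x).shape)  # torch.Size([2, 32, 64, 64])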