1,084 research outputs found

    Multiscale spatial-spectral convolutional network with image-based framework for hyperspectral imagery classification.

    Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years due to their detailed representation of features. However, most CNN-based HSI classification methods use patches as classifier input, which limits the use of spatial neighbor information and reduces processing efficiency during training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates both fused multiple-receptive-field features and multiscale spatial features at different levels. The fused features are exploited using a lightweight block called the multiple receptive field feature block (MRFF), which contains various types of dilated convolution. By fusing multiple receptive field features and multiscale spatial features, HyMSCN achieves a comprehensive feature representation for classification. Experimental results on three real hyperspectral images demonstrate the efficiency of the proposed framework, and the proposed method also achieves superior performance for HSI classification.
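The MRFF block described above relies on dilated convolution to enlarge the receptive field without adding parameters. The abstract gives no implementation details, so the following is only a minimal NumPy sketch of 1D dilated convolution illustrating that idea; the function name and kernel are illustrative, not from the paper.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1D dilated convolution (cross-correlation convention,
    as in deep-learning frameworks): taps are spaced `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one layer
    out_len = len(x) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        for j in range(k):
            out[i] += x[i + j * dilation] * kernel[j]
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])

y1 = dilated_conv1d(x, k, dilation=1)      # receptive field 3
y2 = dilated_conv1d(x, k, dilation=2)      # same 3 weights, receptive field 5
```

With the same three weights, dilation 2 widens the receptive field from 3 to 5 samples, which is how stacking dilation rates yields multiple receptive fields cheaply.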

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops along two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

    Multi-Scale Hybrid Spectral Network for Feature Learning and Hyperspectral Image Classification

    Hyperspectral image (HSI) classification is an important concern in remote sensing, but it is challenging owing to the small number of labelled training samples and the high-dimensional feature space with many spectral bands. Hence, it is essential to develop a more efficient neural network architecture to improve performance on the HSI classification task. Deep learning models are contemporary techniques for pixel-based HSI classification, and deep feature extraction from both spatial and spectral channels has led to high classification accuracy. Meanwhile, the effectiveness of these spatial-spectral methods relies on the spatial dimension of every patch, and there is no feasible method to determine the best spatial dimension to take into consideration. It therefore makes better sense to retrieve spatial properties by examining different neighborhood scales in the spatial dimensions. In this context, this paper presents a multi-scale hybrid spectral convolutional neural network (MS-HybSN) model that uses three distinct multi-scale spectral-spatial patches to extract features in the spectral and spatial domains. The presented deep learning framework uses three patches of different sizes in the spatial dimension to find these possible features. A hybrid (3D-2D) convolution operation is applied to each selected patch and repeated throughout the image. To assess the effectiveness of the presented model, three openly accessible benchmark datasets (Pavia University, Indian Pines, and Salinas) and two new Indian datasets (Ahmedabad-1 and Ahmedabad-2) are used in the experimental studies. Empirically, it is demonstrated that the presented model outperforms the remaining state-of-the-art approaches in terms of classification performance.
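The multi-scale idea above amounts to cutting several differently sized spatial patches around each pixel of the HSI cube before feeding them to the 3D-2D convolution branches. As a hedged sketch only (the paper does not publish code, and the patch sizes here are illustrative), multi-scale patch extraction with zero-padding at the border can look like this:

```python
import numpy as np

def extract_patches(hsi, row, col, sizes=(5, 7, 9)):
    """Extract square spatial patches of several sizes centred on one
    pixel of an HSI cube shaped (height, width, bands), zero-padding
    at the image border so edge pixels get full-size patches."""
    patches = []
    for s in sizes:
        r = s // 2
        padded = np.pad(hsi, ((r, r), (r, r), (0, 0)))  # pad spatial dims only
        patches.append(padded[row:row + s, col:col + s, :])
    return patches

cube = np.random.rand(20, 20, 30)          # toy 30-band image
p5, p7, p9 = extract_patches(cube, 0, 0)   # corner pixel: padding kicks in
```

Each patch keeps the full spectral depth, so the three scales differ only in how much spatial context surrounds the pixel being classified.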

    Llam-Mdcnet for Detecting Remote Sensing Images of Dead Tree Clusters

    Clusters of dead trees are prone to forest fires. To maintain ecological balance and protect forests, timely detection of dead trees in forest remote sensing images using existing computer vision methods is of great significance. Remote sensing images captured by unmanned aerial vehicles (UAVs) typically suffer from several issues, e.g., mixed distribution of adjacent but different tree classes, interference from redundant information, and large scale differences among dead tree clusters, making the detection of dead tree clusters much more challenging. Therefore, based on the Multipath Dense Composite Network (MDCN), an object detection method called LLAM-MDCNet is proposed in this paper. First, a feature extraction network called the Multipath Dense Composite Network is designed. The network's multipath structure can substantially increase the extraction of low-level and semantic features to enhance its capability in information-rich regions. Next, the Longitude Latitude Attention Mechanism (LLAM), which operates along the row, column, and diagonal directions, is presented and incorporated into the feature extraction network. The multi-directional LLAM facilitates the suppression of irrelevant and redundant information and improves the representation of high-level semantic features. Lastly, an AugFPN is employed for down-sampling, yielding a more comprehensive representation of image features by combining low-level texture features with high-level semantic information. Consequently, the network's detection of dead tree cluster targets with large scale differences is improved. Furthermore, we make our collected high-quality aerial dead tree cluster dataset, containing 19,517 drone images, publicly available so that other researchers can build on this work.
Our proposed method achieved 87.25% mAP at 66 FPS on our dataset, demonstrating the effectiveness of LLAM-MDCNet for detecting dead tree cluster targets in forest remote sensing images.
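The directional attention described above pools context along rows, columns, and diagonals to re-weight a feature map. The paper's exact LLAM formulation is not given here, so the following is only a toy single-channel NumPy sketch of the general pattern (pool per direction, combine, gate with a sigmoid); all names are illustrative.

```python
import numpy as np

def directional_attention(feat):
    """Toy directional attention on a square 2-D feature map: pool along
    rows, columns, and the main diagonal, then rescale the map by a
    sigmoid gate built from the summed directional context."""
    row_ctx = feat.mean(axis=1, keepdims=True)      # (H, 1) row context
    col_ctx = feat.mean(axis=0, keepdims=True)      # (1, W) column context
    diag_ctx = np.diag(feat).mean()                 # scalar diagonal context
    logits = row_ctx + col_ctx + diag_ctx           # broadcast to (H, W)
    gate = 1.0 / (1.0 + np.exp(-logits))            # sigmoid in (0, 1)
    return feat * gate

f = np.ones((4, 4))
out = directional_attention(f)
```

Because the gate is bounded in (0, 1), uninformative (low-activation) regions are suppressed while informative ones pass through largely unchanged, which is the suppression behaviour the abstract attributes to LLAM.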

    Multi-scale Adaptive Fusion Network for Hyperspectral Image Denoising

    Removing noise and improving the visual quality of hyperspectral images (HSIs) is challenging in academia and industry. Great efforts have been made to leverage local, global, or spectral context information for HSI denoising. However, existing methods still have limitations in exploiting feature interaction among multiple scales and in preserving rich spectral structure. In view of this, we propose a novel solution that investigates HSI denoising using a Multi-scale Adaptive Fusion Network (MAFNet), which can learn the complex nonlinear mapping between clean and noisy HSIs. Two key components contribute to improving hyperspectral image denoising: a progressive multiscale information aggregation network and a co-attention fusion module. Specifically, we first generate a set of multiscale images and feed them into a coarse-fusion network to exploit the contextual texture correlation. Thereafter, a fine-fusion network exchanges information across the parallel multiscale subnetworks. Furthermore, we design a co-attention fusion module to adaptively emphasize informative features from different scales and thereby enhance the discriminative learning capability for denoising. Extensive experiments on synthetic and real HSI datasets demonstrate that the proposed MAFNet achieves better denoising performance than other state-of-the-art techniques. Our code is available at https://github.com/summitgao/MAFNet. (IEEE JSTARS 2023.)
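The co-attention fusion module above adaptively weights features from different scales before merging them. MAFNet's actual module is defined in the linked repository; as a minimal hedged sketch of the underlying idea only, per-scale feature maps (already resized to a common shape) can be fused with softmax weights derived from their global activations:

```python
import numpy as np

def coattention_fuse(feats):
    """Toy scale fusion: weight per-scale feature maps (same shape) by a
    softmax over their global average activations, then sum them."""
    scores = np.array([f.mean() for f in feats])    # one score per scale
    w = np.exp(scores - scores.max())
    w = w / w.sum()                                 # softmax over scales
    return sum(wi * f for wi, f in zip(w, feats))

a = np.ones((8, 8))          # weakly activated scale
b = 3 * np.ones((8, 8))      # strongly activated scale
fused = coattention_fuse([a, b])
```

The softmax makes the fusion adaptive: the more strongly activated scale dominates the merged map, instead of a fixed average treating all scales equally.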

    Spectral-spatial self-attention networks for hyperspectral image classification.

    This study presents a spectral-spatial self-attention network (SSSAN) for classification of hyperspectral images (HSIs), which can adaptively integrate local features with long-range dependencies related to the pixel to be classified. Specifically, it has two subnetworks. The spatial subnetwork introduces the proposed spatial self-attention module to exploit rich patch-based contextual information related to the center pixel. The spectral subnetwork introduces the proposed spectral self-attention module to exploit the long-range spectral correlation over local spectral features. The extracted spectral and spatial features are then adaptively fused for HSI classification. Experiments conducted on four HSI datasets demonstrate that the proposed network outperforms several state-of-the-art methods.
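Both subnetworks above are built on self-attention, which lets every spectral or spatial position weight every other one. The paper's exact modules are not reproduced here; the following is only a generic scaled dot-product self-attention sketch in NumPy (identity query/key/value projections for brevity), with spectral "tokens" as an assumed illustration.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention with identity projections:
    each row of x (shape (seq_len, dim)) attends to every other row."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # (seq, seq) similarities
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)     # row-wise softmax
    return attn @ x                                    # convex mix of rows

np.random.seed(0)
bands = np.random.rand(16, 8)    # 16 spectral tokens, 8-dim features
out = self_attention(bands)
```

Because each output row is a convex combination of input rows, attention captures long-range correlation across the whole sequence while staying within the range of the original features.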

    Application of Convolutional Neural Network in the Segmentation and Classification of High-Resolution Remote Sensing Images

    Numerous convolutional neural networks increase the classification accuracy for remote sensing scene images at the expense of the model's space and time complexity. This causes the model to run slowly and prevents a trade-off between model accuracy and running time. The loss of deep features as the network gets deeper makes it impossible to retrieve the key aspects with a simple double-branching structure, which harms the classification of remote sensing scene photos.