
    SSA-SiamNet: Spectral-Spatial-Wise Attention-Based Siamese Network for Hyperspectral Image Change Detection

    Deep learning methods, especially convolutional neural network (CNN)-based methods, have shown promising performance for hyperspectral image (HSI) change detection (CD). It is widely acknowledged that different spectral channels and spatial locations in input image patches may contribute differently to CD; however, existing CNN-based approaches treat them equally. To increase the accuracy of HSI CD, we propose an end-to-end Siamese CNN with a spectral-spatial-wise attention mechanism (SSA-SiamNet). The proposed SSA-SiamNet method can emphasize informative channels and locations and suppress less informative ones to refine spectral-spatial features adaptively. Moreover, in the network training phase, a weighted contrastive loss function is used to separate changed and unchanged pixels more reliably and to accelerate network convergence. SSA-SiamNet was validated using four groups of bitemporal HSIs, and its CD accuracy was found to be consistently greater than that of ten benchmark methods.
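    The abstract does not give the exact form of the weighted contrastive loss, but a minimal sketch of the standard weighted formulation it builds on might look like the following (PyTorch; the margin and the per-class weights `w_changed` and `w_unchanged` are illustrative placeholders, not values from the paper):

    ```python
    import torch

    def weighted_contrastive_loss(d, y, margin=1.0, w_changed=2.0, w_unchanged=1.0):
        """Weighted contrastive loss over distances between pixel-pair embeddings.

        d: (N,) Euclidean distances between bitemporal feature vectors.
        y: (N,) labels, 1 = changed pixel pair, 0 = unchanged.
        The class weights compensate for the typical imbalance between
        changed and unchanged pixels (the values here are assumptions).
        """
        unchanged_term = w_unchanged * (1 - y) * d.pow(2)
        changed_term = w_changed * y * torch.clamp(margin - d, min=0).pow(2)
        return (unchanged_term + changed_term).mean()
    ```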

    CBANet: an end-to-end cross-band 2-D attention network for hyperspectral change detection in remote sensing

    As a fundamental task in remote sensing observation of the Earth, change detection using hyperspectral images (HSIs) achieves high accuracy by combining rich spectral and spatial information, especially for identifying land-cover variations in bi-temporal HSIs. Existing HSI change detection methods rely on the image difference, which fails to preserve the spectral characteristics and suffers from high data dimensionality, making it extremely challenging to deal with changed areas of various sizes. To tackle these challenges, we propose a cross-band 2-D self-attention network (CBANet) for end-to-end HSI change detection. By embedding a cross-band feature extraction module into a 2-D spatial-spectral self-attention module, CBANet is highly capable of extracting the spectral difference of matching pixels while considering the correlation between adjacent pixels. CBANet has three key advantages: 1) fewer parameters and high efficiency; 2) high efficacy in extracting representative spectral information from bi-temporal images; and 3) high stability and accuracy in identifying both sparse sporadic changed pixels and large changed areas whilst preserving edges. Comprehensive experiments on three publicly available datasets fully validate the efficacy and efficiency of the proposed methodology.
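    CBANet's actual modules are defined in the paper; purely as an illustration of self-attention computed across spectral bands, the sketch below treats each band of a fixed-size patch as a token, so the attention matrix expresses band-to-band correlations (the layer sizes, scaling, and residual connection are assumptions, not CBANet's design):

    ```python
    import torch
    import torch.nn as nn

    class BandSelfAttention(nn.Module):
        """Self-attention across the spectral bands of a (B, C, H, W) patch.

        Each band is a token whose features are its flattened pixels, so
        embed_dim must equal H * W (e.g. 25 for a 5 x 5 patch).
        """
        def __init__(self, embed_dim):
            super().__init__()
            self.q = nn.Linear(embed_dim, embed_dim)
            self.k = nn.Linear(embed_dim, embed_dim)
            self.v = nn.Linear(embed_dim, embed_dim)

        def forward(self, x):
            b, c, h, w = x.shape
            t = x.reshape(b, c, h * w)                # (B, C, HW): band tokens
            q, k, v = self.q(t), self.k(t), self.v(t)
            attn = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5,
                                 dim=-1)              # (B, C, C) band correlations
            return (t + attn @ v).reshape(b, c, h, w) # residual, back to patch shape
    ```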

    Learning representations in the hyperspectral domain in aerial imagery

    We establish two new datasets, with baselines and network architectures, for the task of hyperspectral image analysis. The first dataset, AeroRIT, is a moving-camera, static-scene capture from a flight and contains per-pixel labeling across five categories for the task of semantic segmentation. The second dataset, RooftopHSI, supports designing and interpreting learnt features for hyperspectral object detection on scenes captured from a university rooftop; it accounts for static-camera, moving-scene hyperspectral imagery. We further broaden the scope of our understanding of neural networks with the development of two novel algorithms, S4AL and S4AL+. We develop these frameworks on natural (color) imagery by combining semi-supervised learning and active learning, and show promising results for learning with a limited amount of labeled data, which can be extended to hyperspectral imagery. In this dissertation, we curated two new datasets for hyperspectral image analysis, significantly larger than existing datasets and broader in terms of categories for classification. We then adapt existing neural network architectures to operate on the increased channel information in a way that leverages all of the hyperspectral information. We also develop novel active learning algorithms on natural (color) imagery and discuss the prospects for extending their functionality to hyperspectral imagery.
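    S4AL's actual selection strategy is not detailed in this abstract; as a generic sketch of the active-learning half of such a framework, a least-confidence query step could look like this (the `model` and tensor shapes are hypothetical):

    ```python
    import torch

    def least_confidence_query(model, unlabeled, k=100):
        """Pick the k unlabeled samples the classifier is least sure about.

        unlabeled: (N, ...) batch the model maps to (N, num_classes) logits.
        Returns the indices of the k samples with the lowest top-1
        probability; these are sent for human labeling in the next round.
        """
        model.eval()
        with torch.no_grad():
            probs = model(unlabeled).softmax(dim=1)  # (N, num_classes)
        confidence = probs.max(dim=1).values         # top-1 probability per sample
        return confidence.argsort()[:k]              # least confident first
    ```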

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of remote sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops along two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields; on the other hand, it targets machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than remote sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

    Unlocking the capabilities of explainable fewshot learning in remote sensing

    Recent advancements have significantly improved the efficiency and effectiveness of deep learning methods for image-based remote sensing tasks. However, the requirement for large amounts of labeled data can limit the applicability of deep neural networks to existing remote sensing datasets. To overcome this challenge, few-shot learning has emerged as a valuable approach for enabling learning with limited data. While previous research has evaluated the effectiveness of few-shot learning methods on satellite-based datasets, little attention has been paid to exploring the applications of these methods to datasets obtained from UAVs, which are increasingly used in remote sensing studies. In this review, we provide an up-to-date overview of both existing and newly proposed few-shot classification techniques, along with the appropriate datasets used for both satellite-based and UAV-based data. Our systematic approach demonstrates that few-shot learning can effectively adapt to the broader and more diverse perspectives that UAV-based platforms can provide. We also evaluate some state-of-the-art (SOTA) few-shot approaches on a UAV disaster scene classification dataset, yielding promising results. We emphasize the importance of integrating explainable AI (XAI) techniques, such as attention maps and prototype analysis, to increase the transparency, accountability, and trustworthiness of few-shot models for remote sensing. Key challenges and future research directions are identified, including tailored few-shot methods for UAVs, extension to unseen tasks such as segmentation, and the development of optimized XAI techniques suited to few-shot remote sensing problems. This review aims to provide researchers and practitioners with an improved understanding of few-shot learning's capabilities and limitations in remote sensing, while highlighting open problems to guide future progress in efficient, reliable, and interpretable few-shot methods.
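    As background for the prototype analysis mentioned above, the classification step of a prototypical network, one of the standard few-shot baselines, can be sketched as follows (a generic sketch, not a specific method from the review):

    ```python
    import torch

    def prototype_logits(support, support_labels, query, n_classes):
        """Prototypical-network scoring for an N-way few-shot episode.

        support: (S, D) embeddings of the labeled support set.
        query:   (Q, D) embeddings to classify.
        Prototypes are per-class means of the support embeddings; queries
        are scored by negative Euclidean distance, so the argmax over the
        returned logits picks the nearest prototype.
        """
        protos = torch.stack([support[support_labels == c].mean(0)
                              for c in range(n_classes)])    # (N, D)
        return -torch.cdist(query, protos)                   # (Q, N)
    ```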

    Rotation-Invariant Deep Embedding for Remote Sensing Images

    Endowing convolutional neural networks (CNNs) with rotation-invariant capability is important for characterizing the semantic content of remote sensing (RS) images, since such images do not have typical orientations. Most existing deep methods for learning rotation-invariant CNN models are based on the design of proper convolutional or pooling layers, which aim at predicting the correct category labels of rotated RS images equivalently. However, few works have focused on learning rotation-invariant embeddings in the framework of deep metric learning for modeling the fine-grained semantic relationships among RS images in the embedding space. To fill this gap, we first propose a rule that the deep embeddings of rotated images should be closer to each other than to those of any other images (including images belonging to the same class). Then, we propose to maximize the joint probability of leave-one-out image classification and rotational image identification. Under an independence assumption, this optimization leads to the minimization of a novel loss function composed of two terms: 1) a class-discrimination term and 2) a rotation-invariant term. Furthermore, we introduce a penalty parameter that balances these two terms, arriving at the final loss of the proposed Rotation-invariant Deep embedding method for RS images, termed RiDe. Extensive experiments conducted on two benchmark RS datasets validate the effectiveness of the proposed approach and demonstrate its superior performance compared to other state-of-the-art methods. The code for this article will be publicly available at https://github.com/jiankang1991/TGRS_RiDe
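    The exact RiDe formulation is given in the paper; the sketch below only illustrates the two-term structure described above, with a leave-one-out class-discrimination term and a penalized rotation-invariance term (the temperature, the penalty `lam`, and the specific form of each term are assumptions):

    ```python
    import torch

    def ride_style_loss(emb, emb_rot, labels, temperature=0.1, lam=1.0):
        """Two-term loss in the spirit of RiDe (illustrative, not exact).

        emb, emb_rot: (N, D) L2-normalized embeddings of the images and
        of their rotated copies; labels: (N,) integer class labels.
        """
        n = emb.size(0)
        eye = torch.eye(n, dtype=torch.bool, device=emb.device)
        sim = (emb @ emb.t() / temperature).masked_fill(eye, float('-inf'))
        log_p = sim.log_softmax(dim=1)                # leave-one-out softmax

        same = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
        # class discrimination: pull same-class embeddings together
        pos = log_p.masked_fill(~same, 0.0).sum(1) / same.sum(1).clamp(min=1)
        cls_term = -pos.mean()

        # rotation invariance: a rotated image keeps its own embedding
        rot_term = (emb - emb_rot).pow(2).sum(1).mean()
        return cls_term + lam * rot_term
    ```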

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted using emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. Captured data help researchers develop solutions to sense and detect various characteristics such as forest fires, flooding, changes in urban areas, crop diseases, soil moisture, etc. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches, leading to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.