851 research outputs found

    Multiscale spatial-spectral convolutional network with image-based framework for hyperspectral imagery classification.

    Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNN) have gained attention in recent years due to their detailed representation of features. However, most CNN-based HSI classification methods use image patches as the classifier input, which limits the usable range of spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates fused features from multiple receptive fields with multiscale spatial features at different levels. The fused features are extracted using a lightweight block called the multiple receptive field feature block (MRFF), which contains several types of dilated convolution. By fusing multiple receptive field features and multiscale spatial features, HyMSCN achieves a comprehensive feature representation for classification. Experimental results on three real hyperspectral images demonstrate the efficiency of the proposed framework, and the proposed method achieves superior performance for HSI classification.
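
    As a rough illustration of the multiple-receptive-field idea described above, the sketch below builds a small MRFF-style block from parallel dilated 3x3 convolutions fused by a 1x1 convolution. This is not the authors' HyMSCN code; the channel sizes, dilation rates, and the 103-band input are illustrative assumptions.

        # Minimal MRFF-style block sketch (PyTorch); not the published HyMSCN implementation.
        import torch
        import torch.nn as nn

        class MRFFBlock(nn.Module):
            def __init__(self, in_channels, out_channels, dilations=(1, 2, 3)):
                super().__init__()
                # One 3x3 branch per dilation rate; padding = dilation keeps the spatial size.
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                  padding=d, dilation=d, bias=False),
                        nn.BatchNorm2d(out_channels),
                        nn.ReLU(inplace=True),
                    )
                    for d in dilations
                ])
                # A 1x1 convolution fuses the concatenated multi-receptive-field features.
                self.fuse = nn.Conv2d(out_channels * len(dilations), out_channels, kernel_size=1)

            def forward(self, x):
                feats = [branch(x) for branch in self.branches]
                return self.fuse(torch.cat(feats, dim=1))

        # Image-based usage: the whole hyperspectral image is processed, not per-pixel patches.
        x = torch.randn(1, 103, 64, 64)     # assumed 103-band scene
        print(MRFFBlock(103, 32)(x).shape)  # torch.Size([1, 32, 64, 64])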

    State-of-the-art and gaps for deep learning on limited training data in remote sensing

    Deep learning usually requires big data, with respect to both volume and variety. However, most remote sensing applications have only limited training data, of which a small subset is labeled. Herein, we review three state-of-the-art approaches in deep learning to combat this challenge. The first topic is transfer learning, in which some aspects of one domain, e.g., features, are transferred to another domain. The next is unsupervised learning, e.g., autoencoders, which operate on unlabeled data. The last is generative adversarial networks, which can generate realistic-looking data that can fool both a deep learning network and a human. The aim of this article is to raise awareness of this dilemma, to direct the reader to existing work, and to highlight current gaps that need solving.
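
    Of the three approaches surveyed, transfer learning is the most direct to sketch. The snippet below is a minimal example, not tied to any specific method in the article: it freezes an ImageNet-pretrained torchvision ResNet-18 and retrains only a new classification head on a small labeled remote sensing set; the class count is a placeholder.

        # Transfer-learning sketch: reuse pretrained features, train only the new head.
        import torch.nn as nn
        from torchvision import models

        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in backbone.parameters():
            param.requires_grad = False                           # freeze transferred features
        backbone.fc = nn.Linear(backbone.fc.in_features, 10)      # placeholder: 10 scene classes
        # Only backbone.fc has trainable parameters, so a limited labeled set can suffice.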

    Fusion of PCA and segmented-PCA domain multiscale 2-D-SSA for effective spectral-spatial feature extraction and data classification in hyperspectral imagery.

    As hyperspectral imagery (HSI) contains rich spectral and spatial information, a novel principal component analysis (PCA) and segmented-PCA (SPCA)-based multiscale 2-D singular spectrum analysis (2-D-SSA) fusion method is proposed for joint spectral–spatial HSI feature extraction and classification. Considering the overall spectra and adjacent-band correlations of objects, the PCA and SPCA methods are first applied for spectral dimensionality reduction. Multiscale 2-D-SSA is then applied to the SPCA dimension-reduced images to extract abundant spatial features at different scales, where PCA is applied again for dimensionality reduction. The resulting multiscale spatial features are fused with the global spectral features derived from PCA to form multiscale spectral–spatial features (MSF-PCs). The performance of the extracted MSF-PCs is evaluated using a support vector machine (SVM) classifier. Experiments on four benchmark HSI data sets show that the proposed method outperforms other state-of-the-art feature extraction methods, including several deep learning approaches, when only a small number of training samples are available.
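
    The spectral half of such a pipeline is easy to sketch: PCA for spectral dimensionality reduction followed by an SVM on per-pixel spectra. The code below is a simplified stand-in that omits the segmented-PCA and multiscale 2-D-SSA spatial steps of the paper; the data shapes, component count, and SVM settings are assumptions, and the data are synthetic.

        # PCA + SVM sketch on per-pixel spectra (scikit-learn); spatial 2-D-SSA step omitted.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 200))    # 5000 pixels x 200 spectral bands (synthetic)
        y = rng.integers(0, 9, size=5000)   # 9 land-cover classes (synthetic labels)

        clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf", C=10.0))
        clf.fit(X[:500], y[:500])           # small training set, as in the limited-sample setting
        print(clf.score(X[500:], y[500:]))  # overall accuracy on the remaining pixels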

    Impact of Feature Representation on Remote Sensing Image Retrieval

    Remote sensing images are acquired using specialized platforms and sensors and are classified as aerial, multispectral, and hyperspectral images. Multispectral and hyperspectral images are represented using large spectral vectors compared with ordinary Red, Green, Blue (RGB) images. Hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval mainly consists of two steps: feature representation, followed by finding images similar to a query image. Feature representation plays an important part in the performance of the retrieval process. This work focuses on the impact of the feature representation of remote sensing images on the performance of remote sensing image retrieval. The study shows that more discriminative features of remote sensing images are needed to improve the performance of the retrieval process.
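
    Once each image is represented by a fixed-length feature vector, the retrieval step itself reduces to a similarity ranking. The sketch below shows that step under the assumption of 128-dimensional features and cosine similarity; how those features are produced is exactly the representation choice the study examines.

        # Retrieval sketch: rank archive images by cosine similarity to a query feature.
        import numpy as np

        def retrieve(query_feat, archive_feats, top_k=5):
            q = query_feat / np.linalg.norm(query_feat)
            a = archive_feats / np.linalg.norm(archive_feats, axis=1, keepdims=True)
            scores = a @ q                       # cosine similarity after normalization
            order = np.argsort(-scores)[:top_k]
            return order, scores[order]

        archive = np.random.rand(1000, 128)      # assumed: 1000 archive images, 128-D features
        query = np.random.rand(128)
        indices, sims = retrieve(query, archive)
        print(indices, sims)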

    CBANet: an end-to-end cross band 2-D attention network for hyperspectral change detection in remote sensing.

    As a fundamental task in remote sensing observation of the earth, change detection using hyperspectral images (HSI) offers high accuracy due to the combination of rich spectral and spatial information, especially for identifying land-cover variations in bi-temporal HSIs. Relying on the image difference, existing HSI change detection methods fail to preserve the spectral characteristics and suffer from high data dimensionality, making it extremely challenging to deal with changed areas of various sizes. To tackle these challenges, we propose a cross-band 2-D self-attention network (CBANet) for end-to-end HSI change detection. By embedding a cross-band feature extraction module into a 2-D spatial-spectral self-attention module, CBANet is highly capable of extracting the spectral difference of matching pixels by considering the correlation between adjacent pixels. CBANet offers three key advantages: 1) fewer parameters and high efficiency; 2) high efficacy in extracting representative spectral information from bi-temporal images; and 3) high stability and accuracy in identifying both sparse sporadic changed pixels and large changed areas whilst preserving the edges. Comprehensive experiments on three publicly available datasets fully validate the efficacy and efficiency of the proposed methodology.
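
    The sketch below is not the CBANet implementation, but it illustrates the underlying ingredient: self-attention applied across the spectral bands of the bi-temporal difference image, so that each band can attend to correlated bands before a change score is produced. The embedding size, head count, band count, and the scene-level (rather than pixel-wise) output are all simplifying assumptions.

        # Spectral self-attention sketch for bi-temporal HSIs (PyTorch); not the CBANet code.
        import torch
        import torch.nn as nn

        class SpectralSelfAttention(nn.Module):
            def __init__(self, num_pixels, dim=64):
                super().__init__()
                self.embed = nn.Linear(num_pixels, dim)   # one token per spectral band
                self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
                self.head = nn.Linear(dim, 1)             # toy scene-level change logit

            def forward(self, t1, t2):
                # t1, t2: (batch, bands, height, width); attend over their spectral difference.
                diff = (t1 - t2).flatten(2)               # (batch, bands, H*W)
                tokens = self.embed(diff)
                attended, _ = self.attn(tokens, tokens, tokens)
                return self.head(attended.mean(dim=1))    # (batch, 1)

        t1 = torch.randn(2, 154, 16, 16)                  # assumed 154-band bi-temporal pair
        t2 = torch.randn(2, 154, 16, 16)
        print(SpectralSelfAttention(num_pixels=16 * 16)(t1, t2).shape)  # torch.Size([2, 1])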

    Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification

    The representation power of convolutional neural network (CNN) models for hyperspectral image (HSI) analysis is in practice limited by the available amount of labeled samples, which is often insufficient to sustain deep networks with many parameters. We propose a novel approach to boost the network representation power with a two-stream 2-D CNN architecture. The proposed method simultaneously extracts spectral features and local and global spatial features with two 2-D CNN streams, and makes use of channel correlations to identify the most informative features. Moreover, we propose a layer-specific regularization and a smooth normalization fusion scheme to adaptively learn the fusion weights for the spectral-spatial features from the two parallel streams. An important asset of our model is the simultaneous training of the feature extraction, fusion, and classification processes with the same cost function. Experimental results on several hyperspectral data sets demonstrate the efficacy of the proposed method compared with state-of-the-art methods in the field.
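
    The fusion idea can be sketched compactly: two parallel 2-D CNN streams, one biased toward spectral features (1x1 convolutions) and one toward spatial context (3x3 convolutions), combined with learnable weights normalized by a softmax so they stay smooth and sum to one. This is an assumption-laden toy version, not the authors' architecture, regularization, or loss.

        # Two-stream spectral/spatial fusion sketch (PyTorch) with softmax-normalized weights.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TwoStreamFusion(nn.Module):
            def __init__(self, bands, channels=32, classes=9):
                super().__init__()
                self.spectral = nn.Sequential(nn.Conv2d(bands, channels, 1), nn.ReLU())
                self.spatial = nn.Sequential(nn.Conv2d(bands, channels, 3, padding=1), nn.ReLU())
                self.fusion_logits = nn.Parameter(torch.zeros(2))  # learned fusion weights
                self.head = nn.Conv2d(channels, classes, 1)        # per-pixel classifier

            def forward(self, x):
                w = F.softmax(self.fusion_logits, dim=0)           # smooth, sums to one
                fused = w[0] * self.spectral(x) + w[1] * self.spatial(x)
                return self.head(fused)                            # (batch, classes, H, W)

        x = torch.randn(1, 103, 32, 32)                            # assumed 103-band input
        print(TwoStreamFusion(bands=103)(x).shape)                 # torch.Size([1, 9, 32, 32])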

    Deep learning-based change detection in remote sensing images: a review

    Images gathered from different satellites are widely available these days due to the rapid development of remote sensing (RS) technology. These images significantly enrich the data sources for change detection (CD). CD is a technique for recognizing the dissimilarities between images acquired at distinct times and is used for numerous applications, such as urban area development, disaster management, and land-cover object identification. In recent years, deep learning (DL) techniques have been used extensively in change detection, where they have achieved great success owing to their practical applicability. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques, namely supervised, unsupervised, and semi-supervised methods, for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. Finally, some significant challenges are discussed to understand the context of improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.

    Deep learning for remote sensing image classification: A survey

    Remote sensing (RS) image classification plays an important role in earth observation technology using RS data and has been widely exploited in both military and civil fields. However, due to the characteristics of RS data, such as high dimensionality and the relatively small amount of labeled samples available, RS image classification faces great scientific and practical challenges. In recent years, as new deep learning (DL) techniques have emerged, approaches to RS image classification with DL have achieved significant breakthroughs, offering novel opportunities for the research and development of RS image classification. In this paper, a brief overview of typical DL models is presented first. This is followed by a systematic review of pixel-wise and scene-wise RS image classification approaches that are based on the use of DL. A comparative analysis of the performance of typical DL-based RS methods is also provided. Finally, the challenges and potential directions for further research are discussed.