
    A Review on Multiple-Feature-Based Adaptive Sparse Representation (MFASR) and Other Classification Types

    Get PDF
    Multiple-feature-based adaptive sparse representation (MFASR) is a technique for classifying hyperspectral images (HSIs) that proceeds in four main steps. First, four different features are extracted from the original hyperspectral image to capture its spectral and spatial information. Second, a shape-adaptive (SA) spatial region is obtained for each pixel. Third, a sparse representation algorithm is applied to each shape-adaptive region to recover a matrix of sparse coefficients over the multiple features. Finally, the class label of each test pixel is determined from the obtained coefficients. MFASR exploits the strong correlations among the different extracted features, combining effective features with an effective adaptive sparse representation, and it achieves markedly better classification results than competing classifiers in both quantitative and qualitative terms.
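
    The core of the third and fourth steps above is a joint sparse recovery over the pixels of a shape-adaptive region followed by a minimum-residual class decision. Below is a minimal, single-feature sketch of that idea using simultaneous orthogonal matching pursuit (SOMP); the dictionary, labels, and region data are synthetic placeholders, and MFASR's multi-feature adaptive weighting is omitted.

    ```python
    import numpy as np

    def somp(D, Y, sparsity):
        """Simultaneous OMP: one shared support for all pixels (columns of Y)."""
        residual = Y.copy()
        support = []
        for _ in range(sparsity):
            # Pick the atom most correlated with the current joint residual.
            scores = np.linalg.norm(D.T @ residual, axis=1)
            scores[support] = -np.inf
            support.append(int(np.argmax(scores)))
            Ds = D[:, support]
            coeffs, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
            residual = Y - Ds @ coeffs
        X = np.zeros((D.shape[1], Y.shape[1]))
        X[support, :] = coeffs
        return X

    def classify_region(D, labels, Y, sparsity=5):
        """Assign the class whose sub-dictionary reconstructs the region best."""
        X = somp(D, Y, sparsity)
        errors = {c: np.linalg.norm(Y - D[:, labels == c] @ X[labels == c, :])
                  for c in np.unique(labels)}
        return min(errors, key=errors.get)

    # Toy example: 200 bands, 60 training atoms from 3 classes,
    # and a shape-adaptive region of 25 pixels built from class 1.
    rng = np.random.default_rng(0)
    D = rng.normal(size=(200, 60))
    D /= np.linalg.norm(D, axis=0)
    labels = np.repeat([0, 1, 2], 20)
    Y = D[:, labels == 1] @ rng.random((20, 25))
    print(classify_region(D, labels, Y))  # expected: 1
    ```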

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    Full text link
    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so simple concatenation does not efficiently exploit the complementary properties among the features, which should boost feature discriminability. Furthermore, the transformed results of the concatenated vector are difficult to interpret. Consequently, finding a physically meaningful, consensus low-dimensional representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space in which their complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is both effective and efficient.
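
    As a rough illustration of selecting only the most significant original features while mapping the concatenated spectral-spatial vector into a low-dimensional common space, the sketch below uses a row-sparse multi-output linear model (MultiTaskLasso) as a simplified stand-in; the surrogate targets and all sizes are assumptions, not the authors' optimization.

    ```python
    import numpy as np
    from sklearn.linear_model import MultiTaskLasso
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_pixels, n_spectral, n_spatial, n_latent = 500, 100, 40, 10

    spectral = rng.normal(size=(n_pixels, n_spectral))  # e.g. raw band values
    spatial = rng.normal(size=(n_pixels, n_spatial))    # e.g. texture / morphology
    X = StandardScaler().fit_transform(np.hstack([spectral, spatial]))

    # Surrogate low-dimensional targets driven by a subset of the features;
    # in the paper the common subspace is learned jointly, not fixed in advance.
    informative = rng.choice(X.shape[1], size=20, replace=False)
    T = X[:, informative] @ rng.normal(size=(20, n_latent))

    # Row-sparse map: a feature is either used by all latent dimensions or dropped.
    model = MultiTaskLasso(alpha=1.0).fit(X, T)
    selected = np.flatnonzero(np.linalg.norm(model.coef_, axis=0) > 1e-8)
    print(f"{selected.size} of {X.shape[1]} original features retained")

    Z = X @ model.coef_.T  # low-dimensional spectral-spatial representation
    ```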

    Remote Sensing Image Scene Classification: Benchmark and State of the Art

    Full text link
    Remote sensing image scene classification plays an important role in a wide range of applications and has therefore been receiving remarkable attention. Over the past years, significant efforts have been made to develop various datasets and to propose a variety of approaches for scene classification from remote sensing images. However, a systematic review of the literature concerning datasets and methods for scene classification is still lacking. In addition, almost all existing datasets have a number of limitations, including the small number of scene classes and images, the lack of image variation and diversity, and the saturation of accuracy. These limitations severely hinder the development of new approaches, especially deep-learning-based methods. This paper first provides a comprehensive review of the recent progress. Then, we propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class. The proposed NWPU-RESISC45 (i) is large-scale in the number of scene classes and total images, (ii) exhibits large variations in translation, spatial resolution, viewpoint, object pose, illumination, background, and occlusion, and (iii) has high within-class diversity and between-class similarity. The creation of this dataset will enable the community to develop and evaluate various data-driven algorithms. Finally, several representative methods are evaluated using the proposed dataset and the results are reported as a useful baseline for future research. Comment: This manuscript is the accepted version for Proceedings of the IEEE.
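
    A minimal baseline sketch for a benchmark of this kind is shown below: fine-tuning a pretrained ResNet-18 on the 45 scene classes. The directory layout (one folder per class), the train/validation split, and all hyperparameters are assumptions for illustration, not settings from the paper.

    ```python
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, random_split
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    # "NWPU-RESISC45" is assumed to be a local folder with one subfolder per class.
    data = datasets.ImageFolder("NWPU-RESISC45", transform=tfm)
    n_train = int(0.8 * len(data))
    train_set, val_set = random_split(data, [n_train, len(data) - n_train])
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 45)  # 45 scene classes
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in train_loader:  # a single epoch shown for brevity
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    ```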

    Hyperspectral Image Classification -- Traditional to Deep Models: A Survey for Future Prospects

    Get PDF
    Hyperspectral Imaging (HSI) has been extensively used in many real-world applications because of the detailed spectral information contained in each pixel. However, the complex characteristics of HSI data, in particular the nonlinear relationship between the captured spectral information and the corresponding object, make accurate classification challenging for traditional methods. In the last few years, Deep Learning (DL) has proven to be a powerful feature extractor that effectively addresses the nonlinear problems arising in a number of computer vision tasks, which has prompted its deployment for HSI classification (HSIC) with good results. This survey provides a systematic overview of DL for HSIC and compares state-of-the-art strategies. We first summarize the main challenges traditional machine learning faces in HSIC and then explain how DL addresses them. The survey breaks the state-of-the-art DL frameworks down into spectral-feature, spatial-feature, and joint spectral-spatial-feature approaches, and systematically analyzes their achievements and future research directions for HSIC. Moreover, since DL requires a large number of labeled training examples and acquiring them for HSIC is costly in both time and money, the survey also discusses strategies to improve the generalization performance of DL and provides guidelines for future work.
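
    As a concrete illustration of the joint spectral-spatial family of DL classifiers the survey covers, the sketch below defines a small 3-D CNN over hyperspectral patches; the band count, patch size, and layer widths are illustrative assumptions rather than any specific surveyed architecture.

    ```python
    import torch
    import torch.nn as nn

    class SpectralSpatialCNN(nn.Module):
        """A small 3-D CNN: convolutions run jointly over bands and space."""
        def __init__(self, n_classes=9):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)),
                nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1)),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x):  # x: (batch, 1, bands, height, width)
            return self.classifier(self.features(x).flatten(1))

    model = SpectralSpatialCNN()
    patches = torch.randn(4, 1, 103, 7, 7)  # four 7x7 patches with 103 bands
    print(model(patches).shape)             # torch.Size([4, 9])
    ```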

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, such as computer vision (CV), speech recognition, and natural language processing. Although remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements such as DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research to date. We also review recent developments in the DL field that can be applied to DL for RS. Namely, we focus on theories, tools, and challenges for the RS community, and in particular on unsolved challenges and opportunities relating to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Multiscale 2-D singular spectrum analysis and principal component analysis for spatial–spectral noise-robust feature extraction and classification of hyperspectral images.

    Get PDF
    In hyperspectral images (HSI), most feature extraction and data classification methods rely on a corrected dataset in which the noisy and water-absorption bands have been removed. This results not only in an extra processing burden but also in information loss from the removed bands. To tackle these issues, we propose a novel spatial-spectral feature extraction framework, multiscale 2-D singular spectrum analysis (2-D-SSA) with principal component analysis (PCA), abbreviated 2-D-MSSP, for noise-robust feature extraction and data classification of HSI. First, multiscale 2-D-SSA is applied to exploit the multiscale spatial features in each spectral band of the HSI by extracting the varying trends within windows of defined sizes. Taking the extracted trend signals at each scale level as input features, PCA is then applied in the spectral domain for dimensionality reduction and spatial-spectral feature extraction. The derived spatial-spectral features at each scale are classified separately and then fused at the decision level. Because 2-D-MSSP extracts features and simultaneously removes noise in both the spatial and spectral domains, it is noise-robust for HSI classification, even on uncorrected datasets. Experiments on three publicly available datasets fully validate the efficacy and robustness of the proposed approach when benchmarked against 10 state-of-the-art classifiers, including six spatial-spectral methods and four deep learning classifiers. In addition, both quantitative and qualitative assessments validate the efficacy of our approach for noise-robust classification of HSI even with limited training samples, especially when classifying uncorrected data without filtering noisy bands.
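
    The sketch below illustrates the overall pipeline in simplified form: a patch-based SVD reconstruction stands in for 2-D SSA at each window scale, PCA is then applied across the spectral dimension, and per-scale classifier probabilities are fused at the decision level. Window sizes, component counts, and the SVM classifier are assumptions, not the paper's exact settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def ssa2d_like(band, window, n_comp=1):
        """Crude per-band 2-D SSA surrogate: SVD over sliding patches,
        keep the leading components, average the reconstructed patches."""
        H, W = band.shape
        patches, coords = [], []
        for i in range(H - window + 1):
            for j in range(W - window + 1):
                patches.append(band[i:i + window, j:j + window].ravel())
                coords.append((i, j))
        P = np.asarray(patches)
        U, s, Vt = np.linalg.svd(P, full_matrices=False)
        P_rec = U[:, :n_comp] * s[:n_comp] @ Vt[:n_comp]
        out, counts = np.zeros((H, W)), np.zeros((H, W))
        for p, (i, j) in zip(P_rec, coords):
            out[i:i + window, j:j + window] += p.reshape(window, window)
            counts[i:i + window, j:j + window] += 1
        return out / counts

    def mssp_features(cube, windows=(3, 5), n_pcs=10):
        """Per scale: smooth every band, then PCA across the spectral dimension."""
        H, W, B = cube.shape
        per_scale = []
        for w in windows:
            smoothed = np.stack([ssa2d_like(cube[:, :, b], w) for b in range(B)], axis=-1)
            per_scale.append(PCA(n_components=n_pcs).fit_transform(smoothed.reshape(-1, B)))
        return per_scale  # list of (H*W, n_pcs) arrays, one per scale

    # Toy cube with random labels; decision fusion averages class probabilities.
    rng = np.random.default_rng(0)
    cube = rng.random((20, 20, 30))
    labels = rng.integers(0, 3, 20 * 20)
    scales = mssp_features(cube)
    probas = [SVC(probability=True).fit(f, labels).predict_proba(f) for f in scales]
    fused = np.argmax(np.mean(probas, axis=0), axis=1)
    ```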

    Hyperspectral Unmixing Based on Dual-Depth Sparse Probabilistic Latent Semantic Analysis

    Get PDF
    This paper presents a novel approach for spectral unmixing of remotely sensed hyperspectral data. It exploits probabilistic latent topics in order to take advantage of the semantics pervading the latent topic space when identifying spectral signatures and estimating fractional abundances from hyperspectral images. Despite the proven potential of topic models to uncover image semantics, they have so far been used in hyperspectral unmixing merely as a straightforward data decomposition process, which limits their ability to provide semantic representations of the spectral data. The proposed model, called dual-depth sparse probabilistic latent semantic analysis (DEpLSA), makes use of two different levels of topics to exploit the semantic patterns extracted from the initial spectral space and thereby relieve the ill-posed nature of the unmixing problem. In other words, DEpLSA defines a first level of deep topics to capture the semantic representations of the spectra, and a second level of restricted topics to estimate endmembers and abundances over this semantic space. An experimental comparison is conducted against two standard topic models and seven state-of-the-art unmixing methods from the literature. Our experiments, conducted on four different hyperspectral images, reveal that the proposed approach provides competitive advantages over the available unmixing approaches.
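
    The topic-model view that DEpLSA builds on can be illustrated with a minimal single-level pLSA unmixing sketch, shown below; DEpLSA's second, restricted topic level and its sparsity constraints are omitted, and the endmember count and iteration budget are assumptions.

    ```python
    import numpy as np

    def plsa_unmix(Y, n_topics=3, n_iter=200, eps=1e-12):
        """Y: (n_bands, n_pixels) nonnegative reflectances.
        Returns endmember-like spectra P(band|topic) and abundances P(topic|pixel)."""
        rng = np.random.default_rng(0)
        n_bands, n_pixels = Y.shape
        spectra = rng.random((n_bands, n_topics)); spectra /= spectra.sum(0)
        abund = rng.random((n_topics, n_pixels)); abund /= abund.sum(0)
        for _ in range(n_iter):
            # Multiplicative EM-style updates (KL-NMF with column normalization).
            ratio = Y / (spectra @ abund + eps)
            spectra *= ratio @ abund.T
            spectra /= spectra.sum(0, keepdims=True)
            ratio = Y / (spectra @ abund + eps)
            abund *= spectra.T @ ratio
            abund /= abund.sum(0, keepdims=True)
        return spectra, abund

    # Toy example: mixtures of three synthetic endmembers over 50 bands.
    rng = np.random.default_rng(1)
    E = rng.random((50, 3))
    A = rng.dirichlet(np.ones(3), size=400).T  # abundances sum to one per pixel
    spectra, abund = plsa_unmix(E @ A)
    print(spectra.shape, abund.shape)          # (50, 3) (3, 400)
    ```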