35 research outputs found

    Graph Embedding via High Dimensional Model Representation for Hyperspectral Images

    Learning the manifold structure of remote sensing images is of paramount relevance for modeling and understanding processes, as well as for encapsulating the high dimensionality in a reduced set of informative features for subsequent classification, regression, or unmixing. Manifold learning methods have shown excellent performance in hyperspectral image (HSI) analysis but, unless specifically designed, they cannot provide an explicit embedding map that is readily applicable to out-of-sample data. A common workaround is to assume that the transformation between the high-dimensional input space and the (typically low-dimensional) latent space is linear. This is a particularly strong assumption, especially for hyperspectral images, given the well-known nonlinear nature of the data. To address this problem, a manifold learning method based on High Dimensional Model Representation (HDMR) is proposed, which provides an explicit nonlinear embedding function for projecting out-of-sample data into the latent space. The proposed method is compared with other manifold learning methods and their linear counterparts, and achieves promising classification accuracy on a representative set of hyperspectral images. Comment: This is an accepted version of work to be published in the IEEE Transactions on Geoscience and Remote Sensing. 11 pages.
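    A minimal sketch of the out-of-sample idea (not the authors' implementation; all names below are illustrative): learn a non-parametric embedding on the training spectra, then fit an explicit first-order HDMR-style surrogate, i.e., a sum of univariate functions of the individual bands, so that new pixels can be projected without re-running the manifold learner.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import Ridge
from sklearn.preprocessing import SplineTransformer

# X_train: (n_pixels, n_bands) training spectra; X_new: out-of-sample spectra.
def fit_hdmr_embedding(X_train, n_components=3, n_knots=8):
    # 1) Non-parametric embedding of the training set (has no out-of-sample map by itself).
    Y_train = SpectralEmbedding(n_components=n_components).fit_transform(X_train)

    # 2) First-order HDMR-style surrogate: each latent coordinate is modeled as a sum
    #    of univariate spline functions of the individual bands (an additive model).
    splines = SplineTransformer(n_knots=n_knots, degree=3).fit(X_train)
    regressor = Ridge(alpha=1.0).fit(splines.transform(X_train), Y_train)

    def embed(X):
        # Explicit nonlinear map, applicable to out-of-sample spectra.
        return regressor.predict(splines.transform(X))

    return embed

# Usage: embed = fit_hdmr_embedding(X_train); Z_new = embed(X_new)
```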

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so such concatenation cannot efficiently exploit the complementary properties among the features, which would otherwise help boost feature discriminability. Furthermore, it is difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus low-dimensional representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.
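    A hedged illustration of the general idea, not the paper's exact algorithm: concatenate the spectral and spatial feature blocks and fit a row-sparse (L2,1-regularized) multi-output regression against class indicators, so that a discriminative low-dimensional projection (extraction) and the pruning of uninformative original features (selection) are obtained in one step. The solver choice and all names are assumptions.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.preprocessing import StandardScaler

# spectral: (n_pixels, n_bands), spatial: (n_pixels, n_spatial_feats), y: (n_pixels,) labels.
def fit_joint_selection_extraction(spectral, spatial, y, alpha=0.01):
    X = np.hstack([spectral, spatial])                 # concatenated multi-domain features
    X = StandardScaler().fit_transform(X)              # put the domains on a comparable scale
    Y = (y.reshape(-1, 1) == np.unique(y)).astype(float)  # one-hot class indicators

    # Row-sparse (L2,1) multi-output regression: the coefficient matrix W acts as a
    # projection into a class-aware subspace, while all-zero rows of W drop the
    # corresponding original features (selection and extraction in a single step).
    model = MultiTaskLasso(alpha=alpha).fit(X, Y)
    W = model.coef_.T                                  # (n_features, n_classes)
    selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)
    Z = X @ W                                          # low-dimensional consensus representation
    return W, selected, Z
```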

    Airborne Object Detection Using Hyperspectral Imaging: Deep Learning Review

    © 2019, Springer Nature Switzerland AG. Hyperspectral images have become increasingly important in object detection applications, especially in remote sensing scenarios, and machine learning algorithms have emerged as key tools for hyperspectral image analysis. The high dimensionality of hyperspectral images and the availability of simulated spectral sample libraries make deep learning an appealing approach. This report reviews recent data processing and object detection methods in the area, including hand-crafted and automated feature extraction based on deep neural networks. Accuracy was compared according to existing reports as well as our own experiments (i.e., re-implementing and testing on new datasets). CNN models provided reliable performance of over 97% detection accuracy across a large set of HSI collections. A wide range of data was used, from a rural area (Indian Pines), an urban area (Pavia University), a wetland region (Botswana), and an industrial field (Kennedy Space Center) to a farm site (Salinas). Note that the Botswana set was not reviewed in recent works; selected high-accuracy methods were therefore newly compared on it in this work. A plain CNN model was also found to perform comparably to its more complex variants in target detection applications.
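    A minimal patch-based network of the "plain CNN" kind referred to above, sketched in PyTorch; the layer sizes, patch size, band count, and class count are placeholder assumptions rather than any of the reviewed architectures.

```python
import torch
import torch.nn as nn

class PlainHSICNN(nn.Module):
    """Small 2-D CNN over spatial patches, with the spectral bands as input channels."""
    def __init__(self, n_bands=200, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool the spatial patch down to 1x1
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                     # x: (batch, n_bands, patch, patch)
        return self.classifier(self.features(x).flatten(1))

# Usage with a 9x9 patch: logits = PlainHSICNN()(torch.randn(8, 200, 9, 9))
```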

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the cost in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications; however, their ability to handle complex practical problems remains limited, particularly for HS data, owing to the spectral variabilities introduced during HS imaging and the complexity and redundancy of high-dimensional HS signals. Compared with convex models, non-convex modeling, which can characterize more complex real scenes and provide model interpretability both technically and theoretically, has proven to be a feasible way to narrow the gap between challenging HS vision tasks and currently advanced intelligent data processing models.
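    As a hedged illustration of what non-convex modeling can mean in this setting (an example formulation, not one taken from the article): in sparse unmixing, the convex l1 surrogate of sparsity is often replaced by an lq quasi-norm with 0 < q < 1, which promotes sparser abundance maps at the price of a non-convex objective.

```latex
% Convex sparse unmixing of pixels Y with endmember library E and abundances A:
\min_{A \ge 0} \; \tfrac{1}{2}\lVert Y - EA \rVert_F^2 + \lambda \lVert A \rVert_{1,1}
% A representative non-convex variant replaces the l1 penalty with an lq quasi-norm (0 < q < 1):
\min_{A \ge 0} \; \tfrac{1}{2}\lVert Y - EA \rVert_F^2 + \lambda \sum_{i,j} \lvert A_{ij} \rvert^{q}
```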

    Superpixel nonlocal weighting joint sparse representation for hyperspectral image classification.

    Joint sparse representation classification (JSRC) is a representative spectral–spatial classifier for hyperspectral images (HSIs). However, JSRC is inappropriate for highly heterogeneous areas because the spatial information is extracted from a fixed-size neighborhood block, which is often unable to conform to the naturally irregular structure of land cover. To address this problem, a superpixel-based JSRC with nonlocal weighting, i.e., superpixel-based nonlocal weighted JSRC (SNLW-JSRC), is proposed in this paper. In SNLW-JSRC, the superpixel representation of an HSI is first constructed using an entropy rate segmentation method. This strategy forms homogeneous neighborhoods with naturally irregular structures and alleviates the inclusion of pixels from different classes during spatial information extraction. Afterwards, the superpixel-based nonlocal weighting (SNLW) scheme is built to weight each superpixel using its structural and spectral information. In this way, the weight of a specific neighboring pixel is determined by the local structural similarity between that pixel and the central test pixel. The obtained local weights are then used to generate the weighted mean data for each superpixel. Finally, JSRC is used to produce the superpixel-level classification, which speeds up the sparse representation and makes the spatial content more centralized and compact. To verify the proposed SNLW-JSRC method, we conducted experiments on four benchmark hyperspectral datasets, namely Indian Pines, Pavia University, Salinas, and DFC2013. The experimental results suggest that SNLW-JSRC achieves better classification results than four other SRC-based algorithms and the classical support vector machine algorithm, and that it continues to outperform the SRC-based algorithms even with a small number of training samples.
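    A compact sketch of the pipeline stages described above, with stand-ins where the paper uses specific components: SLIC replaces entropy rate segmentation, a simple spectral-similarity kernel replaces the SNLW structural weights, and per-superpixel OMP coding with class-wise residuals replaces the joint sparse representation. All function and variable names are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import orthogonal_mp

# hsi: (H, W, B) image cube; D: (B, n_atoms) training dictionary; atom_labels: (n_atoms,).
def snlw_jsrc_sketch(hsi, D, atom_labels, n_segments=500, n_nonzero=10):
    H, W, B = hsi.shape
    # 1) Superpixel segmentation (SLIC stands in for entropy rate segmentation here).
    segments = slic(hsi, n_segments=n_segments, channel_axis=-1)
    labels_out = np.zeros((H, W), dtype=int)
    classes = np.unique(atom_labels)

    for s in np.unique(segments):
        pixels = hsi[segments == s]                          # spectra inside one superpixel
        center = pixels.mean(axis=0)
        # 2) Nonlocal weighting: pixels spectrally closer to the superpixel centre weigh more
        #    (a simple stand-in for the structural-similarity weights of SNLW).
        w = np.exp(-np.linalg.norm(pixels - center, axis=1) ** 2 / (2 * pixels.var() + 1e-12))
        y = (w[:, None] * pixels).sum(axis=0) / w.sum()      # weighted mean spectrum

        # 3) Sparse coding against the training dictionary, then class-wise residuals.
        alpha = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
        residuals = [np.linalg.norm(y - D[:, atom_labels == c] @ alpha[atom_labels == c])
                     for c in classes]
        labels_out[segments == s] = classes[int(np.argmin(residuals))]
    return labels_out
```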