
    Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods

    Hyperspectral images show similar statistical properties to natural grayscale or color photographic images. However, the classification of hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning. These peculiarities lead to particular signal processing problems, mainly characterized by indetermination and complex manifolds. The framework of statistical learning has gained popularity in the last decade. New methods have been presented to account for the spatial homogeneity of images, to include user interaction via active learning, to take advantage of the manifold structure with semisupervised learning, to extract and encode invariances, or to adapt classifiers and image representations to unseen yet similar scenes. This tutorial reviews the main advances in hyperspectral remote sensing image classification through illustrative examples. Comment: IEEE Signal Processing Magazine, 201
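    The setting described above (very high-dimensional pixels, very few labels) is easy to illustrate in code. The sketch below is not taken from the tutorial; the cube shape, the ~1% labeling rate, and the RBF-kernel SVM are illustrative assumptions only.

```python
# Minimal sketch: pixel-wise SVM classification of a hyperspectral cube
# when only ~1% of the pixels carry labels (all values here are synthetic).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
rows, cols, bands, n_classes = 100, 100, 200, 5
cube = rng.random((rows, cols, bands))            # hypothetical HSI cube
labels = np.zeros((rows, cols), dtype=int)        # 0 = unlabeled
mask = rng.random((rows, cols)) < 0.01            # ~1% labeled pixels
labels[mask] = rng.integers(1, n_classes + 1, size=mask.sum())

X = cube.reshape(-1, bands)                       # one spectrum per pixel
y = labels.ravel()
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[y > 0], y[y > 0])                       # train on labeled pixels only
pred_map = clf.predict(X).reshape(rows, cols)     # classify the full scene
```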

    Nonlinear unmixing of hyperspectral images using a semiparametric model and spatial regularization

    Incorporating spatial information into hyperspectral unmixing procedures has been shown to have positive effects, due to the inherent spatial-spectral duality in hyperspectral scenes. Current research works that consider spatial information are mainly focused on the linear mixing model. In this paper, we investigate a variational approach to incorporating spatial correlation into a nonlinear unmixing procedure. A nonlinear algorithm operating in reproducing kernel Hilbert spaces, associated with an $\ell_1$ local variation norm as the spatial regularizer, is derived. Experimental results, with both synthetic and real data, illustrate the effectiveness of the proposed scheme. Comment: 5 pages, 1 figure, submitted to ICASSP 201
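    The paper's semiparametric kernel model is not reproduced here, but the spatial term it relies on is simple to write down. The sketch below is an assumption-laden illustration: it evaluates a plain linear-mixing data fit plus an $\ell_1$ local variation penalty on the abundance maps, standing in for the regularizer described above.

```python
# Sketch of an l1 local-variation spatial regularizer on abundance maps,
# combined with a simple linear-mixing data-fit term (not the paper's
# kernel-based nonlinear model).
import numpy as np

def l1_local_variation(A):
    """Sum of absolute differences between vertically and horizontally
    adjacent abundance vectors; A has shape (rows, cols, R)."""
    return np.abs(np.diff(A, axis=0)).sum() + np.abs(np.diff(A, axis=1)).sum()

def unmixing_objective(Y, A, E, lam):
    """Y: (rows, cols, bands) image, A: (rows, cols, R) abundances,
    E: (R, bands) endmember signatures, lam: regularization weight."""
    recon = A @ E                     # linear mixing applied pixel-wise
    data_fit = np.sum((Y - recon) ** 2)
    return data_fit + lam * l1_local_variation(A)
```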

    A novel band selection and spatial noise reduction method for hyperspectral image classification.

    As an essential preprocessing method, dimensionality reduction (DR) can reduce the data redundancy and improve the performance of hyperspectral image (HSI) classification. A novel unsupervised DR framework with feature interpretability, which integrates both a band selection (BS) and a spatial noise reduction method, is proposed to extract low-dimensional spectral-spatial features of HSI. We propose a new Neighboring band Grouping and Normalized Matching Filter (NGNMF) method for BS, which can reduce the data dimension whilst preserving the corresponding spectral information. An enhanced 2-D singular spectrum analysis (E2DSSA) method is also proposed to extract the spatial context and structural information from each selected band, aiming to decrease the intra-class variability and reduce the effect of noise in the spatial domain. The support vector machine (SVM) classifier is used to evaluate the effectiveness of the extracted low-dimensional spectral-spatial features. Experimental results on three publicly available HSI datasets fully demonstrate the efficacy of the proposed NGNMF-E2DSSA method, which surpasses a number of state-of-the-art DR methods.
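    To make the band-selection step concrete, the sketch below groups neighboring bands and keeps one representative band per group before handing the reduced cube to an SVM. The representative-band criterion used here (highest correlation with the group's mean spectrum) is a stand-in, not the paper's normalized matching filter, and the E2DSSA spatial step is omitted.

```python
# Band-grouping-and-selection sketch followed by SVM classification.
# The selection criterion is a placeholder, not the NGNMF filter itself.
import numpy as np
from sklearn.svm import SVC

def select_bands(cube, n_groups):
    """Split the spectral axis into contiguous groups of neighboring bands
    and keep, from each group, the band closest to the group's mean profile."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    kept = []
    for group in np.array_split(np.arange(bands), n_groups):
        G = X[:, group]
        mean_profile = G.mean(axis=1)
        corrs = [np.corrcoef(G[:, k], mean_profile)[0, 1] for k in range(len(group))]
        kept.append(group[int(np.argmax(corrs))])
    return cube[:, :, kept]

# Hypothetical usage, with labels == 0 marking unlabeled pixels:
# reduced = select_bands(cube, n_groups=30)
# X, y = reduced.reshape(-1, reduced.shape[-1]), labels.ravel()
# clf = SVC(kernel="rbf").fit(X[y > 0], y[y > 0])
```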

    Spectral-spatial classification of hyperspectral images: three tricks and a new supervised learning setting

    Spectral-spatial classification of hyperspectral images has been the subject of many studies in recent years. In the presence of only very few labeled pixels, this task becomes challenging. In this paper we address the following two research questions: 1) Can a simple neural network with just a single hidden layer achieve state-of-the-art performance in the presence of few labeled pixels? 2) How is the performance of hyperspectral image classification methods affected when using disjoint train and test sets? We give a positive answer to the first question by using three tricks within a very basic shallow Convolutional Neural Network (CNN) architecture: a tailored loss function, and smooth- and label-based data augmentation. The tailored loss function enforces that neighboring wavelengths have similar contributions to the features generated during training. A new label-based technique proposed here favors the selection of pixels in smaller classes, which is beneficial in the presence of very few labeled pixels and skewed class distributions. To address the second question, we introduce a new sampling procedure to generate disjoint train and test sets. The train set is used to obtain the CNN model, which is then applied to pixels in the test set to estimate their labels. We assess the efficacy of the simple neural network method on five publicly available hyperspectral images, on which our method significantly outperforms the considered baselines. Notably, with just 1% of labeled pixels per class, our method achieves an accuracy that ranges from 86.42% (challenging dataset) to 99.52% (easy dataset). Furthermore, we show that the simple neural network method improves over the other baselines in the new, more challenging supervised setting. Our analysis substantiates the highly beneficial effect of using the entire image (i.e., both train and test data) for constructing a model. Comment: Remote Sensing 201
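    A single-hidden-layer spectral CNN and the label-based preference for smaller classes can both be sketched in a few lines. The block below is a hedged illustration, not the paper's architecture, loss, or augmentation: it pairs a shallow 1-D convolution over the spectral axis with inverse-frequency sampling so that pixels from rarer classes are drawn more often.

```python
# Sketch: shallow 1-D CNN over the spectral axis plus class-balanced sampling
# that favors smaller classes (synthetic data; not the paper's exact method).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

bands, n_classes = 200, 9
X = torch.randn(500, 1, bands)                    # 500 labeled pixel spectra
y = torch.randint(0, n_classes, (500,))

counts = torch.bincount(y, minlength=n_classes).float().clamp(min=1)
weights = (1.0 / counts)[y]                       # rarer class -> higher weight
sampler = WeightedRandomSampler(weights, num_samples=len(y), replacement=True)
loader = DataLoader(TensorDataset(X, y), batch_size=64, sampler=sampler)

model = nn.Sequential(                            # one hidden (conv) layer
    nn.Conv1d(1, 32, kernel_size=11, padding=5),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, n_classes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for xb, yb in loader:                             # one epoch, for illustration
    opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    opt.step()
```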