Spectral-spatial classification of hyperspectral images: three tricks and a new supervised learning setting
Spectral-spatial classification of hyperspectral images has been the subject
of many studies in recent years. In the presence of only very few labeled
pixels, this task becomes challenging. In this paper we address the following
two research questions: 1) Can a simple neural network with just a single
hidden layer achieve state-of-the-art performance in the presence of few
labeled pixels? 2) How is the performance of hyperspectral image classification
methods affected when using disjoint train and test sets? We give a positive
answer to the first question by using three tricks within a very basic shallow
Convolutional Neural Network (CNN) architecture: a tailored loss function, and
smooth- and label-based data augmentation. The tailored loss function enforces
that neighboring wavelengths contribute similarly to the features generated
during training. A new label-based technique proposed here favors the
selection of pixels from smaller classes, which is beneficial in the presence
of very few labeled pixels and skewed class distributions. To address the
second question, we introduce a new sampling procedure that generates disjoint
train and test sets. The train set is used to fit the CNN model, which is then
applied to pixels in the test set to estimate their labels. We assess the
efficacy of the simple neural network method on five publicly available
hyperspectral images, on which it significantly outperforms the considered
baselines. Notably, with just 1% of labeled pixels per class, our method
achieves accuracies ranging from 86.42% (challenging dataset) to 99.52% (easy
dataset). Furthermore, we show that the simple neural network method improves
over other baselines in the new, more challenging supervised setting. Our
analysis substantiates the highly beneficial effect of using the entire image
(i.e., both train and test data) for constructing a model.
Comment: Remote Sensing 201
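The abstract does not give formulas for the tailored loss or the label-based selection, so the following is only a minimal sketch of how the two ideas could plausibly be realized: a quadratic penalty on differences between filter weights of neighboring spectral bands, and selection probabilities inversely proportional to class frequency. All function names and the exact penalty form are assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_smoothness_penalty(weights, lam=1e-3):
    """Hypothetical regularizer: penalize differences between filter
    weights of neighboring spectral bands, so that adjacent wavelengths
    contribute similarly to the learned features.
    `weights` has shape (n_filters, n_bands)."""
    diffs = np.diff(weights, axis=1)   # band-to-band weight differences
    return lam * np.sum(diffs ** 2)

def label_based_sampling_probs(labels):
    """Hypothetical label-based selection: give each labeled pixel a
    probability inversely proportional to its class frequency, which
    favors pixels from smaller classes under skewed distributions."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    probs = np.array([1.0 / freq[y] for y in labels], dtype=float)
    return probs / probs.sum()
```

With these probabilities, augmentation pixels would be drawn via something like `np.random.choice(len(labels), p=probs)`, so minority-class pixels are oversampled.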
Vector attribute profiles for hyperspectral image classification
Morphological attribute profiles are among the most prominent spectral-spatial pixel description methods. They are efficient, effective and highly customizable multi-scale tools based on hierarchical representations of a scalar input image. Their application to multivariate images in general, and hyperspectral images in particular, has so far been conducted using the marginal strategy, i.e. by processing each image band (possibly obtained through a dimension reduction technique) independently. In this paper, we investigate the alternative vector strategy, which consists of processing the available image bands simultaneously. The vector strategy is based on a vector ordering relation that leads to the computation of a single max- and min-tree per hyperspectral dataset, from which attribute profiles can then be computed as usual. We explore known vector ordering relations for constructing such max-trees and subsequently vector attribute profiles, and introduce a combination of marginal and vector strategies. We provide an experimental comparison of these approaches in the context of hyperspectral classification on common datasets, where the proposed approach outperforms the widely used marginal strategy.
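The key requirement of the vector strategy is a total ordering on pixel vectors, so that one max-/min-tree can be built over all bands at once. The abstract does not specify which ordering relations are used, so the sketch below shows one classical candidate, a lexicographic order, purely as an illustration; the helper names are made up for this example.

```python
import numpy as np

def lexicographic_less(u, v):
    """Lexicographic comparison of two pixel vectors: compare band
    values left to right; the first differing band decides. Being a
    total order, it lets a single max-/min-tree cover all bands."""
    for a, b in zip(u, v):
        if a != b:
            return a < b
    return False  # vectors are equal

def sort_pixels(pixels):
    """Return indices sorting an (n_pixels, n_bands) array
    lexicographically, band 0 being the most significant."""
    # np.lexsort treats the LAST key as primary, so reverse band order.
    return np.lexsort(np.asarray(pixels).T[::-1])
```

In a real max-tree construction, this ordering would replace the scalar comparison used when flooding or merging components.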
Local Feature-Based Attribute Profiles for Optical Remote Sensing Image Classification
This article introduces an extension of morphological attribute profiles (APs) obtained by extracting their local features. The so-called local feature-based attribute profiles (LFAPs) are expected to provide a better characterization of each AP-filtered pixel (i.e., each AP sample) within its neighborhood, and hence better handle local texture information in the image content. In this work, LFAPs are constructed by extracting simple first-order statistical features of the local patch around each AP sample, such as mean, standard deviation and range. The final feature vector characterizing each image pixel is then formed by combining all local features extracted from the APs of that pixel. In addition, since self-dual attribute profiles (SDAPs) have been shown to outperform APs in recent years, a similar process is applied to form local feature-based SDAPs (LFSDAPs). To evaluate the effectiveness of LFAPs and LFSDAPs, supervised classification using both the Random Forest and the Support Vector Machine classifiers is performed on the very high resolution Reykjavik image as well as the hyperspectral Pavia University data. Experimental results show that LFAPs (resp. LFSDAPs) can considerably improve the classification accuracy of the standard APs (resp. SDAPs) and of the recently proposed histogram-based APs (HAPs).
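The first-order statistics named in the abstract (mean, standard deviation, range) are straightforward to compute per pixel. The sketch below shows one plausible way to do so over a sliding patch on a single attribute-profile band; the function name, the 3x3 default patch, and the edge-padding choice are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

def local_features(ap_band, patch=3):
    """For one attribute-profile band (a 2-D array), compute the mean,
    standard deviation and range of the local patch around every pixel.
    Returns an array of shape (H, W, 3); stacking these over all AP
    bands would give the per-pixel LFAP feature vector."""
    pad = patch // 2
    padded = np.pad(ap_band, pad, mode="edge")  # replicate borders
    h, w = ap_band.shape
    out = np.empty((h, w, 3))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + patch, j:j + patch]
            out[i, j] = (win.mean(), win.std(), win.max() - win.min())
    return out
```

The resulting (H, W, 3) stacks from each AP band would then be concatenated along the last axis and fed to a Random Forest or SVM classifier, mirroring the evaluation protocol described above.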