33 research outputs found
Take a Ramble into Solution Spaces for Classification Problems in Neural Networks
International audience
RVM Classification of Hyperspectral Images Based on Wavelet Kernel Non-negative Matrix Factorization
A novel kernel framework for hyperspectral image classification based on the relevance vector machine (RVM) is presented in this paper. A new feature extraction algorithm based on Mexican hat wavelet kernel non-negative matrix factorization (WKNMF) is proposed for hyperspectral remote sensing images. By exploiting multi-resolution analysis, the nonlinear mapping capability of kernel NMF can be improved. A new classification framework for hyperspectral image data is constructed by combining the novel WKNMF with RVM. Simulation experiments on the HYDICE and AVIRIS data sets both show that the classification accuracy of the proposed method can be improved by over 10% in some cases compared with other methods, and that classification precision in areas with few labeled samples can be improved effectively.
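The two-stage shape of this pipeline (non-negative feature extraction followed by a kernel classifier) can be sketched with off-the-shelf stand-ins. In the sketch below, scikit-learn's plain NMF takes the place of the wavelet-kernel NMF and an RBF-kernel SVM substitutes for the RVM, since neither WKNMF nor RVM ships with scikit-learn; the data is synthetic. This illustrates the framework's structure only, not the paper's exact method.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for a hyperspectral cube flattened to (pixels, bands);
# NMF requires non-negative input, which spectral reflectances satisfy.
X = rng.random((500, 64))
y = rng.integers(0, 3, size=500)

# Stage 1: non-negative matrix factorization as the feature extractor
# (the paper uses a Mexican hat wavelet *kernel* NMF instead).
feats = NMF(n_components=10, init="nndsvda", random_state=0).fit_transform(X)

# Stage 2: kernel classifier on the extracted features
# (the paper uses an RVM; an RBF SVM is a common substitute).
Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
acc = clf.score(Xte, yte)  # accuracy on held-out pixels
```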
Train Support Vector Machine Using Fuzzy C-means Without a Prior Knowledge for Hyperspectral Image Content Classification
In this paper, a new cooperative classification method called auto-train support vector machine (SVM) is proposed. This new method indirectly converts SVM into an unsupervised classification method. The main disadvantage of conventional SVM is that it needs a priori knowledge about the data to train it. To avoid using this knowledge, which is strictly required to train SVM, in this cooperative method the data, that is, hyperspectral images (HSIs), are first clustered using Fuzzy C-means (FCM); then, the created labels are used to train SVM. At this stage, the image content is classified using the auto-trained SVM. Thanks to the fuzzification process, FCM clustering reveals how strongly a pixel is assigned to a class. This information yields two advantages: first, no prior knowledge about the data (known labels) is needed; second, the training data are not selected randomly but according to their degree of membership to a class. The proposed method gives very promising results. The method is tested on two HSIs, Indian Pines and Pavia University. The results obtained show very high classification accuracy and exceed that of existing manually trained methods in the literature.
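The auto-training loop described above (cluster with FCM, then train an SVM only on pixels with a high degree of membership) can be sketched as follows. The minimal fuzzy c-means implementation, the synthetic two-cluster data, and the 0.9 membership threshold are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (membership matrix U, cluster centers)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))             # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Synthetic stand-in for pixel spectra: two well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (200, 5)), rng.normal(2, 0.3, (200, 5))])

U, _ = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)          # cluster labels replace ground truth
confident = U.max(axis=1) > 0.9    # training pixels chosen by membership degree

svm = SVC().fit(X[confident], labels[confident])  # auto-trained SVM
pred = svm.predict(X)              # classify the full image content
```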
Spectral-spatial classification of hyperspectral images: three tricks and a new supervised learning setting
Spectral-spatial classification of hyperspectral images has been the subject
of many studies in recent years. In the presence of only very few labeled
pixels, this task becomes challenging. In this paper we address the following
two research questions: 1) Can a simple neural network with just a single
hidden layer achieve state-of-the-art performance in the presence of few
labeled pixels? 2) How is the performance of hyperspectral image classification
methods affected when using disjoint train and test sets? We give a positive
answer to the first question by using three tricks within a very basic shallow
Convolutional Neural Network (CNN) architecture: a tailored loss function, and
smooth- and label-based data augmentation. The tailored loss function enforces
that neighborhood wavelengths have similar contributions to the features
generated during training. A new label-based technique proposed here favors
the selection of pixels in smaller classes, which is beneficial in the presence of
very few labeled pixels and skewed class distributions. To address the second
question, we introduce a new sampling procedure to generate disjoint train and
test sets. The train set is used to obtain the CNN model, which is then
applied to pixels in the test set to estimate their labels. We assess the
efficacy of the simple neural network method on five publicly available
hyperspectral images. On these images our method significantly outperforms
considered baselines. Notably, with just 1% of labeled pixels per class, on
these datasets our method achieves an accuracy ranging from 86.42%
(challenging dataset) to 99.52% (easy dataset). Furthermore we show that the
simple neural network method improves over other baselines in the new
challenging supervised setting. Our analysis substantiates the highly
beneficial effect of using the entire image (so train and test data) for
constructing a model. Comment: Remote Sensing 201
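One way to realize the label-based selection described above (favoring pixels from smaller classes under skewed class distributions) is to weight each pixel inversely to the size of its class when drawing the training set. The weighting rule below is an assumed illustration of that idea, not necessarily the paper's exact procedure.

```python
import numpy as np

def balanced_sample(labels, n, seed=0):
    """Draw n pixel indices, favoring pixels whose class is rare."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    # Each pixel's weight is 1 / (size of its class), so every class
    # receives equal total probability mass regardless of its size.
    w = 1.0 / counts[np.searchsorted(classes, labels)]
    return rng.choice(len(labels), size=n, replace=False, p=w / w.sum())

# Skewed distribution: 900 / 90 / 10 pixels across three classes.
labels = np.array([0] * 900 + [1] * 90 + [2] * 10)
idx = balanced_sample(labels, n=30)  # rare class 2 is now well represented
```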