Unsupervised spectral sub-feature learning for hyperspectral image classification
Spectral pixel classification is one of the principal techniques used in hyperspectral image (HSI) analysis. In this article, we propose an unsupervised feature learning method for the classification of hyperspectral images. The proposed method learns a dictionary of sub-feature basis representations from the spectral domain, which allows effective use of the correlated spectral data. The learned dictionary is then used to encode convolutional samples from the hyperspectral input pixels into an expanded but sparse feature space. Expanded hyperspectral feature representations enable linear separation between the object classes present in an image. To evaluate the proposed method, we performed experiments on several commonly used HSI data sets acquired at different locations and by different sensors. Our experimental results show that the proposed method outperforms other pixel-wise classification methods that rely on unsupervised feature extraction. Additionally, even though our approach uses no prior knowledge or labelled training data to learn features, it yields advantageous or comparable classification accuracy with respect to recent semi-supervised methods.
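The two-stage pipeline the abstract describes (learn a spectral dictionary, then encode pixels into an expanded but sparse feature space) can be sketched roughly as follows. The spherical k-means dictionary and the triangle-style encoder are common stand-ins, not the paper's exact learner, and the random unit-norm spectra are placeholders for real HSI pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hyperspectral pixels: n_pixels spectra, n_bands bands.
# n_atoms > n_bands so the encoding is an *expanded* representation.
n_pixels, n_bands, n_atoms = 200, 64, 128
X = rng.normal(size=(n_pixels, n_bands))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm spectra

def kmeans_dictionary(X, k, iters=20):
    """Spherical k-means: a simple unsupervised dictionary learner."""
    D = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)       # nearest atom by dot product
        for j in range(k):
            members = X[assign == j]
            if len(members):
                c = members.mean(axis=0)
                D[j] = c / (np.linalg.norm(c) + 1e-12)
    return D

D = kmeans_dictionary(X, n_atoms)

def sparse_encode(X, D):
    """Triangle/soft-threshold activation: zero out below-average similarities."""
    z = X @ D.T
    return np.maximum(0.0, z - z.mean(axis=1, keepdims=True))

F = sparse_encode(X, D)   # (200, 128): expanded, sparse, mostly non-negative
```

The encoded features `F` would then feed a linear classifier, matching the abstract's claim that the expanded representation makes classes linearly separable.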
Joint & Progressive Learning from High-Dimensional Data for Multi-Label Classification
Despite the fact that nonlinear subspace learning techniques (e.g. manifold learning) have been successfully applied to data representation, there is still room for improvement in explainability (explicit mapping), generalization (out-of-samples), and cost-effectiveness (linearization). To this end, a novel linearized subspace learning technique is developed in a joint and progressive way, called the joint and progressive learning strategy (J-Play), with its application to multi-label classification. J-Play learns high-level and semantically meaningful feature representations from high-dimensional data by 1) jointly performing multiple subspace learning and classification to find a latent subspace where samples are expected to be better classified; 2) progressively learning multi-coupled projections to linearly approach the optimal mapping bridging the original space with the most discriminative subspace; and 3) locally embedding manifold structure in each learnable latent subspace. Extensive experiments are performed to demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods. Comment: accepted at ECCV 2018
Optimized kernel minimum noise fraction transformation for hyperspectral image classification
This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction of hyperspectral imagery. The proposed approach is based on the kernel minimum noise fraction (KMNF) transformation, a nonlinear dimensionality reduction method. KMNF can map the original data into a higher-dimensional feature space and provide a small number of high-quality features for classification and other post-processing. Noise estimation is an important component of KMNF. Noise is often estimated by exploiting the strong relationship between adjacent pixels. However, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes the spatial information less reliable for noise estimation. This is the main reason that KMNF generally shows unstable performance in feature extraction for classification. To overcome this problem, this paper exploits more accurate noise estimation to improve KMNF. We propose two new methods that estimate noise more accurately, as well as a framework that improves noise estimation by exploiting both spectral and spatial de-correlation. Experimental results, obtained on a variety of hyperspectral images, indicate that the proposed OKMNF is superior to related dimensionality reduction methods in most cases. Compared to the conventional KMNF, the proposed OKMNF yields significant improvements in overall classification accuracy.
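The adjacent-pixel noise-estimation step that this work improves on can be illustrated in the plain linear (non-kernel) MNF setting. The smooth toy cube, the difference-based noise covariance, and the whitening-based generalized eigensolve below form a minimal sketch of standard MNF, not the proposed OKMNF:

```python
import numpy as np

rng = np.random.default_rng(2)

rows, cols, n_bands = 30, 30, 20
yy, xx = np.mgrid[0:rows, 0:cols]
# Spatially smooth abundance maps mixed with random endmember spectra plus
# white noise: a toy cube in which adjacent pixels really do share signal.
A = np.stack([np.sin(yy / 6.0), np.cos(xx / 7.0),
              (yy * xx) / float(rows * cols), np.ones((rows, cols))], axis=-1)
endmembers = rng.normal(size=(4, n_bands))
cube = A @ endmembers + 0.3 * rng.normal(size=(rows, cols, n_bands))

# Noise covariance from horizontal adjacent-pixel differences: neighbours
# share signal, so differences are noise-dominated. (Mixed pixels break
# this assumption, which is what motivates the improved estimators.)
N = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, n_bands)
Sigma_noise = (N.T @ N) / (2.0 * len(N))      # factor 2: difference of two noises

X = cube.reshape(-1, n_bands)
X = X - X.mean(axis=0)
Sigma_data = (X.T @ X) / len(X)

# Linear MNF: maximize SNR via the generalized eigenproblem
# Sigma_data v = lambda Sigma_noise v, solved by noise whitening.
Linv = np.linalg.inv(np.linalg.cholesky(Sigma_noise))
evals, U = np.linalg.eigh(Linv @ Sigma_data @ Linv.T)   # ascending order
order = np.argsort(evals)[::-1]                          # descending SNR
mnf = X @ (Linv.T @ U[:, order])                         # MNF components
```

The leading MNF components (largest generalized eigenvalues) carry the highest signal-to-noise ratio and would be retained for classification.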
Semisupervised hypergraph discriminant learning for dimensionality reduction of hyperspectral image.
Semisupervised learning is an effective technique to represent the intrinsic features of a hyperspectral image (HSI) and can reduce the cost of obtaining labeled samples. However, traditional semisupervised learning methods fail to consider multiple properties of an HSI, which restricts the discriminant performance of the feature representation. In this article, we introduce the hypergraph into semisupervised learning to reveal the complex multistructures of an HSI, and construct a semisupervised discriminant hypergraph learning (SSDHL) method by designing an intraclass hypergraph and an interclass graph with the labeled samples. SSDHL constructs an unsupervised hypergraph with the unlabeled samples. In addition, a total scatter matrix is used to measure the distribution of the labeled and unlabeled samples. Then, a low-dimensional projection function is constructed to compact the properties of the intraclass hypergraph and the unsupervised hypergraph, and simultaneously separate the characteristics of the interclass graph and the total scatter matrix. Finally, according to the objective function, we obtain the projection matrix and the low-dimensional features. Experiments on three HSI data sets (Botswana, KSC, and PaviaU) show that the proposed method achieves better classification results than several state-of-the-art methods. The results indicate that SSDHL can simultaneously utilize the labeled and unlabeled samples to represent the homogeneous properties and restrain the heterogeneous characteristics of an HSI.
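The intraclass-hypergraph ingredient can be illustrated with the standard normalized hypergraph Laplacian (Zhou et al., 2006): here each class's labeled samples form one hyperedge, and a projection is taken from the smallest eigenvectors of X^T L X. This is a bare-bones stand-in for SSDHL's construction, with toy data in place of an HSI:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy labeled samples: Gaussian noise plus a class-dependent mean shift.
n, d, n_classes = 90, 10, 3
y = rng.integers(0, n_classes, size=n)
X = rng.normal(size=(n, d)) + 3.0 * rng.normal(size=(n_classes, d))[y]

# Intraclass hypergraph: one hyperedge per class containing its samples.
H = np.eye(n_classes)[y]          # incidence matrix (vertices x hyperedges)
w = np.ones(n_classes)            # hyperedge weights
Dv = (H * w).sum(axis=1)          # vertex degrees
De = H.sum(axis=0)                # hyperedge degrees

# Normalized hypergraph Laplacian:
# L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
Lap = np.eye(n) - Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt

# A projection compacting hyperedge (class) structure: directions that
# minimize p^T (X^T L X) p, i.e. the smallest eigenvectors.
evals, evecs = np.linalg.eigh(X.T @ Lap @ X)
Z = X @ evecs[:, :2]              # 2-D embedding of the samples
```

The full SSDHL objective additionally balances this compacting term against the interclass graph and total scatter matrix; only the hypergraph-compacting half is sketched here.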
Investigation of feature extraction algorithms and techniques for hyperspectral images.
Doctor of Philosophy (Computer Engineering). University of KwaZulu-Natal. Durban, 2017.

Hyperspectral images (HSIs) are remote-sensed images that are characterized by very high spatial and spectral dimensions and find applications, for example, in land cover classification, urban planning and management, security and food processing. Unlike conventional three-band RGB images, their high-dimensional data space creates a challenge for traditional image processing techniques, which are usually based on the assumption that sufficient training samples exist to increase the likelihood of high classification accuracy. However, the high cost and difficulty of obtaining ground truth for hyperspectral data sets makes this assumption unrealistic and necessitates alternative methods for their processing. Several techniques have been developed to exploit the rich spectral and spatial information in HSIs. Specifically, feature extraction (FE) techniques are introduced in the processing of HSIs as a necessary step before classification. They aim to transform the high-dimensional data of the HSI into data of a lower dimension while retaining as much spatial and/or spectral information as possible. In this research, we develop semi-supervised FE techniques which combine features of supervised and unsupervised techniques into a single framework for the processing of HSIs. Firstly, we developed a feature extraction algorithm known as Semi-Supervised Linear Embedding (SSLE) for the extraction of features in HSIs. The algorithm combines supervised Linear Discriminant Analysis (LDA) and unsupervised Local Linear Embedding (LLE) to enhance class discrimination while also preserving the properties of classes of interest. The technique was developed based on the fact that LDA extracts features from HSIs by discriminating between classes of interest, but can only extract C − 1 features when there are C classes in the image. Experiments show that the SSLE algorithm overcomes this limitation of LDA and extracts a number of features equivalent to the number of classes in HSIs. Secondly, a graphical manifold dimension reduction (DR) algorithm known as Graph Clustered Discriminant Analysis (GCDA) is developed. The algorithm dynamically selects samples from the pool of available unlabeled samples to complement the few available labeled samples in HSIs. The selection is achieved by entwining K-means clustering with a semi-supervised manifold discriminant analysis. Using two HSI data sets, experimental results show that GCDA extracts a number of features equivalent to the number of classes, with high classification accuracy when compared with other state-of-the-art techniques. Furthermore, we develop a window-based partitioning approach to preserve the spatial properties of HSIs while their features are being extracted. In this approach, the HSI is partitioned along its spatial dimension into n windows and the covariance matrix of each window is computed. The covariance matrices of the windows are then merged into a single matrix using the Kalman filtering approach, so that the resulting covariance matrix may be used for dimension reduction. Experiments show that the windowing approach achieves high classification accuracy and preserves the spatial properties of HSIs. For the proposed feature extraction techniques, Support Vector Machine (SVM) and Neural Network (NN) classifiers are employed and their performances compared. The performances of all proposed FE techniques have also been shown to outperform other state-of-the-art approaches.
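The window-based covariance idea at the end of the abstract can be sketched with a simple size-weighted pooling of per-window covariances, a plain stand-in for the thesis's Kalman-filtering merge, followed by an eigendecomposition of the merged matrix for dimension reduction:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy HSI flattened to pixels x bands; partition spatially into n_windows.
n_pixels, n_bands, n_windows = 400, 16, 4
X = rng.normal(size=(n_pixels, n_bands))
windows = np.array_split(X, n_windows)        # spatial partition (by rows here)

# Per-window statistics.
covs, counts, means = [], [], []
for win in windows:
    means.append(win.mean(axis=0))
    covs.append(np.cov(win, rowvar=False))
    counts.append(len(win))

# Merge into one covariance by size-weighted pooling, accounting for the
# spread of the window means -- a simple stand-in for the Kalman-filtering
# merge; the pooled matrix then drives dimension reduction.
total = sum(counts)
mu = sum(c * m for c, m in zip(counts, means)) / total
merged = sum(
    c * (S + np.outer(m - mu, m - mu)) for c, S, m in zip(counts, covs, means)
) / total

# PCA-style dimension reduction from the merged covariance.
evals, evecs = np.linalg.eigh(merged)
P = evecs[:, ::-1][:, :5]                     # top-5 principal directions
Z = X @ P
```

For this row-wise partition the pooled matrix closely matches the covariance of the whole image, which is what lets the per-window computation stand in for a global one.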