Advances in Hyperspectral Image Classification Methods for Vegetation and Agricultural Cropland Studies
Hyperspectral data are becoming more widely available via sensors on airborne and unmanned aerial vehicle (UAV) platforms, as well as proximal platforms. While space-based hyperspectral data continue to be limited in availability, multiple spaceborne Earth-observing missions on traditional platforms are scheduled for launch, and companies are experimenting with small satellites for constellations to observe the Earth, as well as for planetary missions. Land cover mapping via classification is one of the most important applications of hyperspectral remote sensing and will increase in significance as time series of imagery are more readily available. However, while the narrow bands of hyperspectral data provide new opportunities for chemistry-based modeling and mapping, challenges remain. Hyperspectral data are high dimensional, and many bands are highly correlated or irrelevant for a given classification problem. For supervised classification methods, the quantity of training data is typically limited relative to the dimension of the input space. The resulting Hughes phenomenon, often referred to as the curse of dimensionality, increases potential for unstable parameter estimates, overfitting, and poor generalization of classifiers. This is particularly problematic for parametric approaches such as Gaussian maximum likelihood-based classifiers that have been the backbone of pixel-based multispectral classification methods. This issue has motivated investigation of alternatives, including regularization of the class covariance matrices, ensembles of weak classifiers, development of feature selection and extraction methods, adoption of nonparametric classifiers, and exploration of methods to exploit unlabeled samples via semi-supervised and active learning. Data sets are also quite large, motivating computationally efficient algorithms and implementations.
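The covariance regularization mentioned above can be sketched concretely. The following is a minimal illustration (not taken from the chapter) of a Gaussian maximum-likelihood classifier with shrinkage toward a scaled identity, which keeps each class covariance estimate well-conditioned when training samples are scarce relative to the number of bands; all names and the shrinkage scheme are illustrative assumptions.

```python
import numpy as np

def regularized_gaussian_ml(X_train, y_train, X_test, alpha=0.1):
    """Gaussian maximum-likelihood classifier with covariance shrinkage.

    Shrinking each class covariance toward a scaled identity stabilizes the
    estimate when training samples are scarce relative to the number of
    spectral bands (the Hughes phenomenon). `alpha` is the shrinkage weight.
    """
    classes = np.unique(y_train)
    d = X_train.shape[1]
    params = {}
    for c in classes:
        Xc = X_train[y_train == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        # Shrink toward a scaled identity so the matrix stays invertible
        # even when the sample covariance is rank-deficient.
        cov = (1 - alpha) * cov + alpha * (np.trace(cov) / d) * np.eye(d)
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    scores = []
    for c in classes:
        mu, inv_cov, logdet = params[c]
        diff = X_test - mu
        # Gaussian log-likelihood up to an additive constant.
        mahal = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        scores.append(-0.5 * (mahal + logdet))
    return classes[np.argmax(np.array(scores), axis=0)]
```

With, say, 20 training pixels per class in 50 bands, the unregularized sample covariance is singular; the shrinkage term is what makes the inverse above exist at all.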
This chapter provides an overview of the recent advances in classification methods for mapping vegetation using hyperspectral data. Three data sets that are used in the hyperspectral classification literature (e.g., Botswana Hyperion satellite data and AVIRIS airborne data over both Kennedy Space Center and Indian Pines) are described in Section 3.2 and used to illustrate methods described in the chapter. An additional high-resolution hyperspectral data set acquired by a SpecTIR sensor on an airborne platform over the Indian Pines area is included to exemplify the use of new deep learning approaches, and a multiplatform example of airborne hyperspectral data is provided to demonstrate transfer learning in hyperspectral image classification. Classical approaches for supervised and unsupervised feature selection and extraction are reviewed in Section 3.3. In particular, nonlinearities exhibited in hyperspectral imagery have motivated development of nonlinear feature extraction methods in manifold learning, which are outlined in Section 3.3.1.4. Spatial context is also important in classification of both natural vegetation with complex textural patterns and large agricultural fields with significant local variability within fields. Approaches to exploit spatial features at both the pixel level (e.g., co-occurrence-based texture and extended morphological attribute profiles [EMAPs]) and integration of segmentation approaches (e.g., HSeg) are discussed in this context in Section 3.3.2. Recently, classification methods that leverage nonparametric methods originating in the machine learning community have grown in popularity. An overview of both widely used and newly emerging approaches, including support vector machines (SVMs), Gaussian mixture models, and deep learning based on convolutional neural networks is provided in Section 3.4.
Strategies to exploit unlabeled samples, including active learning and metric learning, which combine feature extraction and augmentation of the pool of training samples in an active learning framework, are outlined in Section 3.5. Integration of image segmentation with classification to accommodate spatial coherence typically observed in vegetation is also explored, including as an integrated active learning system. Exploitation of multisensor strategies for augmenting the pool of training samples is investigated via a transfer learning framework in Section 3.5.1.2. Finally, we look to the future, considering opportunities soon to be provided by new paradigms, as hyperspectral sensing is becoming common at multiple scales from ground-based and airborne autonomous vehicles to manned aircraft and space-based platforms.
A REVIEW ON MULTIPLE-FEATURE-BASED ADAPTIVE SPARSE REPRESENTATION (MFASR) AND OTHER CLASSIFICATION TYPES
A technique called multiple-feature-based adaptive sparse representation (MFASR) has been demonstrated for hyperspectral image (HSI) classification. The method proceeds in four steps. First, four different features are extracted to capture the spectral and spatial information of the original hyperspectral image. Second, a shape-adaptive (SA) spatial region is obtained around each pixel. Third, a sparse representation algorithm is applied to each shape-adaptive region to obtain a matrix of sparse coefficients over the multiple features. Finally, the class label of each test pixel is determined from the obtained coefficients. MFASR achieves much better classification results than other classifiers in both quantitative and qualitative terms, because it exploits the strong correlations among the different extracted features and makes effective use of adaptive sparse representation. Very high classification performance was thus achieved with the MFASR technique.
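As a rough illustration of the sparse-representation step (not the authors' exact MFASR pipeline, which operates on multiple features over shape-adaptive regions), the sketch below classifies a pixel by sparse coding over a labeled dictionary of training spectra and assigning the class whose atoms yield the smallest reconstruction residual; all names and parameters are illustrative.

```python
import numpy as np

def omp(D, x, n_nonzero=5):
    """Orthogonal matching pursuit: greedy sparse coding of x over dictionary D."""
    residual = x.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    full = np.zeros(D.shape[1])
    full[idx] = coef
    return full

def src_classify(D, labels, x, n_nonzero=5):
    """Assign x to the class whose atoms give the smallest reconstruction residual."""
    alpha = omp(D, x, n_nonzero)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)
```

Here `D` holds one training spectrum per column and `labels` gives each column's class; the class-wise residual test is the standard sparse-representation classification rule.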
Graph Embedding via High Dimensional Model Representation for Hyperspectral Images
Learning the manifold structure of remote sensing images is of paramount
relevance for modeling and understanding processes, as well as to encapsulate
the high dimensionality in a reduced set of informative features for subsequent
classification, regression, or unmixing. Manifold learning methods have shown
excellent performance to deal with hyperspectral image (HSI) analysis but,
unless specifically designed, they cannot provide an explicit embedding map
readily applicable to out-of-sample data. A common assumption to deal with the
problem is that the transformation between the high-dimensional input space and
the (typically low) latent space is linear. This is a particularly strong
assumption, especially when dealing with hyperspectral images due to the
well-known nonlinear nature of the data. To address this problem, a manifold
learning method based on High Dimensional Model Representation (HDMR) is
proposed, which provides a nonlinear embedding function to project
out-of-sample data into the latent space. The proposed method is compared to
manifold learning methods along with their linear counterparts and achieves
promising classification accuracy on a representative
set of hyperspectral images.
Comment: This is an accepted version of work to be published in the IEEE Transactions on Geoscience and Remote Sensing. 11 pages.
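The out-of-sample problem described in this abstract can be made concrete with kernel PCA, a classical nonlinear embedding that does admit an explicit out-of-sample map via kernel evaluations against the training set. This is a generic illustration of the problem setting, not the HDMR method itself; function names and parameters are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Pairwise RBF kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_fit(X, n_components=2, gamma=0.5):
    """Fit kernel PCA on training samples X; return a reusable model tuple."""
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(vals[idx])  # normalized eigenvectors
    return X, K, alphas, gamma

def kpca_transform(model, X_new):
    """Explicit out-of-sample map: project new samples into the latent space."""
    X, K, alphas, gamma = model
    n = len(X)
    Kn = rbf_kernel(X_new, X, gamma)
    one_n = np.full((n, n), 1.0 / n)
    one_m = np.full((len(X_new), n), 1.0 / n)
    # Center the test kernel consistently with the training kernel.
    Knc = Kn - one_m @ K - Kn @ one_n + one_m @ K @ one_n
    return Knc @ alphas
```

Methods such as LLE or Isomap lack this kind of closed-form `transform` for unseen pixels, which is exactly the gap an explicit embedding function like HDMR's is meant to fill.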
Investigation of feature extraction algorithms and techniques for hyperspectral images.
Doctor of Philosophy (Computer Engineering). University of KwaZulu-Natal. Durban, 2017.
Hyperspectral images (HSIs) are remote-sensed images that are characterized
by very high spatial and spectral dimensions and find applications, for example,
in land cover classification, urban planning and management, security and food
processing. Unlike conventional three-band RGB images, their high
dimensional data space creates a challenge for traditional image processing
techniques which are usually based on the assumption that there exists
sufficient training samples in order to increase the likelihood of high
classification accuracy. However, the high cost and difficulty of obtaining
ground truth of hyperspectral data sets makes this assumption unrealistic and
necessitates the introduction of alternative methods for their processing.
Several techniques have been developed in the exploration of the rich spectral
and spatial information in HSIs. Specifically, feature extraction (FE)
techniques are introduced in the processing of HSIs as a necessary step before
classification. They are aimed at transforming the high dimensional data of the
HSI into one of a lower dimension while retaining as much spatial and/or
spectral information as possible. In this research, we develop semi-supervised
FE techniques which combine features of supervised and unsupervised
techniques into a single framework for the processing of HSIs. Firstly, we
developed a feature extraction algorithm known as Semi-Supervised Linear
Embedding (SSLE) for the extraction of features in HSI. The algorithm
combines supervised Linear Discriminant Analysis (LDA) and unsupervised
Local Linear Embedding (LLE) to enhance class discrimination while also
preserving the properties of classes of interest. The technique was developed
based on the fact that LDA extracts features from HSIs by discriminating
between classes of interest, but it can only extract C − 1 features when there
are C classes in the image. Experiments show that the SSLE algorithm
overcomes this limitation of LDA and extracts features equal to
the number of classes in HSIs. Secondly, a graphical manifold dimension
reduction (DR) algorithm known as Graph Clustered Discriminant Analysis
(GCDA) is developed. The algorithm is developed to dynamically select labeled
samples from the pool of available unlabeled samples in order to complement
the few available label samples in HSIs. The selection is achieved by entwining
K-means clustering with a semi-supervised manifold discriminant analysis.
Using two HSI data sets, experimental results show that GCDA extracts
features that are equivalent to the number of classes with high classification
accuracy when compared with other state-of-the-art techniques. Furthermore,
we develop a window-based partitioning approach to preserve the spatial
properties of HSIs when their features are being extracted. In this approach,
the HSI is partitioned along its spatial dimension into n windows and the
covariance matrices of each window are computed. The covariance matrices of
the windows are then merged into a single matrix using the Kalman
filtering approach so that the resulting covariance matrix may be used for
dimension reduction. Experiments show that the windowing approach achieves
high classification accuracy and preserves the spatial properties of HSIs. For
the proposed feature extraction techniques, Support Vector Machine (SVM)
and Neural Networks (NN) classification techniques are employed, and the
performances of the two classifiers are compared. The performances of all
proposed FE techniques have also been shown to outperform other
state-of-the-art approaches.
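The window-based merging idea above can be sketched with a simple sample-weighted pooling of per-window statistics. This is a simplified stand-in for the thesis's Kalman-filter merge, shown only to illustrate that per-window covariances plus means suffice to recover a single covariance for dimension reduction; names and window shapes are illustrative.

```python
import numpy as np

def merge_window_covariances(windows):
    """Pool per-window means and covariances into one covariance matrix.

    Each spatial window contributes its (biased) covariance and mean; the
    standard pooled-covariance formula combines them, weighted by window size.
    """
    stats = []
    for W in windows:  # W: (pixels, bands) array for one spatial window
        n = len(W)
        stats.append((n, W.mean(axis=0), np.cov(W, rowvar=False, bias=True)))
    N = sum(n for n, _, _ in stats)
    mu = sum(n * m for n, m, _ in stats) / N
    # Within-window scatter plus between-window mean scatter.
    cov = sum(n * (C + np.outer(m - mu, m - mu)) for n, m, C in stats) / N
    return cov
```

For biased (divide-by-n) covariances this pooling is exact: the merged matrix equals the covariance computed over all pixels at once, so the subsequent dimension reduction sees no loss from the partitioning itself.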
Spectral-spatial classification of hyperspectral images: three tricks and a new supervised learning setting
Spectral-spatial classification of hyperspectral images has been the subject
of many studies in recent years. In the presence of only very few labeled
pixels, this task becomes challenging. In this paper we address the following
two research questions: 1) Can a simple neural network with just a single
hidden layer achieve state-of-the-art performance in the presence of few
labeled pixels? 2) How is the performance of hyperspectral image classification
methods affected when using disjoint train and test sets? We give a positive
answer to the first question by using three tricks within a very basic shallow
Convolutional Neural Network (CNN) architecture: a tailored loss function, and
smooth- and label-based data augmentation. The tailored loss function enforces
that neighborhood wavelengths have similar contributions to the features
generated during training. A new label-based technique here proposed favors
selection of pixels in smaller classes, which is beneficial in the presence of
very few labeled pixels and skewed class distributions. To address the second
question, we introduce a new sampling procedure to generate disjoint train and
test sets. The train set is used to obtain the CNN model, which is then
applied to pixels in the test set to estimate their labels. We assess the
efficacy of the simple neural network method on five publicly available
hyperspectral images. On these images our method significantly outperforms
considered baselines. Notably, with just 1% of labeled pixels per class, on
these datasets our method achieves an accuracy that goes from 86.42%
(challenging dataset) to 99.52% (easy dataset). Furthermore we show that the
simple neural network method improves over other baselines in the new
challenging supervised setting. Our analysis substantiates the highly
beneficial effect of using the entire image (so train and test data) for
constructing a model.
Comment: Remote Sensing 201
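The label-based augmentation that favors smaller classes could, for instance, be realized as inverse-frequency sampling weights when drawing training pixels; this is a guess at the mechanism for illustration, not the authors' exact scheme.

```python
import numpy as np

def class_balanced_sampling_probs(labels):
    """Per-pixel sampling probabilities inversely proportional to class size.

    Under skewed class distributions, drawing training pixels with these
    probabilities gives every class the same total chance of being sampled,
    so small classes are favored relative to uniform pixel sampling.
    """
    classes, counts = np.unique(labels, return_counts=True)
    inv = {c: 1.0 / n for c, n in zip(classes, counts)}
    w = np.array([inv[c] for c in labels])
    return w / w.sum()
```

A batch could then be drawn with `np.random.choice(len(labels), size=batch_size, p=class_balanced_sampling_probs(labels))`, which is one simple way to bias selection toward pixels in smaller classes.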