318 research outputs found

    Unsupervised spectral sub-feature learning for hyperspectral image classification

    Get PDF
    Spectral pixel classification is one of the principal techniques used in hyperspectral image (HSI) analysis. In this article, we propose an unsupervised feature learning method for the classification of hyperspectral images. The proposed method learns a dictionary of sub-feature basis representations from the spectral domain, which allows effective use of the correlated spectral data. The learned dictionary is then used to encode convolutional samples from the hyperspectral input pixels into an expanded but sparse feature space. Expanded hyperspectral feature representations enable linear separation between the object classes present in an image. To evaluate the proposed method, we performed experiments on several commonly used HSI data sets acquired at different locations and by different sensors. Our experimental results show that the proposed method outperforms other pixel-wise classification methods that rely on unsupervised feature extraction. Additionally, even though our approach does not use any prior knowledge or labelled training data to learn features, it yields advantageous or comparable results in terms of classification accuracy with respect to recent semi-supervised methods.
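    To make the general pipeline concrete, the sketch below shows one common way to realize unsupervised dictionary learning plus sparse coding for pixel-wise classification, using scikit-learn and placeholder data. It is a minimal illustration of the idea, not the authors' implementation; the dictionary size, sparsity level and linear classifier are arbitrary assumptions.

```python
# Minimal sketch (NOT the authors' implementation): learn a dictionary from
# spectral pixels without labels, encode pixels sparsely against it, then
# train a simple linear classifier on the expanded codes.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import LinearSVC

# Placeholder data standing in for real HSI pixels: (n_pixels, n_bands) spectra
rng = np.random.default_rng(0)
X = rng.random((2000, 103))
y = rng.integers(0, 9, size=2000)      # placeholder labels for evaluation only

# 1) Unsupervised dictionary of spectral "sub-feature" atoms (overcomplete)
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
D = dico.fit(X).components_

# 2) Sparse encoding expands each pixel into a higher-dimensional, mostly-zero code
codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=10)

# 3) Classes that overlap in the raw spectral space are often close to linearly
#    separable in the expanded code space, so a linear classifier suffices
clf = LinearSVC().fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```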

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    Full text link
    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the different features that could boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. To address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
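    One common way to obtain such a joint low-dimensional projection that also performs feature selection is an l2,1-regularized regression onto a low-dimensional target, which drives whole rows of the projection matrix toward zero. The toy sketch below illustrates that generic idea with NumPy and synthetic data; it is not the authors' algorithm, and the target Z, regularization weight and iterative-reweighting solver are all assumptions made for illustration.

```python
# Toy sketch (not the authors' algorithm): project concatenated spectral-spatial
# features into a low-dimensional space with a row-sparse (l2,1-regularized)
# projection, so only the most significant original features contribute.
import numpy as np

def l21_projection(X, Z, lam=0.1, n_iter=50):
    """Approximately solve min_W ||X W - Z||_F^2 + lam * ||W||_{2,1}
    by iterative reweighting.

    X: (n_samples, n_features) stacked spectral + spatial features
    Z: (n_samples, d) target low-dimensional representation
    Rows of W that shrink toward zero correspond to de-selected features.
    """
    W = np.linalg.lstsq(X, Z, rcond=None)[0]          # least-squares initialization
    for _ in range(n_iter):
        row_norms = np.linalg.norm(W, axis=1) + 1e-8  # avoid division by zero
        D = np.diag(1.0 / (2.0 * row_norms))          # reweighting matrix
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Z)
    return W

# Example with placeholder data: 50 spectral + 30 spatial features, 5-D subspace
rng = np.random.default_rng(0)
X = rng.random((500, 80))
Z = rng.random((500, 5))   # in practice a meaningful target, e.g. PCA scores
W = l21_projection(X, Z)
selected = np.argsort(-np.linalg.norm(W, axis=1))[:10]
print("most influential original feature indices:", selected)
```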

    KCRC-LCD: Discriminative Kernel Collaborative Representation with Locality Constrained Dictionary for Visual Categorization

    Full text link
    We consider the image classification problem via kernel collaborative representation classification with a locality constrained dictionary (KCRC-LCD). Specifically, we propose a kernel collaborative representation classification (KCRC) approach in which the kernel method is used to improve the discrimination ability of collaborative representation classification (CRC). We then measure the similarities between the query and the atoms in the global dictionary in order to construct a locality constrained dictionary (LCD) for KCRC. In addition, we discuss several similarity measures for the LCD and further present a simple yet effective unified similarity measure whose superiority is validated in experiments. There are several appealing aspects associated with the LCD. First, the LCD can be nicely incorporated under the framework of KCRC: the LCD similarity measure can be kernelized under KCRC, which theoretically links CRC and LCD under the kernel method. Second, KCRC-LCD becomes more scalable with respect to both the training set size and the feature dimension. An example shows that KCRC is able to perfectly classify data with a certain distribution on which conventional CRC fails completely. Comprehensive experiments on many public datasets also show that KCRC-LCD is a robust discriminative classifier with both excellent performance and good scalability, comparable to or outperforming many other state-of-the-art approaches.
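    The sketch below illustrates a generic kernel collaborative representation classifier: collaborative codes are obtained from a ridge-regularized kernel system, and each query is assigned to the class whose atoms give the smallest reconstruction residual in feature space. The RBF kernel, regularization value and placeholder data are generic choices for illustration, and the sketch does not include the paper's locality constrained dictionary.

```python
# Generic KCRC-style sketch (not necessarily identical to the authors' formulation).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kcrc_predict(X_train, y_train, X_query, lam=1e-2, gamma=0.5):
    K = rbf_kernel(X_train, X_train, gamma=gamma)        # Gram matrix of atoms
    k_q = rbf_kernel(X_train, X_query, gamma=gamma)      # kernel: atoms vs queries
    # Collaborative codes over ALL atoms, one column per query
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), k_q)
    classes = np.unique(y_train)
    preds = []
    for j in range(X_query.shape[0]):
        residuals = []
        for c in classes:
            idx = np.where(y_train == c)[0]
            a_c = alpha[idx, j]
            # Squared feature-space distance between the query and its class-wise reconstruction
            r = (rbf_kernel(X_query[j:j + 1], X_query[j:j + 1], gamma=gamma)[0, 0]
                 - 2.0 * k_q[idx, j] @ a_c
                 + a_c @ K[np.ix_(idx, idx)] @ a_c)
            residuals.append(r)
        preds.append(classes[np.argmin(residuals)])
    return np.array(preds)

# Example usage with placeholder data
rng = np.random.default_rng(0)
X_tr, y_tr = rng.random((300, 50)), rng.integers(0, 5, 300)
X_te = rng.random((10, 50))
print(kcrc_predict(X_tr, y_tr, X_te))
```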

    Feature extraction and fusion for classification of remote sensing imagery

    Get PDF

    Optimized kernel minimum noise fraction transformation for hyperspectral image classification

    Get PDF
    This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction from hyperspectral imagery. The proposed approach is based on the kernel minimum noise fraction (KMNF) transformation, which is a nonlinear dimensionality reduction method. KMNF can map the original data into a higher-dimensional feature space and provide a small number of quality features for classification and other post-processing. Noise estimation is an important component of KMNF. Noise is often estimated based on a strong relationship between adjacent pixels. However, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes the spatial information less reliable for noise estimation. This is the main reason that KMNF generally shows unstable performance in feature extraction for classification. To overcome this problem, this paper exploits more accurate noise estimation to improve KMNF. We propose two new methods that estimate noise more accurately. Moreover, we also propose a framework to improve noise estimation in which both spectral and spatial de-correlation are exploited. Experimental results, obtained using a variety of hyperspectral images, indicate that the proposed OKMNF is superior to other related dimensionality reduction methods in most cases. Compared to the conventional KMNF, the proposed OKMNF achieves significant improvements in overall classification accuracy.
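    For context, the sketch below shows the classical linear MNF skeleton, in which noise is estimated from shift differences between adjacent pixels; this adjacent-pixel estimate is precisely the step the paper argues is unreliable for mixed pixels, and which OKMNF replaces (while also moving to a kernel feature space). The code is a baseline illustration only, with placeholder data and an arbitrary ridge term.

```python
# Classical linear MNF skeleton (baseline only; OKMNF improves the noise
# estimation and operates in a kernel space).
import numpy as np

def mnf(cube):
    """cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)

    # Classical noise estimate: horizontal shift differences between adjacent
    # pixels. This is the step that becomes unreliable for mixed-pixel imagery.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    Cn = np.cov(noise, rowvar=False) / 2.0 + 1e-6 * np.eye(bands)  # small ridge
    Cx = np.cov(X, rowvar=False)

    # Generalized eigenproblem Cx v = lambda Cn v, components sorted by decreasing SNR
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Cn, Cx))
    order = np.argsort(eigvals.real)[::-1]
    V = eigvecs[:, order].real
    return (X @ V).reshape(rows, cols, bands)   # MNF-transformed components

# Example usage with a placeholder cube
components = mnf(np.random.default_rng(0).random((50, 50, 40)))
print(components.shape)
```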

    Investigation of feature extraction algorithms and techniques for hyperspectral images.

    Get PDF
    Doctor of Philosophy (Computer Engineering). University of KwaZulu-Natal, Durban, 2017. Hyperspectral images (HSIs) are remote-sensed images that are characterized by very high spatial and spectral dimensions and find applications, for example, in land cover classification, urban planning and management, security and food processing. Unlike conventional three-band RGB images, their high-dimensional data space creates a challenge for traditional image processing techniques, which are usually based on the assumption that there exist sufficient training samples to increase the likelihood of high classification accuracy. However, the high cost and difficulty of obtaining ground truth for hyperspectral data sets makes this assumption unrealistic and necessitates the introduction of alternative methods for their processing. Several techniques have been developed to explore the rich spectral and spatial information in HSIs. Specifically, feature extraction (FE) techniques are introduced in the processing of HSIs as a necessary step before classification. They are aimed at transforming the high-dimensional data of the HSI into data of a lower dimension while retaining as much spatial and/or spectral information as possible. In this research, we develop semi-supervised FE techniques which combine features of supervised and unsupervised techniques into a single framework for the processing of HSIs. Firstly, we developed a feature extraction algorithm known as Semi-Supervised Linear Embedding (SSLE) for the extraction of features in HSIs. The algorithm combines supervised Linear Discriminant Analysis (LDA) and unsupervised Local Linear Embedding (LLE) to enhance class discrimination while also preserving the properties of classes of interest. The technique was developed based on the fact that LDA, which extracts features from HSIs by discriminating between classes of interest, can only extract C-1 features when there are C classes in the image. Experiments show that the SSLE algorithm overcomes this limitation of LDA and extracts features that are equivalent to the number of classes in HSIs. Secondly, a graphical manifold dimension reduction (DR) algorithm known as Graph Clustered Discriminant Analysis (GCDA) is developed. The algorithm dynamically selects labeled samples from the pool of available unlabeled samples in order to complement the few available labeled samples in HSIs. The selection is achieved by entwining K-means clustering with a semi-supervised manifold discriminant analysis. Using two HSI data sets, experimental results show that GCDA extracts features that are equivalent to the number of classes, with high classification accuracy when compared with other state-of-the-art techniques. Furthermore, we develop a window-based partitioning approach to preserve the spatial properties of HSIs while their features are being extracted. In this approach, the HSI is partitioned along its spatial dimension into n windows and the covariance matrix of each window is computed. The covariance matrices of the windows are then merged into a single matrix using the Kalman filtering approach, so that the resulting covariance matrix may be used for dimension reduction. Experiments show that the windowing approach achieves high classification accuracy and preserves the spatial properties of HSIs.
For the proposed feature extraction techniques, Support Vector Machine (SVM) and Neural Network (NN) classifiers are employed and their performances are compared. All of the proposed FE techniques have also been shown to outperform other state-of-the-art approaches.
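    As a highly simplified illustration of the semi-supervised idea behind SSLE (not the thesis algorithm itself), the sketch below fuses a supervised LDA embedding computed from a few labelled pixels with an unsupervised LLE embedding computed from all pixels, using scikit-learn and placeholder data. The numbers of classes, neighbours, components and labelled samples are arbitrary assumptions, and the fusion step is a naive stacking rather than the embedding learned in the thesis.

```python
# Simplified sketch of combining supervised LDA with unsupervised LLE
# (not the SSLE algorithm from the thesis).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X_all = rng.random((1000, 103))                     # placeholder: all HSI pixel spectra
labelled = rng.choice(1000, size=100, replace=False)
y_lab = rng.integers(0, 9, size=100)                # placeholder labels for a small subset

# Supervised part: LDA yields at most C-1 discriminant directions for C classes
lda = LinearDiscriminantAnalysis().fit(X_all[labelled], y_lab)
F_lda = lda.transform(X_all)

# Unsupervised part: LLE preserves the local manifold structure of all pixels
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=8, random_state=0)
F_lle = lle.fit_transform(X_all)

# Naive fusion: stack both feature sets before feeding a classifier
F = np.hstack([F_lda, F_lle])
print(F.shape)
```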

    Differential response to acute low dose radiation in primary and immortalized endothelial cells

    Get PDF
    Purpose: The low dose radiation response of primary human umbilical vein endothelial cells (HUVEC) and its immortalized derivative, the EA.hy926 cell line, was evaluated and compared. Material and methods: DNA damage and repair, cell cycle progression, apoptosis and cellular morphology in HUVEC and EA.hy926 were evaluated after exposure to low (0.05-0.5 Gy) and high doses (2 and 5 Gy) of acute X-rays. Results: Subtle but significant increases in DNA double-strand breaks (DSB) were observed in HUVEC and EA.hy926 30 min after low dose irradiation (0.05 Gy). Compared to high dose irradiation (2 Gy), relatively more DSB/Gy were formed after low dose irradiation. Also, we observed a dose-dependent increase in apoptotic cells, down to 0.5 Gy in HUVEC and 0.1 Gy in EA.hy926 cells. Furthermore, radiation induced significantly more apoptosis in EA.hy926 compared to HUVEC. Conclusions: We demonstrated for the first time that acute low doses of X-rays induce DNA damage and apoptosis in endothelial cells. Our results point to a non-linear dose-response relationship for DSB formation in endothelial cells. Furthermore, the observed difference in radiation-induced apoptosis points to a higher radiosensitivity of EA.hy926 compared to HUVEC, which should be taken into account when using these cells as models for studying the endothelium radiation response.