    Extended Averaged Learning Subspace Method for Hyperspectral Data Classification

    Averaged learning subspace methods (ALSM) have the advantage of being easy to implement and appear to perform well in the classification of hyperspectral images. However, some open and challenging problems remain which, if addressed, could further improve their classification accuracy. We carried out experiments using two kinds of improved subspace methods (dynamic and fixed subspace methods) in conjunction with the [0,1] and [-1,+1] normalization methods. We used several performance indicators to support our experimental studies: classification accuracy, computation time, and the stability of the parameter settings. Results are presented for the AVIRIS Indian Pines data set. Experimental analysis showed that the fixed subspace method combined with [0,1] normalization yielded higher classification accuracy than the other subspace methods. Moreover, ALSMs are easy to apply: only two parameters need to be set, and they can be applied directly to hyperspectral data. In addition, they can completely identify the training samples in a finite number of iterations.
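
    As a rough sketch of the two normalization schemes this abstract compares (not the authors' code; the per-band min-max interpretation and all variable names are assumptions), the [0,1] and [-1,+1] rescalings might look like this:

```python
# Hedged sketch: per-band min-max normalization of a hyperspectral
# matrix X of shape (n_pixels, n_bands). Applying the scaling per
# band is an assumption, not a detail taken from the paper.
import numpy as np

def normalize_01(X):
    """Rescale each band (column) to the range [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + 1e-12)  # epsilon guards constant bands

def normalize_pm1(X):
    """Rescale each band (column) to the range [-1, +1]."""
    return 2.0 * normalize_01(X) - 1.0

# Synthetic example with AVIRIS-like dimensions (220 spectral bands).
X = np.random.default_rng(0).uniform(0.0, 4000.0, size=(100, 220))
X01 = normalize_01(X)   # the variant the paper found most accurate
Xpm = normalize_pm1(X)
```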

    Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    Feature extraction and dimensionality reduction are important tasks in many fields of science dealing with signal processing and analysis. The relevance of these techniques is increasing as current sensory devices are developed with ever higher resolution and problems involving multimodal data sources become more common. A plethora of feature extraction methods are available in the literature, collectively grouped under the field of Multivariate Analysis (MVA). This paper provides a uniform treatment of several such methods: Principal Component Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis (CCA), and Orthonormalized PLS (OPLS), as well as their non-linear extensions derived by means of the theory of reproducing kernel Hilbert spaces. We also review their connections to other methods for classification and statistical dependence estimation, and introduce some recent developments that address the extreme cases of large-scale and small-sample problems. To illustrate the wide applicability of these methods in both classification and regression problems, we analyze their performance on a benchmark of publicly available data sets, paying special attention to two real applications: audio processing for music genre prediction and hyperspectral satellite imagery for Earth and climate monitoring.
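
    To make the family of methods concrete, here is a minimal sketch using scikit-learn's implementations of PCA, PLS, CCA, and kernel PCA on synthetic data (OPLS has no scikit-learn counterpart; the data shapes and hyperparameters are placeholders, not drawn from the tutorial's benchmark):

```python
# Hedged sketch of the linear and kernel MVA methods the tutorial
# surveys, via their scikit-learn equivalents on synthetic data.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.cross_decomposition import CCA, PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # input features (e.g., spectral bands)
Y = rng.normal(size=(200, 3))    # targets or a second data view

pca = PCA(n_components=10).fit(X)               # unsupervised: max variance
pls = PLSRegression(n_components=10).fit(X, Y)  # supervised: max covariance
cca = CCA(n_components=3).fit(X, Y)             # supervised: max correlation

# Non-linear extension through a reproducing kernel Hilbert space.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.1).fit(X)
Z = kpca.transform(X)  # kernel features for a downstream model
```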

    Kernel discriminant analysis and clustering with parsimonious Gaussian process models

    This work presents a family of parsimonious Gaussian process models that allow one to build, from a finite sample, a model-based classifier in an infinite-dimensional space. The proposed parsimonious models are obtained by constraining the eigen-decomposition of the Gaussian processes modeling each class. In particular, this permits the use of non-linear mapping functions that project the observations into infinite-dimensional spaces. It is also demonstrated that the classifier can be built directly from the observation space through a kernel function. The proposed classification method is thus able to classify data of various types, such as categorical data, functional data, or networks. Furthermore, it is possible to classify mixed data by combining different kernels, as sketched below. The methodology is also extended to the unsupervised classification case. Experimental results on various data sets demonstrate the effectiveness of the proposed method.
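
    The combine-kernels idea can be sketched independently of the proposed Gaussian process classifier. Below, a generic kernel classifier (an SVM standing in for the authors' method) is trained on a convex combination of an RBF kernel over numerical features and an overlap kernel over categorical ones; all data shapes, weights, and parameters are illustrative assumptions:

```python
# Hedged sketch of classifying mixed data by combining kernels.
# A convex combination of positive semi-definite kernels is itself
# positive semi-definite, which is what makes this composition valid.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_num = rng.normal(size=(120, 8))          # numerical block
X_cat = rng.integers(0, 3, size=(120, 4))  # categorical block
y = rng.integers(0, 2, size=120)           # binary labels

K_num = rbf_kernel(X_num, gamma=0.5)       # kernel on numerical data
K_cat = (X_cat[:, None, :] == X_cat[None, :, :]).mean(axis=-1)  # overlap kernel
K = 0.5 * K_num + 0.5 * K_cat              # combined kernel

clf = SVC(kernel="precomputed").fit(K, y)  # stand-in for the GP classifier
print(clf.score(K, y))                     # training accuracy
```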