
    A new kernel method for hyperspectral image feature extraction

    Hyperspectral images provide abundant spectral information for remote discrimination of subtle differences in ground cover. However, the increasing spectral dimensionality, together with information redundancy, makes the analysis and interpretation of hyperspectral images a challenge. Feature extraction is therefore a key step in hyperspectral image processing: feature extraction methods aim to reduce the dimensionality of the data while preserving as much information as possible. In particular, nonlinear feature extraction methods (e.g. the kernel minimum noise fraction (KMNF) transformation) have been reported to benefit many applications of hyperspectral remote sensing, owing to their good preservation of the high-order structure of the original data. However, conventional KMNF and its extensions have limitations in noise fraction estimation during feature extraction, which leads to poor performance in subsequent applications. This paper proposes a novel nonlinear feature extraction method for hyperspectral images. Instead of estimating the noise fraction from nearest-neighborhood information (within a sliding window), the proposed method explores the use of image segmentation. The approach benefits both noise fraction estimation and information preservation, and enables a significant improvement in classification. Experimental results on two real hyperspectral images demonstrate the efficiency of the proposed method: compared to conventional KMNF, it improves classification accuracy on the two images by 8% and 11%, respectively. The nonlinear feature extraction method can also be applied to other disciplines where high-dimensional data analysis is required.
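    The core idea, replacing sliding-window neighbourhood differences with residuals computed inside segmentation regions, can be sketched in a few lines of NumPy. The segmentation labels and the variable names are assumptions; this is an illustration of that noise-estimation step, not the paper's exact estimator.

import numpy as np

def noise_by_segmentation(cube, labels):
    """Estimate per-pixel noise as the deviation from the mean spectrum of
    the segment each pixel belongs to (a sketch of replacing sliding-window
    neighbours with segmentation regions).

    cube   : (rows, cols, bands) hyperspectral image
    labels : (rows, cols) integer segment labels, e.g. from a watershed or
             SLIC segmentation (the segmentation itself is assumed here)
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    lab = labels.reshape(-1)
    noise = np.zeros_like(X)
    for seg in np.unique(lab):
        idx = lab == seg
        noise[idx] = X[idx] - X[idx].mean(axis=0)   # residual w.r.t. segment mean
    return noise.reshape(rows, cols, bands)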

    Rotor fault classification technique and precision analysis with kernel principal component analysis and multi-support vector machines

    To address the problem of diagnosing aero-engine vibration faults that exceed the standard during testing, a fault classification approach based on kernel principal component analysis (KPCA) feature extraction and multi-class support vector machines (SVM) is proposed, which extracts features from standard fault samples of the test cell by exploiting the nonlinear feature extraction capability of KPCA. By computing inner-product kernel functions in the original feature space, the rotor vibration signal is mapped from the low-dimensional input space to a high-dimensional feature space through this nonlinear map. The nonlinear principal components of the original low-dimensional space are then obtained by performing PCA in the high-dimensional feature space. During multi-SVM training, the nonlinear principal components serve as feature vectors and are separated into a training set and a test set, and the penalty parameter and kernel function parameter are optimized with a genetic optimization algorithm. High classification accuracy is maintained on both the training and test sets, and over-fitting and under-fitting are avoided. Experimental results indicate that this method performs well in distinguishing different aero-engine fault modes and is suitable for fault recognition of a high-speed rotor.
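    As a rough sketch of this kind of pipeline, the following Python code chains scikit-learn's KernelPCA with a multi-class SVC and tunes the penalty and kernel parameters. A grid search stands in for the genetic optimization described in the abstract, and the data shapes and parameter grids are placeholders rather than the paper's actual configuration.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# X: vibration features per sample, y: fault-mode labels (placeholders)
X, y = np.random.randn(200, 32), np.random.randint(0, 4, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("kpca", KernelPCA(n_components=8, kernel="rbf")),          # nonlinear principal components
    ("svm", SVC(kernel="rbf", decision_function_shape="ovo")),  # multi-class SVM
])
# Grid search stands in for the genetic optimization of C and gamma used in the paper.
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": ["scale", 0.01, 0.1]}, cv=5)
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))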

    Artificial immune recognition system with nonlinear resource allocation method and application to traditional Malay music genre classification

    The Artificial Immune Recognition System (AIRS) has shown effective performance on several machine learning problems. In this study, the resource allocation method of AIRS was replaced with a nonlinear method. The new algorithm, AIRS with a nonlinear resource allocation method, was used as a classifier for Traditional Malay Music (TMM) genre classification. Music genre classification plays an important role in today's music information retrieval systems. The proposed system consists of three stages: feature extraction, feature selection, and finally classification with the proposed algorithm. In the conducted experiments, the proposed system achieved a classification accuracy of 88.6% using 10-fold cross-validation for TMM genre classification. The results also show that AIRS with the nonlinear allocation method obtains the maximum classification accuracy for TMM genre classification.
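    A minimal illustration of the difference between linear and nonlinear resource allocation is given below. The quadratic rule is only one plausible nonlinear choice, used here for illustration, and is not claimed to be the paper's exact formula.

import numpy as np

def linear_resources(stimulation, clonal_rate=10.0):
    """Standard AIRS-style resource allocation: proportional to stimulation."""
    return stimulation * clonal_rate

def nonlinear_resources(stimulation, clonal_rate=10.0, power=2.0):
    """A nonlinear variant: resources grow faster for highly stimulated cells.
    A quadratic rule is assumed purely for illustration."""
    return (stimulation ** power) * clonal_rate

stim = np.linspace(0.0, 1.0, 5)
print(linear_resources(stim))
print(nonlinear_resources(stim))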

    Optimized kernel minimum noise fraction transformation for hyperspectral image classification

    This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction from hyperspectral imagery. The proposed approach is based on the kernel minimum noise fraction (KMNF) transformation, a nonlinear dimensionality reduction method. KMNF maps the original data into a higher-dimensional feature space and provides a small number of high-quality features for classification and other post-processing. Noise estimation is an important component of KMNF. It is often based on a strong relationship between adjacent pixels; however, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes the spatial information less reliable for noise estimation. This is the main reason that KMNF generally shows unstable performance in feature extraction for classification. To overcome this problem, this paper exploits more accurate noise estimation to improve KMNF. We propose two new methods to estimate noise more accurately, together with a framework for improving noise estimation in which both spectral and spatial de-correlation are exploited. Experimental results, obtained on a variety of hyperspectral images, indicate that the proposed OKMNF is superior to related dimensionality reduction methods in most cases. Compared to conventional KMNF, the proposed OKMNF yields significant improvements in overall classification accuracy.
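    One common way to realise spectral de-correlation for noise estimation is to take, for each band, the residual of a least-squares regression on its neighbouring bands. The sketch below implements that generic idea in NumPy; it is not necessarily the exact estimator adopted in OKMNF.

import numpy as np

def spectral_decorrelation_noise(X):
    """Estimate the noise in each band as the residual of a least-squares
    regression of that band on its two neighbouring bands.

    X : (n_pixels, n_bands) matrix of spectra.
    """
    n, b = X.shape
    noise = np.zeros_like(X, dtype=float)
    for j in range(b):
        nb = [k for k in (j - 1, j + 1) if 0 <= k < b]
        A = np.column_stack([X[:, nb], np.ones(n)])        # neighbouring bands + intercept
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        noise[:, j] = X[:, j] - A @ coef                   # regression residual
    return noise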

    A Subspace Projection Methodology for Nonlinear Manifold Based Face Recognition

    A novel feature extraction method that utilizes nonlinear mapping from the original data space to the feature space is presented in this dissertation. Feature extraction methods aim to find compact representations of data that are easy to classify. Measurements with similar values are grouped into the same category, while those with differing values are deemed to belong to separate categories. For most practical systems, the meaningful features of a pattern class lie in a low-dimensional nonlinear constraint region (manifold) within the high-dimensional data space. A learning algorithm to model this nonlinear region and to project patterns onto this feature space is developed. A least squares estimation approach that utilizes the interdependency between points in the training patterns is used to form the nonlinear region. The proposed feature extraction strategy is employed to improve face recognition accuracy under varying illumination conditions and facial expressions. Though the face features show variations under these conditions, the features of one individual tend to cluster together and can be considered as a neighborhood. Low-dimensional representations of face patterns in the feature space may lie in a nonlinear constraint region, which, when modeled, leads to efficient pattern classification. A feature space encompassing multiple pattern classes can be trained by modeling a separate constraint region for each pattern class and obtaining a mean constraint region by averaging all the individual regions. Unlike most other nonlinear techniques, the proposed method provides an easy, intuitive way to place new points onto a nonlinear region in the feature space. The proposed feature extraction and classification method results in improved accuracy compared to classical linear representations. Face recognition accuracy is further improved by introducing the concepts of modularity, discriminant analysis, and phase congruency into the proposed method. In the modular approach, feature components are extracted from different sub-modules of the images and concatenated into a single vector that represents a face region. By doing this, we are able to extract features that are more representative of the local features of the face. When projected onto an arbitrary line, samples from well-formed clusters could produce a confused mixture of samples from all the classes, leading to poor recognition. Discriminant analysis aims to find an optimal line orientation for which the data classes are well separated. Experiments performed on various databases to evaluate the performance of the proposed face recognition technique have shown improvement in recognition accuracy, especially under varying illumination conditions and facial expressions. This shows that the integration of multiple subspaces, each representing a part of a higher-order nonlinear function, can represent a pattern with variability. Research is ongoing to investigate the effectiveness of the subspace projection methodology for building manifolds with other nonlinear functions and to identify the optimal nonlinear function from an object classification perspective.
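    The modular step described above (extracting features per sub-block, concatenating them, and following with a discriminant projection) can be illustrated with a short scikit-learn sketch. The per-block descriptor and the toy data are placeholders and do not reproduce the dissertation's actual features or subspace projection.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def modular_features(img, blocks=4):
    """Split a face image into blocks x blocks sub-modules and concatenate a
    simple per-block descriptor (mean and std here, purely illustrative)."""
    h, w = img.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            sub = img[i*h//blocks:(i+1)*h//blocks, j*w//blocks:(j+1)*w//blocks]
            feats.extend([sub.mean(), sub.std()])
    return np.array(feats)

# toy data: 40 random "face" images from 4 classes (placeholders)
rng = np.random.default_rng(0)
imgs = rng.random((40, 64, 64))
y = np.repeat(np.arange(4), 10)
X = np.array([modular_features(im) for im in imgs])

lda = LinearDiscriminantAnalysis(n_components=3)   # discriminant projection step
Z = lda.fit_transform(X, y)
print(Z.shape)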

    Evaluation of time-domain features for motor imagery movements using FCM and SVM

    A Brain–Machine Interface (BMI) is a direct communication pathway between the brain and an external electronic device. BMIs aim to translate brain activity into control commands. In designing a system that translates brain waves into desired commands, the classification of motor imagery tasks is the core part. Classification accuracy depends not only on the capability of the classifier but also on the input data. Feature extraction highlights the properties of a signal that distinguish it from the signals of other mental tasks. The performance of a BMI therefore depends directly on the effectiveness of its feature extraction and classification algorithms. If a feature provides a large inter-class difference between classes, the applied classifier exhibits better performance. In order to keep computational complexity low, five time-domain features, namely Mean Absolute Value, Maximum peak value, Simple Square Integral, Willison Amplitude, and Waveform Length, are used for feature extraction from EEG signals. Two classifiers are applied to assess the performance of each feature and subject: an SVM with a polynomial kernel as the nonlinear classifier, and supervised FCM as the other. The performance of each feature on the input data is evaluated with both classifiers, with classification accuracy as the common comparison metric.
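    The five features have standard time-domain definitions, which the short NumPy sketch below implements for a single EEG epoch; the Willison Amplitude threshold is an assumed value, not one taken from the paper.

import numpy as np

def time_domain_features(x, wamp_threshold=0.01):
    """Compute the five time-domain features named in the abstract, using
    their usual EMG/EEG definitions (the WAMP threshold is an assumption)."""
    dx = np.diff(x)
    return {
        "MAV":  np.mean(np.abs(x)),                   # Mean Absolute Value
        "PKV":  np.max(np.abs(x)),                    # Maximum peak value
        "SSI":  np.sum(x ** 2),                       # Simple Square Integral
        "WAMP": np.sum(np.abs(dx) > wamp_threshold),  # Willison Amplitude
        "WL":   np.sum(np.abs(dx)),                   # Waveform Length
    }

# usage on one EEG epoch (placeholder signal)
epoch = np.random.randn(512)
print(time_domain_features(epoch))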

    Nonlinear feature extraction through manifold learning in an electronic tongue classification task

    A nonlinear feature extraction approach based on manifold learning algorithms is developed to improve the classification accuracy of an electronic tongue sensor array. The developed signal processing methodology is composed of four stages: data unfolding, scaling, feature extraction, and classification. This study compares seven manifold learning algorithms, namely Isomap, Laplacian Eigenmaps, Locally Linear Embedding (LLE), modified LLE, Hessian LLE, Local Tangent Space Alignment (LTSA), and t-Distributed Stochastic Neighbor Embedding (t-SNE), to find the best classification accuracy in a multifrequency large-amplitude pulse voltammetry electronic tongue. A sensitivity study of the parameters of each manifold learning algorithm is also included. A data set of seven different aqueous matrices is used to validate the proposed data processing methodology, with leave-one-out cross-validation on 63 samples. The best accuracy (96.83%) was obtained when the methodology used Mean-Centered Group Scaling (MCGS) for data normalization, the t-SNE algorithm for feature extraction, and k-nearest neighbors (kNN) as the classifier.
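    A compact approximation of this pipeline with scikit-learn is sketched below. Since t-SNE provides no out-of-sample transform, the whole set is embedded once and leave-one-out cross-validation is then run with kNN on the embedding; standard scaling stands in for MCGS, and the data and parameters are placeholders rather than the published configuration.

import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.preprocessing import StandardScaler

# X: unfolded voltammetric responses, y: aqueous-matrix labels (placeholders)
X, y = np.random.randn(63, 500), np.random.randint(0, 7, 63)
Xs = StandardScaler().fit_transform(X)   # stand-in for Mean-Centered Group Scaling

# embed all samples once, then evaluate kNN on the embedding with LOO-CV
Z = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(Xs)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), Z, y, cv=LeaveOneOut())
print("LOO accuracy:", scores.mean())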

    Extended pipeline for content-based feature engineering in music genre recognition

    We present a feature engineering pipeline for constructing musical signal characteristics to be used in the design of a supervised model for musical genre identification. The key idea is to extend the traditional two-step process of extraction and classification with additional stand-alone phases that are no longer organized in a waterfall scheme; the whole system is realized by traversing backtrack arrows and cycles between the various stages. In order to give a compact and effective representation of the features, standard early temporal integration is combined with further selection and extraction phases: on the one hand, the selection of the most meaningful characteristics based on information gain, and on the other hand, the inclusion of the nonlinear correlations among this subset of features, determined by an autoencoder. The results of experiments conducted on the GTZAN dataset reveal a noticeable contribution of this methodology to the model's performance on the classification task.
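    The selection-plus-autoencoder idea can be sketched with scikit-learn as follows. Mutual information stands in for the information-gain criterion, a small MLP trained to reconstruct its input approximates the autoencoder, and all shapes are placeholders rather than the pipeline's actual configuration.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPRegressor

# X: temporally integrated audio features, y: genre labels (placeholders)
X, y = np.random.randn(300, 120), np.random.randint(0, 10, 300)

# information-gain style selection (mutual information as a stand-in)
selector = SelectKBest(mutual_info_classif, k=40)
X_sel = selector.fit_transform(X, y)

# a small autoencoder approximated by an MLP trained to reconstruct its input;
# the hidden activations provide the nonlinear-correlation features
ae = MLPRegressor(hidden_layer_sizes=(16,), activation="relu", max_iter=2000, random_state=0)
ae.fit(X_sel, X_sel)
hidden = np.maximum(0.0, X_sel @ ae.coefs_[0] + ae.intercepts_[0])  # ReLU encoder output

X_final = np.hstack([X_sel, hidden])   # selected features + nonlinear codes
print(X_final.shape)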