    Two-Dimensional Heteroscedastic Feature Extraction Technique for Face Recognition

    One limitation of vector-based LDA and its matrix-based extensions is that they cannot deal with heteroscedastic data. In this paper, we present a novel two-dimensional feature extraction technique for face recognition that is capable of handling heteroscedastic data. The technique is a general form of two-dimensional linear discriminant analysis: it generalizes the interclass scatter matrix of two-dimensional LDA by applying the Chernoff distance as a measure of separation between every pair of clusters with the same index in different classes. By employing this distance, our method captures the discriminatory information present in the differences between the covariance matrices of different clusters while preserving the computational simplicity of eigenvalue-based techniques. This makes our approach well suited to high-dimensional applications such as face recognition. Experimental results on the CMU-PIE, AR and AT&T face databases demonstrate the effectiveness of our method in terms of classification accuracy.
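
    For reference, the pairwise separation measure mentioned above can be written, for two Gaussian clusters with means \mu_i, \mu_j and covariances \Sigma_i, \Sigma_j, as the Chernoff distance below. The mixing parameter \alpha \in (0, 1) and this particular form are the standard Gaussian case; how the paper folds it into the two-dimensional scatter matrices is not reproduced here.

        \[
        C_\alpha(i, j) = \frac{\alpha(1-\alpha)}{2}\,
          (\mu_i-\mu_j)^{\top}\big[\alpha\Sigma_i+(1-\alpha)\Sigma_j\big]^{-1}(\mu_i-\mu_j)
          + \frac{1}{2}\ln\frac{\big|\alpha\Sigma_i+(1-\alpha)\Sigma_j\big|}{|\Sigma_i|^{\alpha}\,|\Sigma_j|^{1-\alpha}}
        \]

    The first term is a Mahalanobis-like distance between the cluster means, while the second term is non-zero exactly when the two covariance matrices differ, which is the heteroscedastic information that the plain Fisher criterion discards.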

    Dimension Reduction by Mutual Information Discriminant Analysis

    In the past few decades, researchers have proposed many discriminant analysis (DA) algorithms for the study of high-dimensional data in a variety of problems. Most DA algorithms for feature extraction are based on transformations that simultaneously maximize the between-class scatter and minimize the within-class scatter matrices. This paper presents a novel DA algorithm for feature extraction using mutual information (MI). However, it is not always easy to obtain an accurate estimate of high-dimensional MI. We therefore propose an efficient method for feature extraction that is based on one-dimensional MI estimates. We refer to this algorithm as mutual information discriminant analysis (MIDA). The performance of the proposed method was evaluated on UCI databases. The results indicate that MIDA provides robust performance over data sets with different characteristics and that it always performs better than, or at least comparably to, the best performing algorithms.
    Comment: 13 pages, 3 tables, International Journal of Artificial Intelligence & Applications
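
    The sketch below only illustrates the building block the abstract relies on: estimating one-dimensional MI between a projected feature and the class labels, here with scikit-learn's mutual_info_classif and a naive random search over directions. It is an assumption-laden stand-in, not the MIDA procedure itself.

        # Hedged sketch: score candidate 1-D projections by the mutual information
        # between the projected data and the class labels.  The random search over
        # directions is an illustrative stand-in for the paper's optimisation.
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.feature_selection import mutual_info_classif

        def mi_score(w, X, y):
            """One-dimensional MI estimate between the projection Xw and labels y."""
            z = (X @ w).reshape(-1, 1)      # project every sample onto direction w
            return mutual_info_classif(z, y, random_state=0)[0]

        X, y = load_iris(return_X_y=True)
        rng = np.random.default_rng(0)

        best_w, best_mi = None, -np.inf
        for _ in range(200):                # keep the most informative unit direction
            w = rng.normal(size=X.shape[1])
            w /= np.linalg.norm(w)
            mi = mi_score(w, X, y)
            if mi > best_mi:
                best_w, best_mi = w, mi
        print(best_mi, best_w)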

    Linear classifier design under heteroscedasticity in Linear Discriminant Analysis

    Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. We also propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier when the data are known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas, including handwriting recognition, medical diagnosis and remote sensing, and compare our algorithms against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires up to 60 times less training time on some datasets. Our comparison with the support vector machine (SVM) also shows that the GLD, together with the LNS, requires up to 150 times less training time to achieve equivalent classification accuracy on some of the datasets. Our algorithms can therefore provide a cheap and reliable option for classification in many expert systems.
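
    The sketch below numerically minimises the objective the GLD targets: the two-class Bayes error of a linear rule sign(w.x - c) under heteroscedastic Gaussian class models. The toy means, covariances and priors are assumptions, and the generic optimiser is not the paper's derivation or algorithm.

        # Hedged sketch: minimise the two-class Bayes error of a linear decision rule
        # under heteroscedastic Gaussians (illustration only, not the GLD itself).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Assumed toy class models (means, covariances, priors).
        mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
        S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
        S2 = np.array([[2.0, -0.4], [-0.4, 0.5]])
        p1, p2 = 0.5, 0.5

        def bayes_error(params):
            w, c = params[:-1], params[-1]
            s1 = np.sqrt(w @ S1 @ w) + 1e-12          # std of w.x under class 1
            s2 = np.sqrt(w @ S2 @ w) + 1e-12          # std of w.x under class 2
            e1 = 1.0 - norm.cdf((c - w @ mu1) / s1)   # P(predict class 2 | class 1)
            e2 = norm.cdf((c - w @ mu2) / s2)         # P(predict class 1 | class 2)
            return p1 * e1 + p2 * e2

        # Start from the homoscedastic LDA direction as a reasonable initial guess.
        w0 = np.linalg.solve((S1 + S2) / 2, mu2 - mu1)
        c0 = w0 @ (mu1 + mu2) / 2
        res = minimize(bayes_error, np.append(w0, c0), method="Nelder-Mead")
        print("minimised Bayes error:", res.fun)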

    Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction

    It is difficult to find the optimal sparse solution of a manifold learning based dimensionality reduction algorithm. The lasso or elastic net penalized manifold learning based dimensionality reduction is not directly a lasso penalized least squares problem, so least angle regression (LARS) (Efron et al.), one of the most popular algorithms in sparse learning, cannot be applied. Therefore, most current approaches take indirect routes or impose strict settings, which can be inconvenient for applications. In this paper, we propose the manifold elastic net, or MEN for short. MEN incorporates the merits of both manifold learning based and sparse learning based dimensionality reduction. Using a series of equivalent transformations, we show that MEN is equivalent to a lasso penalized least squares problem, so LARS can be adopted to obtain its optimal sparse solution. In particular, MEN has the following advantages for subsequent classification: 1) the local geometry of samples is well preserved in the low-dimensional data representation, 2) both margin maximization and classification error minimization are considered in computing the sparse projection, 3) the projection matrix of MEN improves computational parsimony, 4) the elastic net penalty reduces over-fitting, and 5) the projection matrix of MEN can be interpreted psychologically and physiologically. Experimental evidence on face recognition over various popular datasets suggests that MEN is superior to top-level dimensionality reduction algorithms.
    Comment: 33 pages, 12 figures
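
    Once a problem has been cast as lasso penalized least squares, LARS can trace its full regularisation path. The snippet below shows only that generic solver step on synthetic data; MEN's equivalent transformations that put the manifold objective into this form are not reproduced here.

        # Hedged sketch: LARS applied to a lasso penalized least squares problem.
        import numpy as np
        from sklearn.linear_model import LassoLars, lars_path

        rng = np.random.default_rng(0)
        A = rng.normal(size=(100, 20))            # design matrix after transformation
        x_true = np.zeros(20)
        x_true[[2, 5, 11]] = [1.5, -2.0, 0.7]     # a sparse ground-truth vector
        b = A @ x_true + 0.05 * rng.normal(size=100)

        # Single solution at a fixed penalty ...
        model = LassoLars(alpha=0.01).fit(A, b)
        print("non-zero coefficients:", np.flatnonzero(model.coef_))

        # ... or the whole LARS path, from which a sparsity level can be chosen.
        alphas, _, coefs = lars_path(A, b, method="lasso")
        print("path computed for", len(alphas), "penalty values")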

    Chernoff Dimensionality Reduction: Where Fisher Meets FKT

    The well-known linear discriminant analysis (LDA) based on the Fisher criterion is incapable of dealing with heteroscedasticity in data. However, in many practical applications we often encounter heteroscedastic data, i.e., within-class scatter matrices that cannot be expected to be equal. A technique based on the Chernoff criterion for linear dimensionality reduction has been proposed recently. The technique extends the well-known Fisher's LDA and is capable of exploiting information about heteroscedasticity in the data. While the Chernoff criterion has been shown to outperform Fisher's, a clear understanding of its exact behavior is lacking. In addition, the criterion, as introduced, is rather complex, making it difficult to clearly state its relationship to other linear dimensionality reduction techniques. In this paper, we show precisely what can be expected from the Chernoff criterion and how it relates to the Fisher criterion and the Fukunaga-Koontz transform. Furthermore, we show that a recently proposed decomposition of the data space into four subspaces is incomplete. We provide arguments on how to best enrich the decomposition of the data space in order to account for heteroscedasticity in the data. Finally, we provide experimental results validating our theoretical analysis.
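
    For readers unfamiliar with the Fukunaga-Koontz transform (FKT) referenced above, the sketch below shows its standard two-class form on assumed toy covariances: whiten the summed class covariances, then eigen-decompose one whitened class covariance. In that basis the two classes share eigenvectors with eigenvalues lam and 1 - lam, so directions with lam far from 0.5 are the discriminative ones. The paper's analysis of how this relates to the Chernoff and Fisher criteria is not reproduced here.

        # Hedged sketch of the two-class Fukunaga-Koontz transform (FKT).
        import numpy as np

        S1 = np.array([[3.0, 0.5], [0.5, 0.5]])    # class-1 covariance (assumed)
        S2 = np.array([[0.6, -0.2], [-0.2, 2.5]])  # class-2 covariance (assumed)

        # Whitening transform of the sum S1 + S2.
        d, U = np.linalg.eigh(S1 + S2)
        P = U @ np.diag(d ** -0.5) @ U.T

        # Shared eigenbasis: eigenvalues for class 1 ...
        lam, V = np.linalg.eigh(P @ S1 @ P.T)
        print("class-1 eigenvalues:", lam)
        # ... and, by construction, 1 - lam for class 2 in the same basis.
        print("class-2 eigenvalues:", np.diag(V.T @ P @ S2 @ P.T @ V))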

    Target differentiation with simple infrared sensors using statistical pattern recognition techniques

    This study compares the performance of various statistical pattern recognition techniques for the differentiation of commonly encountered features in indoor environments, possibly with different surface properties, using simple infrared (IR) sensors. The intensity measurements obtained from such sensors are highly dependent on the location, geometry, and surface properties of the reflecting feature in a way that cannot be represented by a simple analytical relationship, which complicates the differentiation process. We construct feature vectors based on the parameters of angular IR intensity scans from different targets to determine their geometry and/or surface type. A mixture-of-normals classifier with three components correctly differentiates three types of geometries with different surface properties, resulting in the best performance (100%) in geometry differentiation. Parametric differentiation correctly identifies six different surface types of the same planar geometry, resulting in the best surface differentiation rate (100%). However, this rate is not maintained when more surfaces are included. The results indicate that the geometrical properties of the targets are more distinctive than their surface properties, and that surface recognition is the limiting factor in differentiation. The results demonstrate that simple IR sensors, when coupled with appropriate processing and recognition techniques, can be used to extract substantially more information than such devices are commonly employed for. (C) 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
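
    A mixture-of-normals classifier of the kind mentioned above can be sketched as follows: fit one Gaussian mixture per class and label a new feature vector by the highest class-conditional likelihood. The synthetic two-dimensional "scan parameter" features and class names below are placeholders, not the paper's IR data.

        # Hedged sketch of a per-class mixture-of-normals classifier.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        classes = {
            "plane":  rng.normal([0, 0], 0.5, size=(60, 2)),
            "corner": rng.normal([3, 1], 0.7, size=(60, 2)),
            "edge":   rng.normal([1, 4], 0.6, size=(60, 2)),
        }

        # One three-component mixture per class, as in the abstract.
        models = {name: GaussianMixture(n_components=3, random_state=0).fit(X)
                  for name, X in classes.items()}

        def classify(x):
            # score_samples returns the log-likelihood of x under each class model.
            scores = {name: m.score_samples(x.reshape(1, -1))[0]
                      for name, m in models.items()}
            return max(scores, key=scores.get)

        print(classify(np.array([2.8, 1.2])))   # likely "corner" (nearest class)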

    Mental state estimation for brain-computer interfaces

    Mental state estimation is potentially useful for the development of asynchronous brain-computer interfaces. In this study, four mental states have been identified and decoded from the electrocorticograms (ECoGs) of six epileptic patients engaged in a memory reach task. A novel signal analysis technique has been applied to high-dimensional, statistically sparse ECoGs recorded by a large number of electrodes. The strength of the proposed technique lies in its ability to jointly extract the spatial and temporal patterns responsible for encoding mental state differences. As such, the technique offers a systematic way of analyzing the spatiotemporal aspects of brain information processing and may be applicable to a wide range of spatiotemporal neurophysiological signals.