
    Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis

    Fisher's linear discriminant analysis (FLDA) is an important dimension reduction method in statistical pattern recognition. It has been shown that FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian assumption. However, this classical result has two major limitations: 1) it holds only for a fixed dimensionality $D$, and thus does not apply when $D$ and the training sample size $N$ are proportionally large; 2) it does not provide a quantitative description of how the generalization ability of FLDA is affected by $D$ and $N$. In this paper, we present an asymptotic generalization analysis of FLDA based on random matrix theory, in a setting where both $D$ and $N$ increase and $D/N \longrightarrow \gamma \in [0,1)$. The obtained lower bound on the generalization discrimination power overcomes both limitations of the classical result, i.e., it is applicable when $D$ and $N$ are proportionally large and it provides a quantitative description of the generalization ability of FLDA in terms of the ratio $\gamma = D/N$ and the population discrimination power. Besides, the discrimination power bound also leads to an upper bound on the generalization error of binary classification with FLDA.
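
    As an illustration of the classifier being analyzed, the sketch below (not taken from the paper) fits the standard FLDA rule, i.e., a pooled within-class covariance estimate $S$ and the direction $w = S^{-1}(\mu_1 - \mu_0)$, on synthetic homoscedastic Gaussian data in a regime where $D$ and $N$ grow proportionally. The dimensions, class means, and equal-prior midpoint threshold are assumptions chosen purely for the example.

    ```python
    import numpy as np

    def fit_flda(X0, X1):
        """Fit Fisher's linear discriminant for two classes.

        X0, X1: (n0, D) and (n1, D) arrays of training samples.
        Returns a direction w and threshold b; a query x is assigned
        to class 1 when w @ x > b (equal priors assumed).
        """
        mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
        n0, n1 = len(X0), len(X1)
        # Pooled within-class covariance; the homoscedastic Gaussian
        # assumption says both classes share this covariance matrix.
        S = ((X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)) / (n0 + n1 - 2)
        w = np.linalg.solve(S, mu1 - mu0)   # discriminant direction S^{-1}(mu1 - mu0)
        b = w @ (mu0 + mu1) / 2.0           # midpoint threshold
        return w, b

    # Toy data in the proportional regime discussed above (gamma = D/N = 0.25).
    rng = np.random.default_rng(0)
    D, N = 50, 200
    X0 = rng.normal(0.0, 1.0, size=(N // 2, D))
    X1 = rng.normal(0.5, 1.0, size=(N // 2, D))
    w, b = fit_flda(X0, X1)
    test = rng.normal(0.5, 1.0, size=(1000, D))   # fresh class-1 samples
    print("class-1 accuracy:", np.mean(test @ w > b))
    ```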

    Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

    Face recognition (FR) with a single training sample per person (STSPP) is a very challenging problem due to the lack of information to predict the variations in the query sample. Sparse representation based classification has shown interesting results in robust FR; however, its performance deteriorates significantly for FR with STSPP. To address this issue, in this paper we learn a sparse variation dictionary from a generic training set to improve the query sample representation under STSPP. Instead of learning from the generic training set independently of the gallery set, the proposed sparse variation dictionary learning (SVDL) method is adaptive to the gallery set by jointly learning a projection that connects the generic training set with the gallery set. The learnt sparse variation dictionary can be easily integrated into the framework of sparse representation based classification, so that various variations in face images, including illumination, expression, occlusion, pose, etc., can be better handled. Experiments on the large-scale CMU Multi-PIE, FRGC and LFW databases demonstrate the promising performance of SVDL on FR with STSPP.
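
    Since SVDL is designed to plug into sparse representation based classification, the sketch below shows a generic SRC-style classifier in which a variation dictionary is appended to the gallery dictionary and the shared variation component is kept in every class residual. This is only an assumed illustration: the joint projection learning that actually produces SVDL's dictionary is not reproduced, and the l1 solver, function names, and toy data are all choices made for the example.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(query, gallery, labels, variation_dict=None, alpha=0.01):
        """Sparse-representation-based classification with an optional
        variation dictionary appended to the gallery atoms.

        query:          (d,) query image vector.
        gallery:        (d, n) matrix, one column per gallery face.
        labels:         length-n array of subject ids for the gallery columns.
        variation_dict: (d, k) matrix of variation atoms, or None.
        """
        D = gallery if variation_dict is None else np.hstack([gallery, variation_dict])
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(D, query)                      # l1-regularized sparse coding
        x = coder.coef_
        n = gallery.shape[1]
        x_gallery, x_var = x[:n], x[n:]
        shared = variation_dict @ x_var if variation_dict is not None else 0.0
        best, best_res = None, np.inf
        for c in np.unique(labels):
            mask = labels == c
            # Residual using only class-c gallery atoms plus the shared variation part.
            recon = gallery[:, mask] @ x_gallery[mask] + shared
            res = np.linalg.norm(query - recon)
            if res < best_res:
                best, best_res = c, res
        return best

    # Tiny synthetic example: 3 gallery subjects, one image each, random variation atoms.
    rng = np.random.default_rng(0)
    d = 64
    gallery = rng.normal(size=(d, 3))
    labels = np.array([0, 1, 2])
    variation = rng.normal(size=(d, 5))
    query = gallery[:, 1] + 0.1 * variation @ rng.normal(size=5)
    print(src_classify(query, gallery, labels, variation))   # should print 1
    ```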

    An Empirical Study for PCA- and LDA-Based Feature Reduction for Gas Identification

    Increasing the number of sensors in a gas identification system generally improves its performance, as this adds extra features for analysis. However, it also increases the computational complexity, especially if the identification algorithm is to be implemented on a hardware platform. Therefore, feature reduction is required to extract the most important information from the sensors for processing. In this paper, linear discriminant analysis (LDA)- and principal component analysis (PCA)-based feature reduction algorithms have been analyzed using data obtained from two different types of gas sensors, i.e., seven commercial Figaro sensors and an in-house fabricated 4×4 tin-oxide gas sensor array. A decision tree-based classifier is used to examine the performance of both the PCA and LDA approaches. The software implementation is carried out in MATLAB and the hardware implementation is performed on the Zynq system-on-chip (SoC) platform. It has been found that, with the 4×4 array sensor, two discriminant functions (DFs) of LDA provide 3.3% better classification than five PCA components, while for the seven Figaro sensors, two principal components and one DF show the same performance. The hardware implementation results on the programmable logic of the Zynq SoC show that LDA outperforms PCA, using 50% fewer resources and running 11% faster, with a maximum operating frequency of 122 MHz.
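
    A minimal software-level sketch of the comparison is given below, written in Python with scikit-learn rather than the paper's MATLAB/Zynq implementation: reduce the raw features with five PCA components or two LDA discriminant functions and feed them to a decision-tree classifier. The synthetic 16-feature data standing in for the 4×4 sensor-array responses is an assumption made only so the example runs end to end.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for responses of a 16-element (4x4) sensor array to four gases.
    X, y = make_classification(n_samples=600, n_features=16, n_informative=8,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # PCA keeps 5 components; LDA keeps 2 discriminant functions (at most n_classes - 1).
    pca_pipe = make_pipeline(PCA(n_components=5), DecisionTreeClassifier(random_state=0))
    lda_pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                             DecisionTreeClassifier(random_state=0))

    for name, pipe in [("5 PCA components", pca_pipe),
                       ("2 LDA discriminant functions", lda_pipe)]:
        pipe.fit(X_tr, y_tr)
        print(name, "accuracy:", pipe.score(X_te, y_te))
    ```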

    An associate-predict model for face recognition
