2 research outputs found

    Face Recognition Using Gabor-based Improved Supervised Locality Preserving Projections

    A novel Gabor-based Improved Supervised Locality Preserving Projections algorithm for face recognition is presented in this paper. The new algorithm combines a Gabor wavelet representation of face images with Improved Supervised Locality Preserving Projections, and it is robust to changes in illumination, facial expression, and pose. A Gabor filter bank is first designed to extract features from the whole face image; a supervised locality preserving projection, improved by two-directional 2DPCA to eliminate redundancy among the Gabor features, is then applied to the feature vectors derived from the Gabor wavelet representation. The new algorithm benefits from two aspects. First, Gabor wavelets are well suited to feature extraction because of their useful invariance properties with respect to illumination, rotation, scale, and translation. Second, the Improved Supervised Locality Preserving Projections not only provides a category label for each class in the training set, but also discards more coefficients of the image representation from two directions, boosting recognition speed. Experiments on the ORL face database demonstrate the effectiveness and efficiency of the new method: the algorithm outperforms other popular approaches reported in the literature and achieves a markedly higher recognition rate.
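    The Gabor feature-extraction step described above can be sketched as a small filter bank applied over several orientations. This is a minimal illustration, not the paper's implementation: the kernel size, wavelength, and orientation set are hypothetical defaults, and responses are simply stacked into one feature vector (the paper then reduces this vector with two-directional 2DPCA and the improved supervised LPP).

    ```python
    import numpy as np

    def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
        """Real part of a 2D Gabor wavelet (illustrative parameter defaults)."""
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * xr / lam)        # sinusoidal carrier
        return envelope * carrier

    def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Filter the whole image at several orientations and stack the
        magnitude responses into one long feature vector."""
        feats = []
        for theta in thetas:
            k = gabor_kernel(theta=theta)
            # circular 2D convolution via FFT, zero-padded to image size
            resp = np.fft.irfft2(np.fft.rfft2(img) * np.fft.rfft2(k, img.shape),
                                 img.shape)
            feats.append(np.abs(resp).ravel())
        return np.concatenate(feats)
    ```

    Stacking four orientation responses multiplies the dimensionality by four, which is exactly why the paper follows this stage with a redundancy-eliminating projection.
    
    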

    Parallel Image Matrix Compression for Face Recognition

    The canonical face recognition algorithms Eigenface and Fisherface are both based on a one-dimensional vector representation. However, with high feature dimensionality and limited training data, face recognition often suffers from the curse of dimensionality and the small-sample-size problem. Recent research [4] shows that face recognition based on a direct 2D matrix representation, i.e. 2DPCA, obtains better performance than that based on the traditional vector representation. However, three questions are left unresolved by the 2DPCA algorithm: 1) what is the meaning of the eigenvalues and eigenvectors of the covariance matrix in 2DPCA; 2) why can 2DPCA outperform Eigenface; and 3) how can dimensionality be reduced further after 2DPCA. In this paper, we analyze 2DPCA from a different viewpoint and prove that 2DPCA is in fact a “localized” PCA with each row vector of an image treated as an object. With this explanation, we find that the intrinsic reason 2DPCA can outperform Eigenface is that it uses fewer feature dimensions and more samples than Eigenface. To further reduce dimensionality after 2DPCA, a two-stage strategy, parallel image matrix compression (PIMC), is proposed to compress the redundancy of the image matrix that exists among both row vectors and column vectors. Exhaustive experimental results demonstrate that PIMC is superior to 2DPCA and Eigenface, and that PIMC+LDA outperforms 2DPCA+LDA and Fisherface.
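    The two ideas in this abstract can be sketched together: 2DPCA as PCA over row vectors (the “localized” PCA view), and a PIMC-style second stage that also compresses among column vectors by projecting from the left. This is a hedged sketch of the general two-directional technique, not the paper's exact method; the function name and dimension choices are hypothetical.

    ```python
    import numpy as np

    def pimc_compress(images, p, q):
        """Two-stage bidirectional compression in the spirit of PIMC.

        images: array of shape (n, h, w). Stage 1 (2DPCA) builds the image
        covariance among row vectors and projects from the right; stage 2
        does the same among column vectors and projects from the left,
        yielding compact (p, q) matrices per image. Illustrative only."""
        X = np.asarray(images, dtype=float)
        C = X - X.mean(axis=0)                             # center the images
        # covariance among row vectors (w x w) and column vectors (h x h)
        G_row = np.einsum('nhw,nhv->wv', C, C) / len(X)
        G_col = np.einsum('nhw,ngw->hg', C, C) / len(X)
        # top-q / top-p eigenvectors (eigh returns ascending eigenvalues)
        W = np.linalg.eigh(G_row)[1][:, ::-1][:, :q]       # right projection
        Z = np.linalg.eigh(G_col)[1][:, ::-1][:, :p]       # left projection
        return np.einsum('hp,nhw,wq->npq', Z, X, W)        # Z^T A W per image
    ```

    Projecting only from the right (the `W` stage alone) recovers plain 2DPCA; the left projection is what removes the remaining redundancy among column vectors before a classifier such as LDA is applied.
    
    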