
    Multiple Data-Dependent Kernel Fisher Discriminant Analysis for Face Recognition

    Kernel Fisher discriminant analysis (KFDA) has demonstrated its success in extracting facial features for face recognition. Compared to linear techniques, it can better describe the complex and nonlinear variations of face images. However, a single kernel is not always suitable for face recognition applications that involve data from multiple, heterogeneous sources, such as face images under large variations of pose, illumination, and facial expression. To improve the performance of KFDA in face recognition, a novel algorithm named multiple data-dependent kernel Fisher discriminant analysis (MDKFDA) is proposed in this paper. The constructed multiple data-dependent kernel (MDK) is a combination of several base kernels with a data-dependent kernel constraint on their weights. By solving an optimization problem based on the Fisher criterion and the maximum margin criterion, the parameters of the data-dependent kernel and the multiple base kernels are optimized jointly. Experimental results on three face databases validate the effectiveness of the proposed algorithm.
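
    A minimal sketch of the core mechanics, assuming RBF base kernels and fixed convex combination weights (the paper learns these weights through its Fisher and maximum-margin optimization, which is not reproduced here): several base kernels are fused into a single kernel, and a two-class kernel Fisher discriminant is then solved in the dual.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, weights):
    # Several base kernels fused by convex weights; fixed values here are an
    # illustrative assumption (the paper optimizes them).
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

def kfda_dual(X, y, gammas, weights, reg=1e-3):
    # Two-class KFDA in the dual: alpha = (N + reg*I)^{-1} (m1 - m0).
    K = combined_kernel(X, X, gammas, weights)
    n = len(y)
    means, N = [], np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]                       # kernel columns of class c
        nc = Kc.shape[1]
        means.append(Kc.mean(axis=1))
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(n), means[1] - means[0])

# Projection of a sample x is f(x) = sum_i alpha_i * k_combined(x_i, x).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(2, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
alpha = kfda_dual(X, y, gammas=(0.1, 1.0), weights=(0.5, 0.5))
scores = combined_kernel(X, X, (0.1, 1.0), (0.5, 0.5)) @ alpha
```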

    Regularized kernel discriminant analysis with a robust kernel for face recognition and verification

    We propose a robust approach to discriminant kernel-based feature extraction for face recognition and verification. We show, for the first time, how to perform the eigenanalysis of the within-class scatter matrix directly in the feature space. This eigenanalysis provides the eigenspectrum of its range space and the corresponding eigenvectors, as well as the eigenvectors spanning its null space. Based on our analysis, we propose a kernel discriminant analysis (KDA) that combines eigenspectrum regularization with a feature-level scheme (ER-KDA). Finally, we combine the proposed ER-KDA with a nonlinear robust kernel particularly suitable for face recognition/verification applications, which require robustness against outliers caused by occlusions and illumination changes. We applied the proposed framework to several popular databases (Yale, AR, XM2VTS) and achieved state-of-the-art performance in most of our experiments.
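
    A minimal sketch of the within-class eigenanalysis in feature space, assuming an RBF kernel and a simple floor rule standing in for the paper's eigenspectrum regularization model and robust kernel: the within-class scatter is expressed through class-centered kernel columns, its range-space eigenpairs are kept, and the null-space eigenvalues are replaced by a floor value before whitening.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def within_class_kernel_scatter(K, y):
    # Within-class scatter expressed through class-centered kernel columns.
    Kw = np.zeros_like(K)
    for c in np.unique(y):
        Kc = K[:, y == c]
        Kc = Kc - Kc.mean(axis=1, keepdims=True)
        Kw += Kc @ Kc.T / Kc.shape[1]
    return Kw

def er_directions(Kw, rank_tol=1e-10):
    # Eigenanalysis in the dual space: reliable range-space eigenvalues are
    # kept; null-space eigenvalues get a floor (assumed regularization rule).
    vals, vecs = np.linalg.eigh(Kw)
    vals, vecs = vals[::-1], vecs[:, ::-1]          # descending order
    reliable = vals > rank_tol * max(vals[0], 1e-30)
    vals = np.where(reliable, vals, vals[reliable].min())
    return vecs / np.sqrt(vals)                     # whitened dual directions

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))
y = np.repeat([0, 1, 2], 10)
V = er_directions(within_class_kernel_scatter(rbf_kernel(X, X), y))
```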

    Semi-Supervised Discriminant Analysis Using Robust Path-Based Similarity

    Linear Discriminant Analysis (LDA), which works by maximizing the within-class similarity and minimizing the between-class similarity simultaneously, is a popular dimensionality reduction technique in pattern recognition and machine learning. In real-world applications where labeled data are limited, LDA does not work well. Under many situations, however, it is easy to obtain unlabeled data in large quantities. In this paper, we propose a novel dimensionality reduction method, called Semi-Supervised Discriminant Analysis (SSDA), which can utilize both labeled and unlabeled data to perform dimensionality reduction in the semi-supervised setting. Our method uses a robust path-based similarity measure to capture the manifold structure of the data and then uses the obtained similarity to maximize the separability between different classes. A kernel extension of the proposed method for nonlinear dimensionality reduction in the semi-supervised setting is also presented. Experiments on face recognition demonstrate the effectiveness of the proposed method.
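
    A minimal sketch of the path-based similarity idea, under the common minimax-path reading: the dissimilarity between two samples is the smallest achievable longest hop over all connecting paths, so points joined by a dense chain of neighbors become similar even when their direct Euclidean distance is large. SSDA's robust weighting and the discriminant step itself are not reproduced.

```python
import numpy as np

def minimax_path_similarity(X, sigma=1.0):
    # Path-based dissimilarity between samples i and j: minimize, over all
    # paths from i to j, the maximum edge length along the path.
    n = len(X)
    P = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    for k in range(n):  # Floyd-Warshall, relaxing the max edge, not the sum
        P = np.minimum(P, np.maximum(P[:, k][:, None], P[k, :][None, :]))
    return np.exp(-(P ** 2) / (2.0 * sigma ** 2))   # turn into similarity

rng = np.random.default_rng(2)
X = rng.normal(size=(25, 3))
S = minimax_path_similarity(X)
```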

    Kernel-based nonlinear discriminant analysis using minimum squared errors criterion for multiclass and undersampled problems

    It is well known that there exist two fundamental limitations in linear discriminant analysis (LDA). One is that it cannot be applied when the within-class scatter matrix is singular, which is caused by the undersampled problem. The other is that it lacks the capability to capture the nonlinearly clustered structure of the data due to its linear nature. In this paper, a new kernel-based nonlinear discriminant analysis algorithm using the minimum squared errors criterion (KDA-MSE) is proposed to solve these two problems. After mapping the original data into a higher-dimensional feature space using a kernel function, the MSE criterion is used as the discriminant rule and the corresponding dimension-reducing transformation is derived. Since the MSE solution does not require the scatter matrices to be nonsingular, the proposed KDA-MSE algorithm is applicable to the undersampled problem. Moreover, the new KDA-MSE algorithm can be applied to multiclass problems, whereas the existing MSE-based kernel discriminant methods are limited to two-class data only. Extensive experiments, including object recognition and face recognition on three benchmark databases, are performed, and the results demonstrate that our algorithm is competitive with other kernel-based discriminant techniques in terms of recognition accuracy.
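
    A minimal sketch of an MSE-criterion kernel discriminant, assuming kernel ridge regression onto one-hot class indicators as the concrete instantiation (the paper's exact derivation is not reproduced): no scatter matrix has to be inverted, so the approach also applies in the undersampled case, and it handles multiclass data directly.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kda_mse_fit(X, y, gamma=0.5, reg=1e-3):
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets
    K = rbf_kernel(X, X, gamma)
    A = np.linalg.solve(K + reg * np.eye(len(X)), T)    # dual coefficients
    return A, classes

def kda_mse_predict(Xtr, A, classes, Xte, gamma=0.5):
    scores = rbf_kernel(Xte, Xtr, gamma) @ A
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(i, 1, (15, 10)) for i in range(3)])
y = np.repeat([0, 1, 2], 15)
A, classes = kda_mse_fit(X, y)
pred = kda_mse_predict(X, A, classes, X)
```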

    Neural Class-Specific Regression for face verification

    Face verification is a problem approached in the literature mainly using nonlinear class-specific subspace learning techniques. While it has been shown that kernel-based Class-Specific Discriminant Analysis is able to provide excellent performance in small- and medium-scale face verification problems, its application to today's large-scale problems is difficult due to its training space and computational requirements. In this paper, generalizing our previous work on kernel-based class-specific discriminant analysis, we show that class-specific subspace learning can be cast as a regression problem. This allows us to derive linear, (reduced) kernel, and neural network-based class-specific discriminant analysis methods using efficient batch and/or iterative training schemes suited for large-scale learning problems. We test the performance of these methods on two datasets describing medium- and large-scale face verification problems.
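
    A minimal sketch of the regression view, assuming a linear model and illustrative target values (the paper's neural and reduced-kernel variants are not reproduced): client samples are regressed to one target and impostor samples to another, which ridge regression solves in closed form.

```python
import numpy as np

def class_specific_regression(X, is_client, t_client=1.0, t_impostor=0.0,
                              reg=1e-3):
    # Regress every sample onto a class-specific target; the target values
    # here are illustrative assumptions, not the paper's choice.
    t = np.where(is_client, t_client, t_impostor)
    Xb = np.hstack([X, np.ones((len(X), 1))])           # append bias term
    return np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ t)

def verify(w, x, threshold=0.5):
    # Accept the claimed identity if the regressed score clears a threshold.
    return np.append(x, 1.0) @ w > threshold

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(1, 1, (15, 6)), rng.normal(-1, 1, (45, 6))])
is_client = np.array([True] * 15 + [False] * 45)
w = class_specific_regression(X, is_client)
accepted = verify(w, X[0])
```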

    Direct kernel biased discriminant analysis: a new content-based image retrieval relevance feedback algorithm

    In recent years, a variety of relevance feedback (RF) schemes have been developed to improve the performance of content-based image retrieval (CBIR). Given user feedback information, the key to an RF scheme is how to select a subset of image features to construct a suitable dissimilarity measure. Among various RF schemes, biased discriminant analysis (BDA) based RF is one of the most promising. It is based on the observation that all positive samples are alike, while in general each negative sample is negative in its own way. However, when using BDA, the small sample size (SSS) problem is a big challenge, as users tend to give only a small number of feedback samples. To address this issue, this paper proposes a direct kernel BDA (DKBDA), which is less sensitive to SSS. An incremental DKBDA (IDKBDA) is also developed to speed up the analysis. Experimental results are reported on a real-world image collection to demonstrate that the proposed methods outperform traditional kernel BDA (KBDA) and support vector machine (SVM) based RF algorithms.
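
    A minimal sketch of linear BDA, with a small ridge term standing in for the paper's direct-kernel treatment of the SSS problem: positive samples are kept compact around their mean while negatives are scattered away from it, via a generalized eigenproblem.

```python
import numpy as np

def bda_directions(X_pos, X_neg, n_dims=2, reg=1e-3):
    mu = X_pos.mean(axis=0)                   # centroid of positive feedback
    Sp = (X_pos - mu).T @ (X_pos - mu)        # scatter of positives
    Sn = (X_neg - mu).T @ (X_neg - mu)        # scatter of negatives about mu
    # Solve Sn w = lambda (Sp + reg*I) w; top eigenvectors span the subspace
    # that pushes negatives away from the positive cluster.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sp + reg * np.eye(len(mu)), Sn))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:n_dims]].real

rng = np.random.default_rng(5)
X_pos = rng.normal(0, 0.5, (8, 10))   # few positives, as in RF settings
X_neg = rng.normal(0, 2.0, (40, 10))  # each negative "negative in its own way"
W = bda_directions(X_pos, X_neg)
Z = X_neg @ W                          # project into the biased subspace
```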

    Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings

    The recovery of the intrinsic geometric structures of data collections is an important problem in data analysis. Supervised extensions of several manifold learning approaches have been proposed in recent years. However, existing methods primarily focus on the embedding of the training data, and the generalization of the embedding to initially unseen test data is largely ignored. In this work, we build on recent theoretical results on the generalization performance of supervised manifold learning algorithms. Motivated by these performance bounds, we propose a supervised manifold learning method that computes a nonlinear embedding while constructing a smooth and regular interpolation function that extends the embedding to the whole data space in order to achieve satisfactory generalization. The embedding and the interpolator are learnt jointly such that Lipschitz regularity is imposed on the interpolator while the separation between different classes is ensured. Experimental results on several image data sets show that the proposed method outperforms traditional classifiers and competing supervised dimensionality reduction algorithms in terms of classification accuracy in most settings.
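
    A minimal sketch of the out-of-sample extension, assuming a ridge-regularized Gaussian RBF interpolator in place of the paper's jointly learnt, Lipschitz-controlled one: the interpolator reproduces the training embedding and maps unseen points smoothly into the same space.

```python
import numpy as np

def fit_rbf_interpolator(X, Y_embed, gamma=0.5, reg=1e-6):
    # Fit a smooth map from input space to the embedding space; the ridge
    # term keeps the interpolator regular (a crude stand-in for the paper's
    # explicit Lipschitz control).
    Phi = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    W = np.linalg.solve(Phi + reg * np.eye(len(X)), Y_embed)
    def embed(Xnew):
        d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2) @ W
    return embed

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 6))
Y_embed = X @ rng.normal(size=(6, 2))   # stand-in for a learned embedding
embed = fit_rbf_interpolator(X, Y_embed)
Z_new = embed(rng.normal(size=(5, 6)))  # map initially unseen test points
```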