    Face recognition in different subspaces - A comparative study

    Face recognition is one of the most successful applications of image analysis and understanding and has gained much attention in recent years. Among the many approaches to the problem of face recognition, appearance-based subspace analysis still gives the most promising results. In this paper we study the three most popular appearance-based face recognition projection methods (PCA, LDA and ICA). All methods are tested under equal working conditions regarding preprocessing and algorithm implementation, on the FERET data set with its standard tests. We also compare the ICA method with its whitening preprocess and find that there is no significant difference between them. When we compare different projections with different metrics, we find that the LDA+COS combination is the most promising for all tasks. The L1 metric gives the best results in combination with PCA and ICA1, and COS is superior to any other metric when used with LDA and ICA2. Our results are compared to other studies and some discrepancies are pointed out.
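
    As a rough illustration of the pairings reported above, here is a minimal sketch, not the authors' FERET protocol: PCA+L1 and LDA+COS nearest-neighbor matching. The random X_train/y_train/probe arrays are hypothetical placeholders for vectorized face images.

    ```python
    # Sketch of projection + metric pairings; data is a synthetic placeholder.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def cos_dist(a, b):
        # Cosine distance between projected feature vectors.
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def l1_dist(a, b):
        # L1 (city-block) distance.
        return np.abs(a - b).sum()

    def nn_identify(gallery, labels, probe, dist):
        # Nearest-neighbor identification under a given metric.
        return labels[np.argmin([dist(g, probe) for g in gallery])]

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 1024))      # placeholder face vectors
    y_train = rng.integers(0, 20, size=200)     # placeholder identities
    probe = rng.normal(size=(1, 1024))

    pca = PCA(n_components=50).fit(X_train)                   # PCA pairs best with L1
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)  # LDA pairs best with COS

    print(nn_identify(pca.transform(X_train), y_train, pca.transform(probe)[0], l1_dist))
    print(nn_identify(lda.transform(X_train), y_train, lda.transform(probe)[0], cos_dist))
    ```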

    Eigenvalue correction results in face recognition

    Eigenvalues of sample covariance matrices are often used in biometrics. It has been known for several decades that even though the sample covariance matrix is an unbiased estimate of the real covariance matrix [Fukunaga, 1990], the eigenvalues of the sample covariance matrix are biased estimates of the real eigenvalues [Silverstein, 1986]. This bias is particularly dominant when the number of samples used for estimation is of the same order as the number of dimensions, as is often the case in biometrics. We investigate the effects of this bias on error rates in verification experiments and show that eigenvalue correction can improve recognition performance.
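
    The bias is easy to reproduce in simulation. The sketch below (illustrative only, not the paper's correction method) draws from a distribution whose true eigenvalues are all exactly 1, yet the sample spectrum spreads well away from 1 once the sample count is of the same order as the dimensionality.

    ```python
    # Demonstrate eigenvalue bias of the sample covariance matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_samples = 100, 120                  # samples ~ dimensions
    X = rng.standard_normal((n_samples, dim))  # true covariance = identity

    sample_cov = np.cov(X, rowvar=False)       # unbiased covariance estimate
    eigvals = np.linalg.eigvalsh(sample_cov)   # ascending order

    # Every true eigenvalue is 1, but the sample extremes are biased
    # upward/downward, roughly following the Marchenko-Pastur law.
    print(f"smallest sample eigenvalue: {eigvals[0]:.2f}")   # well below 1
    print(f"largest sample eigenvalue:  {eigvals[-1]:.2f}")  # well above 1
    ```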

    WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis

    Deep neural networks are susceptible to generating overconfident yet erroneous predictions when presented with data beyond known concepts. This challenge underscores the importance of detecting out-of-distribution (OOD) samples in the open world. In this work, we propose a novel feature-space OOD detection score based on class-specific and class-agnostic information. Specifically, the approach utilizes Whitened Linear Discriminant Analysis to project features into two subspaces - the discriminative and residual subspaces - in which the in-distribution (ID) classes are maximally separated and closely clustered, respectively. The OOD score is then determined by combining the deviation of the input data from the ID pattern in both subspaces. The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark, with six OOD datasets that cover a variety of distribution shifts. WDiscOOD demonstrates superior performance on deep classifiers with diverse backbone architectures, including CNNs and vision transformers. Furthermore, we also show that WDiscOOD more effectively detects novel concepts in representation spaces trained with contrastive objectives, including supervised contrastive loss and multi-modality contrastive loss.
    Comment: Accepted by ICCV 2023. Code is available at: https://github.com/ivalab/WDiscOOD.git
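
    A hedged sketch of the scoring idea as described in the abstract: whiten the features, split the whitened space into a discriminative subspace (top between-class directions) and its residual complement, and sum the deviations from the ID pattern in both. The particular whitening, subspace split, and unweighted score combination below are assumptions, not the published method; the official code is linked above.

    ```python
    # Sketch of a WDiscOOD-style two-subspace score (assumptions noted above).
    import numpy as np

    def fit(X, y, k):
        mu = X.mean(axis=0)
        cov = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        evals, evecs = np.linalg.eigh(cov)
        W = evecs / np.sqrt(evals)                  # whitening transform
        Xw = (X - mu) @ W
        means = np.stack([Xw[y == c].mean(axis=0) for c in np.unique(y)])
        Sb = np.cov(means, rowvar=False)            # between-class scatter
        _, V = np.linalg.eigh(Sb)                   # ascending eigenvalues
        return mu, W, means, V[:, -k:], V[:, :-k]   # discriminative / residual

    def ood_score(x, mu, W, means, U_disc, U_res):
        xw = (x - mu) @ W
        # Discriminative subspace: distance to the closest ID class mean.
        d_disc = np.linalg.norm((xw - means) @ U_disc, axis=1).min()
        # Residual subspace: ID features cluster tightly here, so a large
        # norm signals a shift away from the ID pattern.
        d_res = np.linalg.norm(xw @ U_res)
        return d_disc + d_res                       # higher = more OOD-like
    ```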

    Face Recognition Methodologies Using Component Analysis: The Contemporary Affirmation of The Recent Literature

    This paper explores the contemporary affirmation of the recent literature in the context of face recognition systems, a review motivated by contradictory claims in the literature. It shows how the relative performance of recent claims based on methodologies such as PCA and ICA depends on the task statement. It then explores the space of each model acclaimed in the recent literature. In the process, this paper verifies the results of many of the face recognition models in the literature and relates them to each other and to this work.

    Discriminative Features via Generalized Eigenvectors

    Representing examples in a way that is compatible with the underlying classifier can greatly enhance the performance of a learning system. In this paper we investigate scalable techniques for inducing discriminative features by taking advantage of simple second-order structure in the data. We focus on multiclass classification and show that features extracted from the generalized eigenvectors of the class-conditional second moments lead to classifiers with excellent empirical performance. Moreover, these features have attractive theoretical properties, such as inducing representations that are invariant to linear transformations of the input. We evaluate classifiers built from these features on three different tasks, obtaining state-of-the-art results.
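
    As a concrete reading of the construction, the sketch below (illustrative, not the authors' implementation) solves the generalized eigenproblem between two class-conditional second-moment matrices and projects onto the eigenvectors with the most extreme eigenvalue ratios.

    ```python
    # Features from generalized eigenvectors of class-conditional second moments.
    import numpy as np
    from scipy.linalg import eigh

    def geneig_features(X, y, class_a, class_b, k=5):
        Xa, Xb = X[y == class_a], X[y == class_b]
        Ca = Xa.T @ Xa / len(Xa)                 # E[x x^T | class a]
        Cb = Xb.T @ Xb / len(Xb)                 # E[x x^T | class b]
        # Solve Ca v = lambda Cb v; regularize Cb so it is positive definite.
        evals, evecs = eigh(Ca, Cb + 1e-6 * np.eye(X.shape[1]))
        # Directions with the most extreme variance ratios (lambda far from 1)
        # are the most discriminative between the two classes.
        order = np.argsort(-np.abs(np.log(np.maximum(evals, 1e-12))))
        return X @ evecs[:, order[:k]]           # projected features
    ```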

    FAME: Face Association through Model Evolution

    We attack the problem of learning face models for public faces from weakly labelled images collected from the web by querying a name. The data is very noisy even after face detection, with several irrelevant faces corresponding to other people. We propose a novel method, Face Association through Model Evolution (FAME), that is able to prune the data in an iterative way, allowing the face models associated with a name to evolve. The idea is based on capturing the discriminativeness and representativeness of each instance and eliminating the outliers. The final models are used to classify faces on novel datasets with possibly different characteristics. On benchmark datasets, our results are comparable to or better than state-of-the-art studies for the task of face identification.
    Comment: Draft version of the study
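
    The iterative pruning could look roughly like the loop below: fit a linear model for the queried name against other faces, drop the lowest-scoring fraction of instances as likely outliers, and refit so the model evolves. This is a sketch under assumptions; the logistic-regression scorer and fixed drop fraction are illustrative choices, not the published FAME formulation.

    ```python
    # Sketch of an iterative outlier-pruning loop (assumed scoring rule).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def prune_iteratively(X_name, X_other, rounds=3, drop_frac=0.1):
        keep = np.arange(len(X_name))                 # indices still trusted
        for _ in range(rounds):
            X = np.vstack([X_name[keep], X_other])
            y = np.r_[np.ones(len(keep)), np.zeros(len(X_other))]
            clf = LogisticRegression(max_iter=1000).fit(X, y)
            scores = clf.decision_function(X_name[keep])
            n_drop = max(1, int(drop_frac * len(keep)))
            keep = keep[np.argsort(scores)[n_drop:]]  # drop least name-like
        return keep                                   # surviving instance indices
    ```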