
    Face Recognition in Color Using Complex and Hypercomplex Representation

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-72847-4_29
    Color carries plenty of discriminative information that can be used to improve the performance of face recognition algorithms, although its high variability makes it difficult to exploit. In this paper we investigate the use of the quaternion representation of a color image for face recognition. We also propose a new representation for color images based on complex numbers. These two color representation methods are compared with the traditional grayscale and RGB representations using an eigenfaces-based algorithm for identity verification. The experimental results show that the proposed method gives a very significant improvement when compared to using only the illuminance information.
    Work supported by the Spanish Project DPI2004-08279-C02-02 and the Generalitat Valenciana - Consellería d'Empresa, Universitat i Ciència under an FPI scholarship.
    Villegas, M.; Paredes Palacios, R. (2007). Face Recognition in Color Using Complex and Hypercomplex Representation. In: Pattern Recognition and Image Analysis: Third Iberian Conference, IbPRIA 2007, Girona, Spain, June 6-8, 2007, Proceedings, Part I. Springer Verlag (Germany), pp. 217-224. https://doi.org/10.1007/978-3-540-72847-4_29
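    As an illustrative sketch (not the authors' code), the hypercomplex representation above amounts to encoding each RGB pixel as a pure quaternion with the three channels in the i, j, k parts, on which Hamilton's product acts:

```python
# Sketch: an RGB pixel as a pure quaternion r*i + g*j + b*k, the
# hypercomplex representation the paper builds on, with Hamilton's
# product rule for quaternions given as (w, x, y, z) tuples.

def qmul(p, q):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rgb_to_quaternion(r, g, b):
    """A color pixel as a pure quaternion: zero real part, RGB in i, j, k."""
    return (0.0, r, g, b)

q = rgb_to_quaternion(0.8, 0.2, 0.1)
# Multiplying by the conjugate yields the squared norm r^2 + g^2 + b^2 in
# the real part: magnitude information is kept while the channels stay
# coupled in a single algebraic object.
norm_sq = qmul(q, (q[0], -q[1], -q[2], -q[3]))[0]
```

    The point of the representation is exactly this coupling: linear operations such as the eigenface projection act on all three channels jointly rather than channel by channel.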

    Active illumination and appearance model for face alignment

    Get PDF

    Automatic Fitting of a Deformable Face Mask Using a Single Image

    Get PDF
    We propose an automatic method for person-independent fitting of a deformable 3D face mask model under varying illumination conditions. Principal Component Analysis is used to build a face model, which is then employed within a particle-filter-based approach to fit the mask to the image. By subdividing a coarse mask and using a novel texture mapping technique, we further apply the 3D face model to fit lower-resolution images. Illumination invariance is achieved by representing each face as a combination of harmonic images within the weighting function of the particle filter. We demonstrate the performance of our approach on the IMM Face Database and the Extended Yale Face Database and show that it outperforms the Active Shape Models approach.
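    The fitting loop described above can be sketched generically. This is a minimal particle-filter step, not the paper's implementation: `render_residual` is a hypothetical stand-in for comparing a rendered mask against the image (the paper's harmonic-image weighting), and particles here are plain parameter vectors.

```python
import math
import random

def render_residual(params, target):
    # Placeholder residual: squared distance of the candidate parameters
    # from (assumed known) reference parameters. The real system would
    # compare rendered mask appearance against image pixels instead.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def particle_filter_step(particles, target, sigma=0.1, rng=random):
    # Score every candidate mask configuration...
    weights = [math.exp(-render_residual(p, target)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # ...resample proportionally to weight, then jitter (prediction step).
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    return [[x + rng.gauss(0, sigma) for x in p] for p in resampled]
```

    Repeating this step concentrates the particle cloud around the parameters that best explain the image, which is what makes the approach robust to a poor initial guess.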

    Preprocessing Technique for Face Recognition Applications under Varying illumination Conditions

    Get PDF
    In recent years, face recognition has become a popular area of research in computer vision; it is typically used in network security and access control systems, but it is also useful in other multimedia information processing areas. The performance of a face verification system depends on many conditions, and one of the most problematic is varying illumination. In this paper, we discuss a preprocessing method that addresses one of the common problems in face images captured by real systems: lighting variations. The stages include gamma correction, Difference of Gaussians (DoG) filtering, and contrast equalization. Gamma correction enhances the local dynamic range of the image in dark or shadowed regions while compressing it in bright regions, and is determined by the value of γ. DoG filtering is a grayscale image enhancement algorithm that eliminates shadowing effects. Contrast equalization rescales the image intensities to standardize a robust measure of overall intensity variation. The technique has been applied to the Yale-B data sets, Face Recognition Grand Challenge (FRGC) version 2 Experiment 4, and a real-time created data set.
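    The three-stage chain can be sketched as follows. This is an illustrative implementation on a 1-D intensity profile for brevity (real systems apply the same steps to 2-D images), and the parameter values (γ, the two Gaussian widths, the equalization exponent) are assumptions, not the paper's settings.

```python
import math

def gamma_correct(pixels, gamma=0.2):
    # For gamma < 1: expands dark regions, compresses bright ones.
    return [p ** gamma for p in pixels]

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalized so a flat signal is unchanged

def convolve(pixels, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(pixels) - 1)  # replicate border
            acc += w * pixels[idx]
        out.append(acc)
    return out

def dog_filter(pixels, sigma1=1.0, sigma2=2.0):
    # Difference of Gaussians: a band-pass that suppresses the smooth,
    # large-scale shading caused by the light source.
    r = int(3 * sigma2)
    a = convolve(pixels, gaussian_kernel(sigma1, r))
    b = convolve(pixels, gaussian_kernel(sigma2, r))
    return [x - y for x, y in zip(a, b)]

def contrast_equalize(pixels, alpha=0.1, eps=1e-8):
    # Rescale by a robust (power-mean) measure of overall variation.
    scale = (sum(abs(p) ** alpha for p in pixels) / len(pixels)) ** (1 / alpha)
    return [p / (scale + eps) for p in pixels]

def preprocess(pixels):
    return contrast_equalize(dog_filter(gamma_correct(pixels)))
```

    Note the ordering matters: gamma correction first brings shadowed detail into range, the DoG then removes the remaining low-frequency shading, and equalization last puts all images on a comparable intensity scale.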

    Illumination tolerance in facial recognition

    Get PDF
    In this research, five different preprocessing techniques were paired with two different classifiers to find the best preprocessor + classifier combination for building an illumination-tolerant face recognition system. A face recognition system is proposed based on illumination normalization techniques and a linear subspace model, using two distance metrics on three challenging yet interesting databases: the CAS-PEAL database, the Extended Yale B database, and the AT&T database. The research takes the form of experimentation and analysis in which five illumination normalization techniques were compared and analyzed using two different distance metrics. The performance and execution times of the various techniques were recorded and measured for accuracy and efficiency. The illumination normalization techniques were Gamma Intensity Correction (GIC), Discrete Cosine Transform (DCT), Histogram Remapping using the Normal distribution (HRN), Histogram Remapping using the Log-normal distribution (HRL), and the Anisotropic Smoothing technique (AS). The linear subspace models utilized were Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The two distance metrics were the Euclidean and cosine distances. The results showed that for databases with both illumination (shadow) and lighting (over-exposure) variations, such as CAS-PEAL, the histogram remapping technique with the normal distribution produced excellent results when the cosine distance was used as the classifier, with a 65% recognition rate at 15.8 ms/img. Alternatively, for databases consisting of pure illumination variation, such as the Extended Yale B database, Gamma Intensity Correction (GIC) combined with the Euclidean distance metric gave the most accurate result, with 95.4% recognition accuracy at 1 ms/img.
    It was further observed across the experiments that the cosine distance produces more accurate results than the Euclidean distance metric; however, the Euclidean distance was faster than the cosine distance in all the experiments conducted.
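    The two distance metrics compared in the study, used here for nearest-neighbour matching in a projected subspace, can be written as follows (a sketch, not the authors' implementation; the gallery layout is a hypothetical example):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # 1 - cos(angle): ignores vector magnitude, only direction matters.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def nearest(probe, gallery, metric):
    """Return the identity whose gallery template is closest to the probe."""
    return min(gallery, key=lambda ident: metric(probe, gallery[ident]))
```

    Because the cosine distance discards magnitude, a global rescaling of the projected features (e.g. from over-exposure) leaves it unchanged, which is consistent with the finding that it performed better on the mixed shadow/over-exposure data.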

    Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

    Get PDF
    We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach is to learn an overcomplete dictionary that provides good approximations of the training samples using sparse synthesis coefficients. This well-known sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this model, signals are characterised by their parsimony in a transformed domain, using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation; the constraint is introduced to exclude trivial solutions. Although there is no definitive answer as to which constraint is most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using noisy cosparse signals, which is a more realistic experiment. As the derived optimisation problem is not a convex program, such variational methods often find only a local minimum. Some local optimality conditions are derived for two different settings, providing preliminary theoretical support for the well-posedness of the learning problem under appropriate conditions.
    Comment: 29 pages, 13 figures, accepted to be published in TS
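    To make the cosparse analysis model concrete (this sketches the model itself, not the paper's learning algorithm): a signal x is cosparse under an analysis operator Ω when many entries of Ωx are zero. The finite-difference operator below is a standard textbook example, not one learned by the paper's method.

```python
def analysis_coefficients(omega, x):
    """Apply a (rows x dim) analysis operator Omega to a signal vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in omega]

def cosparsity(omega, x, tol=1e-10):
    """Count zero analysis coefficients; larger means more cosparse."""
    return sum(1 for c in analysis_coefficients(omega, x) if abs(c) <= tol)

# Example: a finite-difference operator makes piecewise-constant signals
# highly cosparse, since differences vanish inside each constant piece.
omega = [[1, -1, 0, 0],
         [0, 1, -1, 0],
         [0, 0, 1, -1]]
```

    The learning problem the paper studies is the reverse direction: given many signals x, find an Ω (constrained, e.g. to a UNTF, to rule out the trivial Ω = 0) that makes them all as cosparse as possible.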

    Face Identification and Clustering

    Full text link
    In this thesis, we study two problems based on clustering algorithms. In the first problem, we study the role of visual attributes, using an agglomerative clustering algorithm to whittle down the search space when the number of classes is high and thereby improve clustering performance. We observe that as we add more attributes, clustering performance increases overall. In the second problem, we study the role of clustering in aggregating templates in a 1:N open-set protocol using multi-shot video as a probe. We observe that increasing the number of clusters improves performance over the baseline until it reaches a peak, after which adding more clusters causes performance to degrade. Experiments are conducted using the recently introduced unconstrained IARPA Janus IJB-A, CS2, and CS3 face recognition datasets.
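    For reference, a minimal single-linkage agglomerative clustering sketch of the kind of algorithm the thesis builds on; this is a generic textbook version on 1-D points for clarity, not the thesis's implementation, which clusters high-dimensional face descriptors.

```python
def agglomerative(points, k):
    """Merge the closest pair of clusters (single linkage) until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between closest members.
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]
```

    Stopping the merging at different values of k is exactly the knob whose effect on 1:N performance the second study measures.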