
    The effect of position sources on estimated eigenvalues in intensity modeled data

    In biometrics, models are often used in which the data distributions are approximated with normal distributions. In particular, the eigenface method models facial data as a mixture of fixed-position intensity signals with a normal distribution. The model parameters, a mean value and a covariance matrix, need to be estimated from a training set. Scree plots showing the eigenvalues of the estimated covariance matrices have two very typical characteristics when facial data is used: firstly, most of the curve can be approximated by a straight line on a double logarithmic plot, and secondly, if the number of samples used for the estimation is smaller than the dimensionality of these samples, using more samples for the estimation results in more intensity sources being estimated, and a larger part of the scree plot curve is accurately modeled by a straight line.
    One explanation for this behaviour is that the fixed-position intensity model is an inaccurate model of facial data. This is further supported by previous experiments in which synthetic data with the same second-order statistics as facial data gives a much higher performance of biometric systems. We hypothesize that some of the sources in face data are better modeled as position sources, and therefore the fixed-position intensity sources model should be extended with position sources. Examples of features in the face which might change position, either between images of different people or between images of the same person, are the eyes, the pupils within the eyes and the corners of the mouth.
    We show experimentally that when data containing a limited number of position sources is used in a system based on the fixed-position intensity sources model, the resulting scree plots have characteristics similar to the scree plots of facial data, thus supporting our claim that facial data contains at least some sources that are inaccurately modeled by the fixed-position intensity sources model, and that position sources might provide a better model for them.
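
    The scree-plot computation described above amounts to a few lines of linear algebra. The sketch below is not the authors' code; it uses a synthetic stand-in for the training set and simply estimates a covariance matrix from vectorised images and plots its eigenvalues on double-logarithmic axes.

```python
# Sketch: scree plot of the eigenvalues of an estimated covariance matrix.
import numpy as np
import matplotlib.pyplot as plt

def scree_plot(samples):
    """samples: (n_samples, n_pixels) array of vectorised face images."""
    mean = samples.mean(axis=0)                 # estimated mean value
    cov = np.cov(samples - mean, rowvar=False)  # estimated covariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]     # eigenvalues, descending order
    eigvals = eigvals[eigvals > 1e-12]          # with n_samples < n_pixels, only
                                                # n_samples - 1 are non-zero
    plt.loglog(np.arange(1, eigvals.size + 1), eigvals, marker='.')
    plt.xlabel('eigenvalue index')
    plt.ylabel('eigenvalue')
    plt.show()

# Hypothetical stand-in for a training set: 100 "images" of 32x32 pixels.
rng = np.random.default_rng(0)
scree_plot(rng.normal(size=(100, 32 * 32)))
```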

    Automatic face alignment by maximizing similarity score

    Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a face registration method which searches for the optimal alignment by maximizing the score of a face recognition algorithm. In this paper we investigate the practical usability of our face registration method. Experiments show that our registration method achieves better results in face verification than the landmark-based registration method. We even obtain face verification results similar to those obtained using landmark-based registration with manually located eyes, nose and mouth as landmarks. The performance of the method is tested on the FRGCv1 database using images taken under both controlled and uncontrolled conditions.
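
    A minimal sketch of the idea, not the authors' implementation: a grid search over rotation and translation of the probe image, keeping the transformation that maximizes the similarity score of whatever recognition algorithm is plugged in. The match_score callback and the search ranges are assumptions for illustration.

```python
# Sketch: choose the alignment that maximizes a face verifier's similarity score.
import numpy as np
from scipy import ndimage

def align_by_score(probe, gallery, match_score):
    """probe, gallery: 2D grey-level images; match_score: any similarity function."""
    best_score, best_params = -np.inf, None
    for angle in np.linspace(-10, 10, 9):            # rotation in degrees
        rotated = ndimage.rotate(probe, angle, reshape=False)
        for dy in range(-4, 5, 2):                   # translation in pixels
            for dx in range(-4, 5, 2):
                candidate = ndimage.shift(rotated, (dy, dx))
                score = match_score(candidate, gallery)
                if score > best_score:
                    best_score, best_params = score, (angle, dy, dx)
    return best_score, best_params
```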

    Facial recognition using new LBP representations

    In this paper, we propose a facial recognition approach based on the LBP operator. We divide the face into non-overlapping regions. We then classify a training set using one region at a time under different configurations of the LBP operator. Based on the best recognition rate, we assign a weight and a specific LBP configuration to each region. To represent the face image, we extract LBP histograms with the region-specific configuration (radius and neighbors) and concatenate them into a feature histogram. We also propose a multi-resolution approach to gather local and global information and improve the recognition rate. To evaluate the proposed approach, we use the FERET data set, which includes different facial expressions, lighting, and aging of the subjects. In addition, a weighted chi-square is used as the dissimilarity measure. The experimental results show a considerable improvement over the original approach.
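
    A minimal sketch of the region-wise LBP description and the weighted chi-square dissimilarity. The grid size, the (radius, neighbors) configuration and the region weights are illustrative assumptions, not the tuned per-region values from the paper.

```python
# Sketch: per-region LBP histograms and a weighted chi-square dissimilarity.
import numpy as np
from skimage.feature import local_binary_pattern

def region_lbp_histograms(face, grid=(7, 7), radius=1, neighbors=8):
    """Split a grey-level face into a grid and return one LBP histogram per region."""
    lbp = local_binary_pattern(face, neighbors, radius, method='uniform')
    n_bins = neighbors + 2                      # uniform LBP codes: 0 .. neighbors + 1
    h, w = face.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(region, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.array(hists)                      # shape: (n_regions, n_bins)

def weighted_chi_square(h1, h2, weights, eps=1e-10):
    """Weighted chi-square dissimilarity between two sets of region histograms."""
    per_region = ((h1 - h2) ** 2 / (h1 + h2 + eps)).sum(axis=1)
    return float((weights * per_region).sum())
```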

    Fast and Accurate 3D Face Recognition Using Registration to an Intrinsic Coordinate System and Fusion of Multiple Region classifiers

    In this paper we present a new robust approach for 3D face registration to an intrinsic coordinate system of the face. The intrinsic coordinate system is defined by the vertical symmetry plane through the nose, the tip of the nose and the slope of the bridge of the nose. In addition, we propose a 3D face classifier based on the fusion of many dependent region classifiers for overlapping face regions. The region classifiers use PCA-LDA for feature extraction and the likelihood ratio as a matching score. Fusion is realised using straightforward majority voting for the identification scenario. For verification, a voting approach is used as well, and the decision is made by comparing the number of votes to a threshold. Using the proposed registration method combined with a classifier consisting of 60 fused region classifiers, we obtain a 99.0% identification rate on the all vs first identification test of the FRGC v2 data. A verification rate of 94.6% at FAR = 0.1% was obtained for the all vs all verification test on the FRGC v2 data using fusion of 120 region classifiers. The first is the highest reported performance and the second is in the top 5 of best-performing systems on these tests. In addition, our approach is much faster than other methods, taking only 2.5 seconds per image for registration and less than 0.1 ms per comparison. Because we apply feature extraction using PCA and LDA, the resulting template size is also very small: 6 kB for 60 region classifiers.
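
    A minimal sketch of the fusion step only, assuming the per-region matching scores (e.g. the likelihood ratios mentioned above) are already available; the registration and PCA-LDA feature extraction are outside this snippet, and the thresholds are placeholders.

```python
# Sketch: majority-voting fusion of region classifiers for identification
# and a vote-counting decision rule for verification.
import numpy as np

def identify_by_majority_vote(region_scores):
    """region_scores: (n_regions, n_gallery) matching scores.
    Each region classifier votes for its best-matching gallery subject;
    the identity with the most votes wins."""
    votes = np.argmax(region_scores, axis=1)
    return int(np.bincount(votes, minlength=region_scores.shape[1]).argmax())

def verify_by_votes(region_scores, score_threshold, min_votes):
    """region_scores: (n_regions,) scores for one claimed identity.
    Each region votes 'accept' if its score exceeds a threshold; the claim is
    accepted when the number of accepting votes reaches min_votes."""
    return bool((region_scores > score_threshold).sum() >= min_votes)
```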

    Face Recognition Technique for Attendance Management System

    Attendance is crucial for tracking the presence of students, especially in Universiti Teknologi PETRONAS, because they need to achieve a minimum percentage of attendance to sit for the final examination. The lecturer needs to call the name of each student to confirm attendance, but this takes a long time. So the lecturer passes the attendance sheet to the students so that they can mark their attendance on their own. The problem arises when students cheat by asking their friends to sign on their behalf. Thus, a reliable attendance management system is needed to solve this problem. The aim of this project is to develop a system that is able to mark the attendance of students by recognizing their faces. There are four main stages in developing this system. First, the user's face needs to be detected by the system.
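
    As a sketch of the first stage only (face detection), the snippet below uses OpenCV's bundled Haar cascade; the abstract does not state which detector the project actually used, so this is an assumption for illustration.

```python
# Sketch: detect faces in an image with OpenCV's default Haar cascade.
import cv2

def detect_faces(image_path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    grey = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) bounding boxes for detected faces.
    return cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
```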

    Image conditions for machine-based face recognition of juvenile faces

    Machine-based facial recognition could help law enforcement and other organisations to match juvenile faces more efficiently. It is especially important when dealing with indecent images of children to minimise the workload and to address the moral and stamina challenges of human recognition. With growth-related changes, juvenile face recognition is challenging. The challenge relates not only to the growth of the child's face, but also to face recognition in the wild with unconstrained images. The aim of the study was to evaluate how different image conditions (i.e. black and white, cropped, blur and resolution reduction) affect machine-based facial recognition of juvenile age progression. The study used three off-the-shelf facial recognition algorithms (Microsoft Face API, Amazon Rekognition, and Face++) and compared the original images and the age progression images under the four image conditions against an older image of the child. The results showed a decrease in facial similarity with an increased age gap; in comparison to Microsoft, Amazon and Face++ showed higher confidence scores and were more resilient to a change in image condition. The image conditions 'black and white' and 'cropped' had a negative effect across all three systems. The relationship between age progression images and the younger original image was also explored. The results suggest manual age progression images are no more useful than the original image for facial identification of missing children, and Amazon and Face++ performed better with the original image.
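
    A minimal sketch of the four image conditions named in the abstract (black and white, cropped, blur, resolution reduction), using Pillow. The exact crop region, blur radius and downscaling factor used in the study are not given, so the values below are assumptions.

```python
# Sketch: generate the four image conditions from an original photograph.
from PIL import Image, ImageFilter

def apply_conditions(path):
    img = Image.open(path)
    w, h = img.size
    return {
        'black_and_white': img.convert('L'),                               # grey-scale
        'cropped': img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4)),      # central crop
        'blur': img.filter(ImageFilter.GaussianBlur(radius=3)),             # Gaussian blur
        'low_resolution': img.resize((w // 4, h // 4)).resize((w, h)),      # downscale, upscale
    }
```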

    A guided manual method for juvenile age progression using digital images

    Predicting the possible age-related changes to a child's face, age progression methods modify the shape, colour and texture of a facial image while retaining the identity of the individual. However, the techniques vary between practitioners. This study combines different age progression techniques for juvenile subjects: research based on longitudinal radiographic data, physical anthropometric measurements of the head and face, and digital image measurements in pixels. Utilising 12 anthropometric measurements of the face, this study documents a new workflow for digital manual age progression. An inter-observer error study (n = 5) included the comparison of two age progressions of the same individual at different ages. The proposed age progression method recorded satisfactory levels of repeatability based on the 12 anthropometric measurements. Seven measurements achieved an error below 8.60%. Facial anthropometric measurements involving the nasion (n) and trichion (tr) showed the most inconsistency (14–34% difference between the practitioners). Overall, the horizontal measurements were more accurate than the vertical measurements. The age progression images were compared using a manual morphological method and machine-based face recognition. The confidence scores generated by the three different facial recognition APIs suggest that the performance of any age progression varies not only between practitioners but also between facial recognition systems. The suggested new workflow was able to guide the positioning of the facial features, but the process of age progression remains dependent on artistic interpretation.
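
    A minimal sketch of how the inter-observer percentages above can be computed: the spread of each anthropometric measurement (in pixels) across practitioners, expressed relative to its mean. The measurement names and numbers below are illustrative placeholders, not the study's data.

```python
# Sketch: percentage difference of each measurement between practitioners.
import numpy as np

def percent_difference(measurements):
    """measurements: (n_practitioners, n_measurements) array of pixel distances."""
    mean = measurements.mean(axis=0)
    spread = measurements.max(axis=0) - measurements.min(axis=0)
    return 100.0 * spread / mean           # one percentage per measurement

# Hypothetical measurements (e.g. three facial distances) by two practitioners.
obs = np.array([[112.0, 64.0, 230.0],
                [118.0, 66.0, 221.0]])
print(percent_difference(obs))             # e.g. ~5.2%, ~3.1%, ~4.0%
```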

    DFT domain Feature Extraction using Edge-based Scale Normalization for Enhanced Face Recognition
