Offline Face Recognition System Based on Gabor-Fisher Descriptors and Hidden Markov Models
This paper presents a new offline face recognition system built on one-dimensional left-to-right Hidden Markov Models (1D-HMMs). Facial image features are extracted using Gabor wavelets, and their dimensionality is reduced with Fisher's Discriminant Analysis to retain only the most relevant information. Unlike existing 1D-HMM techniques, the classification step models the relationship between the reduced feature components directly, without an additional step of segmenting regions of interest in the face image. The method was evaluated on the AR database, where it achieved a high recognition rate.
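The pipeline described above (Gabor wavelet features followed by Fisher's Discriminant Analysis) can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the kernel parameters, pooling, and the two-class Fisher direction below are simplifying assumptions.

```python
import numpy as np

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a 2D Gabor kernel (parameter values are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orientations=4, size=7):
    """Filter an image with a small Gabor bank and pool response magnitudes."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(size, theta=k * np.pi / n_orientations)
        # FFT-based circular convolution with the image
        resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, s=img.shape)).real
        feats.append(np.abs(resp).mean())  # crude pooling, for illustration only
    return np.array(feats)

def fisher_direction(X, y):
    """Two-class Fisher discriminant direction w ∝ Sw^{-1}(m1 - m0)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # within-class scatter, with a small ridge term for numerical stability
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in (0, 1))
    return np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
```

In the paper the reduced feature vector is then fed to a 1D-HMM classifier; here the Fisher step alone shows how the dimensionality reduction keeps the most discriminative directions.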
3D face recognition based on machine learning
3D facial data has great potential for overcoming the problems of illumination and pose variation in face recognition. In this paper, we present a 3D face recognition system based on machine learning. We use landmarks for feature extraction and a Cascade-Correlation neural network (CCNN) to make the final decision. Experiments are presented using 3D face images from the Face Recognition Grand Challenge database version 2.0. With jack-knife evaluation, the CCNN achieved 100% accuracy for 7 faces with different expressions, with 100% for both specificity and sensitivity.
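The jack-knife (leave-one-out) evaluation protocol mentioned above can be sketched as follows. Since a Cascade-Correlation network is not available in common libraries, a 1-nearest-neighbour classifier stands in for it here; the point of the sketch is the hold-one-out protocol, not the classifier.

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out (jack-knife) accuracy of a 1-NN classifier.

    1-NN is a stand-in for the paper's Cascade-Correlation network;
    each sample is held out in turn and predicted from the rest.
    """
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the held-out sample
        correct += y[int(np.argmin(d))] == y[i]
    return correct / len(X)
```

With N samples this trains/evaluates N times, which is exactly why the protocol suits the small galleries (7 subjects) used in the paper.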
3D Face Recognition Benchmarks on the Bosphorus Database with Focus on Facial Expressions
This paper presents an evaluation of several 3D face recognizers on the Bosphorus database, which was gathered for studies of expression- and pose-invariant face analysis. We provide identification results for three 3D face recognition algorithms: a generic face template based ICP approach, a one-to-all ICP approach, and a depth image-based Principal Component Analysis (PCA) method. All of these techniques treat faces globally and are usually accepted as baseline approaches. In addition, 2D texture classifiers are incorporated in a fusion setting. Experimental results reveal that even though global shape classifiers achieve almost perfect identification in neutral-to-neutral comparisons, they are sub-optimal under extreme expression variations. We show that identification accuracy can be boosted by focusing on the rigid facial regions and by fusing complementary information from the shape and texture modalities.
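The depth image-based PCA baseline above follows the classic eigenface recipe. A minimal numpy sketch, assuming depth images are flattened to row vectors; component count and matching rule are illustrative choices, not the paper's settings.

```python
import numpy as np

def pca_identify(gallery, probes, n_components=5):
    """Eigenface-style baseline: project flattened depth images onto the
    gallery's principal components and match each probe to its nearest
    gallery entry. Returns the matched gallery index for each probe."""
    mean = gallery.mean(axis=0)
    Xc = gallery - mean
    # SVD yields the principal axes without forming the covariance matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]
    g = Xc @ W.T                      # gallery in PCA space
    p = (probes - mean) @ W.T         # probes in the same space
    # rank-1 identification: index of the closest gallery entry per probe
    d = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Because this treats the whole depth image globally, it inherits exactly the weakness the paper reports: strong expressions deform the depth map and pull probes away from their gallery projections.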
Biological landmarks vs. quasi-landmarks for 3D face recognition and gender classification
Face recognition and gender classification are vital topics in computer graphics and pattern recognition. We draw on two growing ideas in computer vision, biological landmarks and quasi-landmarks (dense mesh), to propose a novel approach comparing their performance on face recognition and gender classification. The experimental work is conducted on the FRGCv2 dataset, achieving face recognition accuracies of 98% with quasi-landmarks and 94% with biological landmarks. The gender classification accuracies are 92% for quasi-landmarks and 90% for biological landmarks.
Learning from Millions of 3D Scans for Large-Scale 3D Face Recognition
Deep networks trained on millions of facial images are believed to be closely approaching human-level performance in face recognition. However, open-world face recognition still remains a challenge. Although 3D face recognition has an inherent edge over its 2D counterpart, it has not benefited from the recent developments in deep learning because large training and test datasets are unavailable. Recognition accuracies have already saturated on existing 3D face datasets due to their small gallery sizes. Unlike 2D photographs, 3D facial scans cannot be sourced from the web, causing a bottleneck in the development of deep 3D face recognition networks and datasets. Against this backdrop, we propose a method for generating a large corpus of labeled 3D face identities, each with multiple instances, for training, and a protocol for merging the most challenging existing 3D datasets for testing. We also propose the first deep CNN model designed specifically for 3D face recognition, trained on 3.1 million 3D facial scans of 100K identities. Our test dataset comprises 1,853 identities with a single 3D scan in the gallery and another 31K scans as probes, which is several orders of magnitude larger than existing ones. Without fine-tuning on this dataset, our network already outperforms the state of the art in face recognition by over 10%. We fine-tune our network on the gallery set to perform end-to-end large-scale 3D face recognition, which further improves accuracy. Finally, we show the efficacy of our method for the open-world face recognition problem.
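The closed-set identification protocol described above (one enrolled scan per identity in the gallery, many probes) reduces at test time to nearest-neighbour matching in embedding space. A minimal sketch assuming L2-normalised CNN embeddings and cosine similarity; the function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def rank1_rate(gallery_emb, gallery_labels, probe_emb, probe_labels):
    """Closed-set rank-1 identification rate.

    Each probe embedding is matched to the most cosine-similar gallery
    embedding; the rate is the fraction of probes whose matched gallery
    identity equals their true identity."""
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    p = probe_emb / np.linalg.norm(probe_emb, axis=1, keepdims=True)
    predicted = gallery_labels[(p @ g.T).argmax(axis=1)]
    return float((predicted == probe_labels).mean())
```

For the open-world setting mentioned at the end, the usual extension is to threshold the best similarity score and reject probes whose match falls below it.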