12 research outputs found

    Heterogeneous Techniques used in Face Recognition: A Survey

    Face recognition has become one of the important areas of research in computer vision. Human communication combines verbal and non-verbal elements, and in social interaction the face serves as the primary canvas for expressing distinct emotions non-verbally, providing the most important natural means of non-verbal communication. In this paper we discuss the various works done in the area of face recognition, with a focus on intelligent approaches such as PCA, LDA, DFLD, SVD, and GA. The current trend is to combine these existing techniques, and such combinations are also discussed in this paper.
    Keywords: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Genetic Algorithm (GA), Direct Fractional LDA (DFLD)
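
    To make one of the surveyed techniques concrete, the following is a minimal sketch of PCA-based (eigenface-style) recognition using NumPy. The image size, nearest-neighbour matching, and random stand-in data are illustrative assumptions, not the procedure of any specific surveyed paper.

        import numpy as np

        def pca_subspace(train_vectors, n_components):
            """Compute a PCA basis from flattened face images (one row per image)."""
            mean = train_vectors.mean(axis=0)
            centered = train_vectors - mean
            # SVD of the centered data yields the principal directions in the rows of vt.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return mean, vt[:n_components]

        def project(vectors, mean, basis):
            """Project face vectors onto the PCA subspace."""
            return (vectors - mean) @ basis.T

        def nearest_neighbour(probe_feat, gallery_feats, gallery_labels):
            """Return the label of the closest gallery face in PCA space."""
            dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
            return gallery_labels[int(np.argmin(dists))]

        # Illustrative usage with random data standing in for flattened 92x112 face images.
        rng = np.random.default_rng(0)
        train = rng.random((40, 92 * 112))       # 40 training images, one row each
        labels = np.repeat(np.arange(10), 4)     # 10 subjects, 4 images per subject
        mean, basis = pca_subspace(train, n_components=20)
        gallery = project(train, mean, basis)
        probe = project(train[:1], mean, basis)[0]
        print(nearest_neighbour(probe, gallery, labels))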

    Evaluation of face recognition algorithms under noise

    One of the major applications of computer vision and image processing is face recognition, where a computerized algorithm automatically identifies a person's face in a large image dataset or even in a live video. This thesis addresses facial recognition, a topic that has been widely studied due to its importance in many applications in both civilian and military domains. The application of face recognition systems has expanded from security purposes to social networking sites, fraud management, and improving user experience. Numerous algorithms have been designed to perform face recognition with good accuracy, but the problem remains challenging due to the dynamic nature of the human face and the different poses it can take. Regardless of the algorithm, facial recognition accuracy can be heavily affected by the presence of noise. This thesis presents a comparison of traditional and deep learning face recognition algorithms in the presence of noise. For this purpose, Gaussian and salt-and-pepper noise are applied to face images drawn from the ORL dataset. Recognition is performed using each of the following eight algorithms: principal component analysis (PCA), two-dimensional PCA (2D-PCA), linear discriminant analysis (LDA), independent component analysis (ICA), discrete cosine transform (DCT), support vector machine (SVM), convolutional neural network (CNN), and AlexNet. The ORL dataset was used in the experiments to calculate the evaluation accuracy of each investigated algorithm. Each algorithm is evaluated in two experiments: in the first, only one image per person is used for training, whereas in the second, five images per person are used. The traditional algorithms are implemented in MATLAB and the deep learning approaches in Python. The results show that the best traditional performance was obtained using the DCT algorithm with 92% dominant eigenvalues, reaching 95.25% accuracy, whereas for deep learning the best performance was obtained with a CNN at 97.95% accuracy, making it the best choice under noisy conditions.
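
    As an illustration of the noise setup described above, the sketch below shows one common way of adding Gaussian and salt-and-pepper noise to grayscale images scaled to [0, 1] using NumPy; the noise levels are illustrative assumptions rather than the exact settings used in the thesis.

        import numpy as np

        def add_gaussian_noise(image, sigma=0.1):
            """Add zero-mean Gaussian noise to an image scaled to [0, 1]."""
            noisy = image + np.random.normal(0.0, sigma, image.shape)
            return np.clip(noisy, 0.0, 1.0)

        def add_salt_and_pepper_noise(image, amount=0.1):
            """Set a random fraction of pixels to 0 (pepper) or 1 (salt)."""
            noisy = image.copy()
            mask = np.random.random(image.shape)
            noisy[mask < amount / 2] = 0.0              # pepper
            noisy[mask > 1.0 - amount / 2] = 1.0        # salt
            return noisy

        # Illustrative usage on a random 112x92 stand-in for an ORL face image.
        face = np.random.random((112, 92))
        noisy_gaussian = add_gaussian_noise(face, sigma=0.1)
        noisy_sp = add_salt_and_pepper_noise(face, amount=0.1)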

    Biometric face recognition using multilinear projection and artificial intelligence

    PhD Thesis. Numerous problems of automatic facial recognition in linear and multilinear subspace learning have been addressed; nevertheless, many difficulties remain. This work focuses on two key problems for automatic facial recognition and feature extraction: object representation and high dimensionality. To address these problems, a bidirectional two-dimensional neighborhood preserving projection (B2DNPP) approach for human facial recognition has been developed. Compared with 2DNPP, the proposed method operates on 2-D facial images and performs reduction along both the row and column directions of the images, and it is able to reveal variations between these directions. To further improve the performance of the B2DNPP method, a new B2DNPP based on the curvelet decomposition of human facial images is introduced. The curvelet multi-resolution tool enhances the representation of edges and other singularities along curves, and thus improves directional features. In this method, an extreme learning machine (ELM) classifier is used, which significantly improves the classification rate. The proposed C-B2DNPP method decreases the error rate from 5.9% to 3.5%, from 3.7% to 2.0%, and from 19.7% to 14.2% on the ORL, AR, and FERET databases compared with 2DNPP, i.e. relative decreases in error rate of more than 40%, 45%, and 27% respectively. Facial images have particular natural structures in the form of two-, three-, or even higher-order tensors. Therefore, a novel method of supervised and unsupervised multilinear neighborhood preserving projection (MNPP) is proposed for face recognition. This allows the natural representation of multidimensional images as 2-D, 3-D, or higher-order tensors and extracts useful information directly from tensorial data rather than from matrices or vectors. As opposed to B2DNPP, which derives only two subspaces, the MNPP method obtains multiple interrelated subspaces over different tensor directions; the subspaces are learned iteratively by unfolding the tensor along the different directions. The performance of MNPP was evaluated in both modes of facial recognition biometric systems, identification and verification. The proposed supervised MNPP method achieved decreases of over 50.8%, 75.6%, and 44.6% in error rate on the ORL, AR, and FERET databases respectively, compared with 2DNPP. The results therefore demonstrate that the MNPP approach obtains the best overall performance in various learning scenarios.
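
    The MNPP description relies on unfolding a tensor along its different directions (modes); the sketch below shows a standard mode-n unfolding with NumPy as an illustration of that operation, not an implementation of MNPP itself, and the tensor shape is an assumed example.

        import numpy as np

        def unfold(tensor, mode):
            """Mode-n unfolding: bring axis `mode` to the front and flatten the rest."""
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        # Illustrative usage: five 112x92 grayscale face images stacked into a 3-D tensor.
        faces = np.random.random((112, 92, 5))
        rows_unfolding = unfold(faces, 0)      # shape (112, 92 * 5)
        cols_unfolding = unfold(faces, 1)      # shape (92, 112 * 5)
        sample_unfolding = unfold(faces, 2)    # shape (5, 112 * 92)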

    Swarm intelligence and evolutionary computation approaches for 2D face recognition: a systematic review

    A wide range of approaches for 2D face recognition (FR) systems can be found in the literature, owing both to its broad applicability and to issues that still need further investigation, including occlusion and variations in scale, facial expression, and illumination. Over recent years, a growing number of improved 2D FR systems using Swarm Intelligence and Evolutionary Computation algorithms have emerged. The present work provides an up-to-date Systematic Literature Review (SLR) of the use of Swarm Intelligence and Evolutionary Computation in 2D FR systems. The review also analyses and points out the key techniques and algorithms used and suggests directions for future research.
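
    As a rough illustration of how evolutionary computation is typically coupled to a 2D FR pipeline, the following sketch evolves a binary mask that selects face features with a simple genetic algorithm; the fitness function, operators, and parameters are illustrative assumptions and do not correspond to any specific method covered by the review.

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(mask, features, labels):
            """Leave-one-out nearest-neighbour accuracy on the selected features."""
            if not mask.any():
                return 0.0
            sub = features[:, mask.astype(bool)]
            correct = 0
            for i in range(len(sub)):
                dists = np.linalg.norm(sub - sub[i], axis=1)
                dists[i] = np.inf
                correct += labels[int(np.argmin(dists))] == labels[i]
            return correct / len(sub)

        def evolve(features, labels, pop_size=20, generations=30):
            """Evolve a binary feature-selection mask with a simple genetic algorithm."""
            n = features.shape[1]
            pop = rng.integers(0, 2, size=(pop_size, n))
            for _ in range(generations):
                scores = np.array([fitness(ind, features, labels) for ind in pop])
                parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
                children = parents.copy()
                cut = rng.integers(1, n)                             # one-point crossover
                children[:, cut:] = parents[::-1, cut:]
                flips = rng.random(children.shape) < 0.02            # bit-flip mutation
                children[flips] ^= 1
                pop = np.vstack([parents, children])
            scores = np.array([fitness(ind, features, labels) for ind in pop])
            return pop[int(np.argmax(scores))]

        # Illustrative usage with random vectors standing in for 2D face descriptors.
        feats = rng.random((30, 50))
        labels = np.repeat(np.arange(10), 3)
        best_mask = evolve(feats, labels)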

    FACE CLASSIFICATION FOR AUTHENTICATION APPROACH BY USING WAVELET TRANSFORM AND STATISTICAL FEATURES SELECTION

    This thesis consists of three parts: face localization, feature selection, and classification. Three methods were proposed to locate the face region in the input image: two based on a template-matching approach and one based on a clustering approach. Five face datasets, namely the YALE, MIT-CBCL, Indian, BioID, and Caltech databases, were used to evaluate the proposed methods. In the first method, the template image is prepared beforehand from a set of faces; the input image is then enhanced by applying an n-means kernel to reduce image noise, and Normalized Correlation (NC) is used to measure the correlation coefficients between the template image and regions of the input image. In the second method, instead of the n-means kernel, optimized metrics are used to measure the difference between the template image and the input image regions. In the last method, a Modified K-Means Algorithm is used to remove the non-face regions of the input image. The three methods achieved localization accuracies between 98% and 100% compared with existing methods. In the second part of the thesis, the Discrete Wavelet Transform (DWT) is used to transform the input image into a set of wavelet coefficients. Coefficients whose statistical energy falls below a certain threshold are then removed, reducing the number of wavelet coefficients by up to 98% of the total. From the remaining high-energy coefficients, only 40% of the statistical features are extracted using a modified variance metric. The ORL dataset was used in the experiments to test the proposed statistical method. Finally, a Cluster-K-Nearest-Neighbor (C-K-NN) classifier was proposed to classify the input face against the training face images. The results showed classification accuracies of 99.39% on the ORL dataset and 100% on the Face94 dataset. Moreover, new metrics were introduced to quantify the exactness of the classification, allowing some classification errors to be corrected. All of the above experiments were implemented in the MATLAB environment.
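
    To illustrate the wavelet step described above, the following sketch uses PyWavelets to decompose an image and keep only the highest-energy coefficients; the wavelet family, decomposition level, and threshold are illustrative assumptions, not the settings used in the thesis.

        import numpy as np
        import pywt

        def dwt_feature_vector(image, wavelet="haar", level=2, keep_fraction=0.02):
            """2-D DWT of an image, keeping only the largest-magnitude coefficients."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            flat, _ = pywt.coeffs_to_array(coeffs)       # all coefficients in one array
            flat = flat.ravel()
            threshold = np.quantile(np.abs(flat), 1.0 - keep_fraction)
            return flat[np.abs(flat) >= threshold]       # retained high-energy coefficients

        # Illustrative usage on a random 112x92 stand-in for a face image.
        face = np.random.random((112, 92))
        features = dwt_feature_vector(face)
        print(features.shape)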

    Face Recognition and Facial Expression Detection

    The "Face Recognition System" is a computer-based application for identifying an individual from a digital image (.pgm, .jpeg, ...). It works by comparing selected facial features from the image with a facial database, and it is based on the geometric features of a face, which is probably the most intuitive approach to face recognition. In one of the first automated face recognition systems, marker points (positions of the eyes, ears, nose, chin, ...) were used to build a feature vector (distances between the points, angles between them, ...). Recognition was performed using the SVD (Singular Value Decomposition) and HMM (Hidden Markov Model) algorithms. Such a technique is by its nature robust against changes in illumination. Automatic facial expression recognition and analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for more than two decades. Standardization and comparability have come some way; for instance, there exist a number of commonly used facial expression databases. However, the lack of a common evaluation protocol and of sufficient detail to reproduce the reported individual results makes it difficult to compare systems with one another. This in turn hinders the progress of the field. A periodic challenge in facial expression recognition and analysis would allow this comparison in a fair way. It would clarify how far the field has come and would allow us to identify new goals, challenges, and targets. In this paper we present the first challenge in automatic recognition of facial expressions, to be held during the IEEE conference on Face and Gesture Recognition 2011 in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. The paper outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.
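
    As a small illustration of the geometric, marker-point approach mentioned above, the sketch below builds a feature vector of pairwise distances and angles from a handful of assumed landmark coordinates; the specific landmarks and features are illustrative, not those of the original system.

        import numpy as np
        from itertools import combinations

        def geometric_features(landmarks):
            """Pairwise distances and angles between 2-D landmark points."""
            points = np.asarray(landmarks, dtype=float)
            features = []
            for p, q in combinations(points, 2):
                diff = q - p
                features.append(np.linalg.norm(diff))           # distance between the points
                features.append(np.arctan2(diff[1], diff[0]))   # angle of the connecting line
            return np.array(features)

        # Hypothetical landmarks: left eye, right eye, nose tip, chin (x, y in pixels).
        landmarks = [(30, 40), (62, 40), (46, 60), (46, 90)]
        print(geometric_features(landmarks))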

    Fusion of face and iris biometrics in security verification systems.

    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2016. Abstract available in PDF file.

    Data driven analysis of faces from images

    This thesis proposes three new data-driven approaches to detect, analyze, or modify faces in images. All of the presented contributions are guided by the use of prior knowledge: they derive information about facial appearance from pre-collected databases of images or 3D face models. First, we contribute an approach that extends a widely used monocular face detector with an additional classifier that evaluates disparity maps from a passive stereo camera. The algorithm runs in real time and significantly reduces the number of false positives compared to the monocular approach. Next, using a many-core implementation of the detector, we train view-dependent face detectors on tailored views which guarantee that the statistical variability is fully covered. These detectors are superior to the state of the art on a challenging dataset and can be trained in an automated procedure. Finally, we contribute a model describing the relation between facial appearance and makeup. The approach extracts makeup from before/after images of faces and allows faces in images to be modified. Applications such as machine-suggested makeup can improve perceived attractiveness, as shown in a perceptual study. In summary, the presented methods help improve the output of face detection algorithms, ease and automate their training procedures, and support the modification of faces in images. Moreover, their data-driven nature enables new and powerful applications arising from the use of prior knowledge and statistical analyses.
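
    The stereo-assisted detection idea can be pictured with off-the-shelf OpenCV components: a standard cascade detector proposes face boxes on the left image, and a block-matching disparity map is used to reject candidates without sufficient depth support. This is a rough sketch under assumed parameters and file names, not the detector or classifier developed in the thesis.

        import cv2
        import numpy as np

        # Off-the-shelf OpenCV stand-ins for the monocular detector and the stereo stage.
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

        def detect_faces_with_disparity(left_gray, right_gray, min_valid_fraction=0.5):
            """Detect faces on the left image and keep those with enough valid disparity."""
            disparity = stereo.compute(left_gray, right_gray)
            candidates = face_cascade.detectMultiScale(left_gray, scaleFactor=1.1,
                                                       minNeighbors=5)
            accepted = []
            for (x, y, w, h) in candidates:
                patch = disparity[y:y + h, x:x + w]
                valid = np.count_nonzero(patch > 0) / max(patch.size, 1)
                if valid >= min_valid_fraction:    # reject boxes without depth support
                    accepted.append((x, y, w, h))
            return accepted

        # Illustrative usage with a rectified stereo pair loaded as grayscale images.
        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
        if left is not None and right is not None:
            print(detect_faces_with_disparity(left, right))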