
    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved to be particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results using principal component analysis (PCA) when expressions are added to a data set. In order to achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into separate subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors, each representing one variation mode. This framework addresses the shortcomings of PCA in less constrained environments while still preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that describes facial expression most effectively. We found that the landmarks that best distinguish different facial expressions lie in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We also investigated achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise them. The synthesised facial expressions are visually more realistic than those generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes; the recognition results showed a slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
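    The mode-wise decomposition this abstract describes can be pictured concretely. Below is a minimal sketch assuming a hypothetical data tensor organised as subjects × expressions × vertex coordinates; the shapes, variable names, and the use of plain NumPy are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of mode-wise SVD on a 3D face data tensor (NumPy only).
# Tensor shape and names are illustrative, not the paper's actual setup.
import numpy as np

def unfold(tensor, mode):
    """Unfold (matricize) a tensor along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Hypothetical data tensor: subjects x expressions x vertex coordinates.
n_subjects, n_expressions, n_features = 30, 7, 3000
D = np.random.randn(n_subjects, n_expressions, n_features)

# The SVD of each mode unfolding yields that mode's variation subspace,
# e.g. a subject subspace (mode 0) and an expression subspace (mode 1).
U_subject, _, _ = np.linalg.svd(unfold(D, 0), full_matrices=False)
U_expression, _, _ = np.linalg.svd(unfold(D, 1), full_matrices=False)

# Projecting a new face onto the subject subspace gives identity
# coefficients that are (ideally) less sensitive to expression.
```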

    Conversion of 2D Image into 3D and Face Recognition Based Attendance System

    ABSTRACT: The world is becoming 3D, and 3D imaging has several advantages over 2D. Presently there are attendance systems based on fingerprint recognition, but they have some limitations: the sensor surface must be dust-free, and the finger must make proper contact with the sensor. There are also 2D face recognition based attendance systems. To overcome the problems of 2D, i.e. lighting, expression and pose variations, 3D face recognition is used. We propose a system that takes student attendance by converting 2D images into 3D and then performing face detection and recognition. We use two 2D images taken from two different cameras, which are converted into a 3D image using the binocular disparity technique. This image is then used for face recognition. Three-dimensional face recognition (3D face recognition) is a modality of facial recognition methods in which the three-dimensional geometry of the human face is used. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of facial recognition.
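    The binocular disparity step mentioned above can be sketched with standard stereo block matching. The following illustration uses OpenCV; the file names and calibration values are placeholder assumptions, not the system's actual parameters.

```python
# Sketch of depth-from-stereo via binocular disparity using OpenCV.
# File names and camera parameters below are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity: Z = f * B / d,
# with focal length f (pixels) and baseline B (metres).
f, B = 700.0, 0.06  # assumed calibration values
depth = np.where(disparity > 0, f * B / np.maximum(disparity, 1e-6), 0.0)
```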

    Robust Face Recognition via Multimodal Deep Face Representation

    © 2015 IEEE. Face images appearing in multimedia applications, e.g., social networks and digital entertainment, usually exhibit dramatic pose, illumination, and expression variations, resulting in considerable performance degradation for traditional face recognition algorithms. This paper proposes a comprehensive deep learning framework to jointly learn face representation using multimodal information. The proposed deep learning structure is composed of a set of elaborately designed convolutional neural networks (CNNs) and a three-layer stacked auto-encoder (SAE). The set of CNNs extracts complementary facial features from multimodal data. The extracted features are then concatenated to form a high-dimensional feature vector, whose dimension is compressed by the SAE. All of the CNNs are trained using a subset of 9,000 subjects from the publicly available CASIA-WebFace database, which ensures the reproducibility of this work. Using the proposed single CNN architecture and limited training data, a 98.43% verification rate is achieved on the LFW database. Benefiting from the complementary information contained in multimodal data, our small ensemble system achieves a recognition rate higher than 99.0% on LFW using a publicly available training set.
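    The fusion step, concatenating per-CNN features and compressing the resulting vector with a three-layer stacked auto-encoder, might look roughly like the sketch below. The dimensions, activations, and PyTorch framing are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of multimodal feature fusion: concatenate per-CNN features,
# then compress with a small stacked auto-encoder. Sizes are invented.
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    def __init__(self, in_dim=8192, hidden=2048, code=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Sigmoid(),
            nn.Linear(hidden, 1024), nn.Sigmoid(),
            nn.Linear(1024, code),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code, 1024), nn.Sigmoid(),
            nn.Linear(1024, hidden), nn.Sigmoid(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)          # compact face descriptor
        return self.decoder(code), code

# Stand-ins for feature vectors from four complementary CNNs:
cnn_feats = [torch.randn(1, 2048) for _ in range(4)]
fused = torch.cat(cnn_feats, dim=1)     # (1, 8192) high-dimensional vector
recon, code = StackedAE()(fused)        # `code` is the compressed representation
```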

    Facial Expressions Reconstruction of 3D Faces based on Real Human Data

    This paper presents an approach to reconstruct facial expressions using real data sets of people acquired by three-dimensional (3D) scanners. The acquired raw human face surfaces are pre-processed, and a statistical shape model of the human face is built using multivariate statistical approaches. Our idea in applying a tensor model to the multivariate statistical method is to use all the face features found in the training set, together with a variety of facial variations, by separating them into a number of classes. Point-to-point correspondences between the face surfaces are required for the reconstruction processes. The advantage of the tensor-based multivariate statistical method is that it is practical to generate a variety of face shapes applied in different degrees, giving a continuous and natural transition between facial expressions. Our experiments focused on dense correspondence to compute the deformation of facial expressions. We have also used selected landmark points placed on the face surfaces to compute this deformation. The selected landmark points are based on the Facial Action Coding System (FACS) framework, and the movements are analysed according to the motion of the facial features. Besides altering human facial expressions, the presented approach could also be used to neutralise facial expressions to aid the performance of face recognition.
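    A simple way to picture a statistical shape model over corresponded surfaces is a PCA-style decomposition of stacked shape vectors. The sketch below is a generic illustration under that assumption (the paper's tensor model is richer); all sizes and names are invented.

```python
# Sketch of a statistical shape model given point-to-point correspondence:
# stack corresponded surfaces, extract principal deformation modes, and
# synthesise new shapes by varying the mode coefficients.
import numpy as np

n_faces, n_points = 50, 1000
# Stand-in training set: each row is one face's corresponded (x, y, z) points.
faces = np.random.randn(n_faces, n_points * 3)

mean_shape = faces.mean(axis=0)
centered = faces - mean_shape

# Principal modes of shape variation via SVD of the centred data.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
modes = Vt                      # each row is one deformation mode

# A new expression is a weighted walk along a mode; varying the weight
# gives the continuous, natural transition the abstract describes.
alpha = 0.5
new_face = mean_shape + alpha * S[0] * modes[0]
```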

    Automated Recognition of Facial Affect Using Deep Neural Networks

    Automated Facial Expression Recognition (FER) has been a topic of study in the fields of computer vision and machine learning for decades. In spite of efforts made to improve the accuracy of FER systems, existing methods are still not generalizable and accurate enough for use in real-world applications. Many of the traditional methods use hand-crafted (a.k.a. engineered) features to represent facial images. However, these methods often require rigorous hyper-parameter tuning to achieve favorable results. Recently, Deep Neural Networks (DNNs) have been shown to outperform traditional methods in visual object recognition. DNNs require huge amounts of data as well as powerful computing units for training generalizable and robust classification models. The problem of automated FER, especially with images captured in the wild, is even more challenging since there are subtle differences between various facial emotions. This dissertation presents my recent efforts in 1) creating a large annotated database of facial expressions, 2) developing novel DNN-based methods for automated recognition of facial expressions described by the two main models of affect, the categorical model and the dimensional model, and 3) developing a robust face detection and emotion recognition system based on our state-of-the-art DNN and trained on our proposed database of facial expressions. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (a.k.a. the categorical model). There are very limited annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal). To address these needs, we developed the largest database of human affect, called AffectNet. For AffectNet, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild. AffectNet contains more than 1,000,000 facial images gathered from the Internet by querying three major search engines using 1,250 emotion-related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild, enabling research on automated facial expression recognition in two different emotion models. This dissertation also presents three major novel DNN-based methods for automated facial affect estimation: 1) the 3D Inception-ResNet (3DIR), 2) BReGNet, and 3) BReG-NeXt architectures. These methods modify the residual unit proposed in the original ResNets with different operations. Comprehensive experiments are conducted to evaluate the performance and efficiency of each of the proposed methods using AffectNet and a few other facial expression databases. Our final proposed method, BReG-NeXt, achieves state-of-the-art results in predicting both dimensional and categorical models of affect with significantly fewer training parameters and fewer FLOPs. Additionally, a robust face detection network is developed based on the BReG-NeXt architecture, which leverages AffectNet’s diverse training data and BReG-NeXt’s efficient feature extraction powers.
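    The abstract says these architectures modify the ResNet residual unit with different operations but does not specify them, so the sketch below only illustrates the general idea of replacing the plain additive shortcut with a modified combination. It is not the actual 3DIR, BReGNet, or BReG-NeXt unit; the nonlinearity on the residual branch here is an invented stand-in.

```python
# Sketch of a residual unit with a modified shortcut combination, in the
# spirit of "modifying the residual unit with different operations".
import torch
import torch.nn as nn

class ModifiedResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        # A standard ResNet block computes `x + out`; methods in this
        # family replace the plain addition with a different combination
        # (tanh on the residual branch is a placeholder example here).
        return self.relu(x + torch.tanh(out))

x = torch.randn(1, 64, 56, 56)
y = ModifiedResidualBlock(64)(x)   # same spatial and channel dimensions
```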

    Facial Expression Recognition


    Facial Asymmetry Analysis Based on 3-D Dynamic Scans

    Facial dysfunction is a fundamental symptom that often relates to many neurological illnesses, such as stroke, Bell’s palsy, and Parkinson’s disease. Current methods for detecting and assessing facial dysfunction rely mainly on trained practitioners, which has significant limitations since such assessments are often subjective. This paper presents a computer-based methodology for facial asymmetry analysis which aims to automatically detect facial dysfunction. The method is based on dynamic 3-D scans of human faces. Preliminary evaluation results on facial sequences from the Hi4D-ADSIP database suggest that the proposed method can assist in the quantification and diagnosis of facial dysfunction in neurological patients.
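    One common way to quantify the kind of facial asymmetry discussed here is to mirror an aligned scan across the mid-sagittal plane and measure the residual distance between the original and mirrored surfaces. The sketch below illustrates that generic measure, not necessarily the paper's method; the point cloud and its alignment to the x = 0 symmetry plane are assumed.

```python
# Sketch of a simple asymmetry measure on an aligned 3D face scan:
# mirror the point cloud across the mid-sagittal (x = 0) plane and
# measure nearest-neighbour distances to the mirrored surface.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.randn(5000, 3)             # stand-in for an aligned scan
mirrored = points * np.array([-1.0, 1.0, 1.0])  # reflect across x = 0

# A perfectly symmetric face gives near-zero distances everywhere.
dists, _ = cKDTree(mirrored).query(points)
asymmetry_score = float(dists.mean())

# Tracking this score across the frames of a dynamic (4D) scan yields a
# time series that can flag asymmetric facial motion.
```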