
    Accuracy of generic mesh conformation: the future of facial morphological analysis

    Three-dimensional (3D) analysis of the face is required for the assessment of changes following surgery, to monitor the progress of pathological conditions and for the evaluation of facial growth. Sophisticated methods have been applied for the evaluation of facial morphology, the most common being dense surface correspondence. The method depends on the application of a mathematical facial mask, known as the generic facial mesh, for the evaluation of the characteristics of facial morphology. This study evaluated the accuracy of the conformation of the generic mesh to the underlying facial morphology. The study was conducted on 10 non-patient volunteers. Thirty-four 2-mm-diameter self-adhesive, non-reflective markers were placed on each face. These were readily identifiable on the 3D facial image, which was captured by Di3D stereophotogrammetry, and helped to minimise digitisation errors during the conformation process. For each case, the face was captured six times: at rest and at the maximum movements of four facial expressions. The 3D facial image of each facial expression was analysed. Euclidean distances between the 19 corresponding landmarks on the conformed mesh and on the original 3D facial model provided a measure of the accuracy of the conformation process. For all facial expressions and all corresponding landmarks, these distances were between 0.7 and 1.7 mm. The absolute mean distances ranged from 0.73 to 1.74 mm. The mean absolute error of the conformation process was 1.13 ± 0.26 mm. The conformation of the generic facial mesh proved to be accurate enough for the analysis of the captured 3D facial images.
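    The accuracy measure described above (per-landmark Euclidean distances between the conformed mesh and the original 3D model, summarized as a mean absolute error) can be sketched as follows; the function names are illustrative, not from the study:

    ```python
    import numpy as np

    def landmark_errors(conformed, original):
        """Euclidean distance between each pair of corresponding 3D landmarks.

        conformed, original: (N, 3) arrays of landmark coordinates in mm.
        """
        conformed = np.asarray(conformed, dtype=float)
        original = np.asarray(original, dtype=float)
        return np.linalg.norm(conformed - original, axis=1)

    def mean_absolute_error(conformed, original):
        """Mean of the per-landmark distances; since Euclidean distances are
        non-negative, the mean absolute error is simply their mean."""
        return landmark_errors(conformed, original).mean()
    ```

    With 19 landmarks per capture, as in the study, the per-expression figures reported above would come from `landmark_errors`, and the overall 1.13 mm figure from pooling these distances into `mean_absolute_error`.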

    Simultaneous Facial Landmark Detection, Pose and Deformation Estimation under Facial Occlusion

    Facial landmark detection, head pose estimation, and facial deformation analysis are typical facial behavior analysis tasks in computer vision. The existing methods usually perform each task independently and sequentially, ignoring their interactions. To tackle this problem, we propose a unified framework for simultaneous facial landmark detection, head pose estimation, and facial deformation analysis, and the proposed model is robust to facial occlusion. Following a cascade procedure augmented with model-based head pose estimation, we iteratively update the facial landmark locations, facial occlusion, head pose and facial deformation until convergence. The experimental results on benchmark databases demonstrate the effectiveness of the proposed method for simultaneous facial landmark detection, head pose and facial deformation estimation, even when the images contain facial occlusion. Comment: International Conference on Computer Vision and Pattern Recognition, 201
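    The cascade the abstract describes (alternately refining landmarks, occlusion labels, pose, and deformation until the estimates stop changing) reduces to a fixed-point loop. A minimal, hypothetical sketch, with `update` standing in for one full cascade pass:

    ```python
    import numpy as np

    def iterate_until_convergence(update, state, tol=1e-6, max_iters=100):
        """Generic fixed-point iteration of the kind the cascade uses:
        repeatedly refine a state vector until one pass of `update`
        changes it by less than `tol`. `update` is a stand-in for a
        joint landmark/occlusion/pose/deformation refinement step."""
        for _ in range(max_iters):
            new_state = update(state)
            if np.linalg.norm(new_state - state) < tol:
                return new_state
            state = new_state
        return state
    ```

    The design point is that all four quantities live in one state and are refined together, rather than each task being solved once in a fixed order.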

    Perception of global facial geometry is modulated through experience

    Identification of personally familiar faces is highly efficient across various viewing conditions. While the presence of robust facial representations stored in memory is considered to aid this process, the mechanisms underlying invariant identification remain unclear. Two experiments tested the hypothesis that facial representations stored in memory are associated with differential perceptual processing of the overall facial geometry. Subjects who were personally familiar or unfamiliar with the presented identities discriminated between stimuli whose overall facial geometry had been manipulated to maintain or alter the original facial configuration (see Barton, Zhao & Keenan, 2003). The results demonstrate that familiarity gives rise to more efficient processing of global facial geometry, and are interpreted in terms of increased holistic processing of facial information that is maintained across viewing distances.

    A Survey of the Trends in Facial and Expression Recognition Databases and Methods

    Automated facial identification and facial expression recognition have been topics of active research over the past few decades. Facial and expression recognition find applications in human-computer interfaces, subject tracking, real-time security surveillance systems and social networking. Several holistic and geometric methods have been developed to identify faces and expressions using public and local facial image databases. In this work we present the evolution in facial image data sets and the methodologies for facial identification and recognition of expressions such as anger, sadness, happiness, disgust, fear and surprise. We observe that most of the earlier methods for facial and expression recognition aimed at improving the recognition rates for facial feature-based methods using static images. However, the recent methodologies have shifted focus towards robust implementation of facial/expression recognition from large image databases that vary with space (gathered from the internet) and time (video recordings). The evolution trends in databases and methodologies for facial and expression recognition can be useful for assessing the next-generation topics that may have applications in security systems or personal identification systems that involve "Quantitative face" assessments. Comment: 16 pages, 4 figures, 3 tables, International Journal of Computer Science and Engineering Survey, October, 201

    A comparison of facial expression properties in five hylobatid species

    Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species belonging to three different genera, using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, defined as the number of facial expressions per unit of time. Second, we compared their repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use for each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a phylogenetic generalized least squares (PGLS) regression analysis to correlate those properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. Am. J. Primatol. 76:618-628, 2014
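    The three expression properties defined above (rate, repertoire size, diversity) can be computed from a list of coded expression events. A sketch under stated assumptions: the abstract does not give the exact diversity formula, so the Shannon index over usage proportions is used here as one plausible "repertoire weighted by rate of use" measure.

    ```python
    import math
    from collections import Counter

    def expression_rate(events, observation_time):
        """Number of facial expressions per unit of observation time."""
        return len(events) / observation_time

    def repertoire_size(events):
        """Number of distinct expression types, independent of frequency."""
        return len(set(events))

    def shannon_diversity(events):
        """Illustrative diversity measure: Shannon index H = -sum(p_i ln p_i)
        over the usage proportions of each expression type. High when many
        types are used at similar rates, low when one type dominates."""
        counts = Counter(events)
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total) for c in counts.values())
    ```

    Under this operationalization, the siamang result above corresponds to more events per minute and a flatter usage distribution, without more distinct types.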

    Automatic facial expression tracking for 4D range scans

    This paper presents a fully automatic approach to spatio-temporal facial expression tracking for 4D range scans, without any manual intervention (such as specifying landmarks). The approach consists of three steps: rigid registration, facial model reconstruction, and facial expression tracking. A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registration between a template facial model and a range scan while accounting for the difference in scale. A deformable model, physically based on thin shells, is proposed to faithfully reconstruct the facial surface and texture from the range data. The reconstructed facial model is then used, via the deformable model, to track the facial expressions presented in a sequence of range scans.
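    The inner alignment step of a scaling ICP, once point correspondences are fixed, has a well-known closed-form solution for scale, rotation and translation (Umeyama, 1991). The sketch below shows that step only; a full SICP would alternate it with nearest-neighbour correspondence search, and it is not claimed to be the paper's exact algorithm:

    ```python
    import numpy as np

    def similarity_align(src, dst):
        """Closed-form scale s, rotation R, translation t minimizing
        ||s * R @ src_i + t - dst_i||^2 over corresponding 3D points.

        src, dst: (N, 3) arrays of corresponding points.
        Returns (s, R, t) with dst ~= s * src @ R.T + t.
        """
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                        # guard against reflections
        R = U @ S @ Vt
        var_src = (src_c ** 2).sum() / len(src)   # total variance of src
        s = np.trace(np.diag(D) @ S) / var_src    # optimal uniform scale
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```

    Estimating `s` jointly with `R` and `t` is what addresses the scale problem the abstract mentions: a template mesh and a captured range scan generally do not share the same metric scale.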

    A Sex Difference in Facial Contrast and its Exaggeration by Cosmetics

    This study demonstrates the existence of a sex difference in facial contrast. By measuring carefully controlled photographic images, female faces were shown to have greater luminance contrast between the eyes, lips, and the surrounding skin than did male faces. This sex difference in facial contrast was found to influence the perception of facial gender. An androgynous face can be made to appear female by increasing the facial contrast, or to appear male by decreasing the facial contrast. Application of cosmetics was found to consistently increase facial contrast. Female faces wearing cosmetics had greater facial contrast than the same faces not wearing cosmetics. Female facial beauty is known to be closely linked to sex differences, with femininity considered attractive. These results suggest that cosmetics may function in part by exaggerating a sexually dimorphic attribute - facial contrast - to make the face appear more feminine and hence attractive.
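    Feature-to-skin luminance contrast of the kind measured here is commonly quantified with a Michelson-style ratio. The abstract does not give the study's exact formula, so the definition below is purely illustrative:

    ```python
    def luminance_contrast(feature_luminance, skin_luminance):
        """Michelson-style contrast between a facial feature (eyes, lips)
        and the surrounding skin, positive when the feature is darker.
        Illustrative definition; the study's exact measure is not stated
        in the abstract."""
        return (skin_luminance - feature_luminance) / (skin_luminance + feature_luminance)
    ```

    On this measure, darkening the lips or eyes (as cosmetics do) raises the contrast value, which matches the direction of the effect reported above.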

    Automatic facial analysis for objective assessment of facial paralysis

    Facial paralysis is a condition causing decreased movement on one side of the face. A quantitative, objective and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents an approach based on the automatic analysis of patient video data. Facial feature localization and facial movement detection methods are discussed. An algorithm is presented to process the optical flow data to obtain the motion features in the relevant facial regions. Three classification methods are applied to provide quantitative evaluations of regional facial nerve function and the overall facial nerve function based on the House-Brackmann Scale. Experiments show the Radial Basis Function (RBF) Neural Network to have superior performance.
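    Extracting regional motion features from optical flow, as the algorithm above does, amounts to aggregating per-pixel flow magnitudes inside each facial region; comparing the two sides of the face then yields a paralysis-sensitive feature. A hypothetical sketch, not the paper's exact formulation:

    ```python
    import numpy as np

    def regional_motion_feature(flow, mask):
        """Mean optical-flow magnitude inside one facial region.

        flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
        mask: (H, W) boolean array selecting the region of interest.
        """
        magnitude = np.linalg.norm(flow, axis=2)
        return magnitude[mask].mean()

    def asymmetry_index(flow, left_mask, right_mask):
        """Simple left/right symmetry feature: 1.0 for perfectly symmetric
        movement, near 0 when one side barely moves (as in facial paralysis).
        Illustrative feature, not the study's definition."""
        left = regional_motion_feature(flow, left_mask)
        right = regional_motion_feature(flow, right_mask)
        lo, hi = sorted([left, right])
        return lo / hi if hi > 0 else 1.0
    ```

    Per-region features of this kind would then be fed to a classifier (such as the RBF network found best above) to predict House-Brackmann grades.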