
    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Face modeling has received much attention in the field of visual computing. There exist many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, where low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. This system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques. Comment: 12 pages, 16 figures, to appear in SIGGRAPH 2017.
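    The abstract describes a two-branch regression network that fuses image (CNN) features with hand-crafted shape features of the sketch and outputs coefficient subsets for a bilinear face model. The PyTorch sketch below is a minimal illustration of that idea only; the module name, layer sizes, shape-feature dimension (100) and coefficient counts (50 identity, 25 expression) are assumptions for the example, not the architecture reported in the paper.

```python
# Hypothetical sketch of a two-branch regression network (not the paper's exact design).
import torch
import torch.nn as nn

class SketchToBilinearCoeffs(nn.Module):
    def __init__(self, shape_feat_dim=100, n_identity=50, n_expression=25):
        super().__init__()
        # CNN branch: extracts appearance features from the rasterized sketch image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        fused_dim = 64 * 4 * 4 + shape_feat_dim
        # Two independent fully connected branches, one per coefficient subset
        # of the bilinear face representation (identity and expression).
        self.identity_head = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_identity))
        self.expression_head = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_expression))

    def forward(self, sketch_image, shape_features):
        # Fuse CNN features with hand-crafted shape features of the 2D strokes.
        fused = torch.cat([self.cnn(sketch_image), shape_features], dim=1)
        return self.identity_head(fused), self.expression_head(fused)
```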

    Digital manipulation of faces and its consequence upon identification and attractiveness

    This thesis investigated the perception of facial identity and attractiveness using digital manipulation of facial traits. A presentation-time paradigm was developed by which stimuli could be presented for a range of brief display periods. Using this paradigm, subjects recognised photo-realistic target faces caricatured in shape with greater accuracy than veridical images, consistent with previous findings that used reaction time as a measure. Subjects were further required to identify colour representations of famous faces which were either veridical, caricatured in colour space, or had enhanced colour saturation and intensity contrast (as contrast controls). Recognition accuracy was greater when viewing the colour-caricatured stimuli than either the veridical images or the contrast controls. The removal of colour to produce grey-scale images also decreased the accuracy of face recognition, indicating that colour information aids facial identification. Caricaturing of faces can therefore be extended to the colour domain and, as with shape caricaturing, enhancement of distinctive information can produce a recognition advantage for famous faces. Subjects were also asked to identify the best likeness for individuals using photo-realistic stimuli and an interactive paradigm in which shape caricature, colour caricature and contrast control were varied by the user in real time. The best likeness under shape manipulation was a slight anti-caricature, while for colour-caricature and contrast-control images a mildly exaggerated image was selected as the best likeness. Thus, although images caricatured substantially in colour or shape (+40%) induce superior recognition compared to veridical images, such substantial exaggerations are not seen as best likenesses under prolonged exposure. The gender typicality of white British and Japanese composite faces was manipulated, and subjects were presented with the images using both a static forced-choice paradigm and an interactive paradigm. Male and female face shapes with enhanced feminine features were consistently found to be more attractive than average. The preference for enhanced femininity in female faces was greater for subjects making within-cultural than cross-cultural judgements.
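    As a concrete illustration of the colour-caricaturing manipulation described above, the sketch below exaggerates each pixel's colour deviation from an average ("norm") face by a chosen percentage, so that k = 0.4 corresponds to the +40% caricatures mentioned in the abstract. It assumes pixel-aligned images of identical size; the thesis's actual stimulus-generation pipeline is not specified here, and the function name is hypothetical.

```python
# Minimal sketch of colour caricaturing against a norm face (illustrative only).
import numpy as np

def colour_caricature(face, norm_face, k=0.4):
    # face, norm_face: pixel-aligned uint8 RGB arrays of identical shape.
    face = face.astype(np.float32)
    norm_face = norm_face.astype(np.float32)
    # Push each pixel's colour further from the norm by a factor (1 + k);
    # k < 0 gives an anti-caricature, k = 0 returns the veridical image.
    exaggerated = norm_face + (1.0 + k) * (face - norm_face)
    return np.clip(exaggerated, 0, 255).astype(np.uint8)
```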

    Towards the Development of Training Tools for Face Recognition

    Distinctiveness plays an important role in the recognition of faces, i.e., a distinctive face is usually easier to remember than a typical face in a recognition task. This distinctiveness effect explains why caricatures are recognized faster and more accurately than unexaggerated (i.e., veridical) faces. Furthermore, using caricatures during training can facilitate recognition of a person’s face at a later time. The objective of this thesis is to determine the extent to which photorealistic computer-generated caricatures may be used in training tools to improve recognition of faces by humans. To pursue this objective, we developed a caricaturization procedure for three-dimensional (3D) face models, and characterized face recognition performance (by humans) through a series of perceptual studies. The first study focused on 3D shape information without texture. Namely, we tested whether exposure to caricatures during an initial familiarization phase would aid in the recognition of their veridical counterparts at a later time. We examined whether this effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These preliminary findings support the use of caricatures in new training methods to improve face recognition. In the second study, we incorporated texture into our 3D models, which allowed us to generate photorealistic renderings. In this study, we sought to determine the extent to which familiarization with caricaturized faces could also be used to reduce other-race effects (i.e., the phenomenon whereby faces from other races appear less distinct than faces from our own race). Using an old/new face recognition paradigm, Caucasian participants were first familiarized with a set of faces from multiple races, and then asked to recognize those faces among a set of confounders. Participants who were familiarized with and then asked to recognize veridical versions of the faces showed a significant other-race effect on Indian faces. In contrast, participants who were familiarized with caricaturized versions of the same faces, and then asked to recognize their veridical versions, showed no other-race effects on Indian faces. This result suggests that caricaturization may be used to help individuals focus their attention on features that are useful for recognition of other-race faces. The third and final experiment investigated the practical application of our earlier results. Since 3D facial scans are not generally available, we also sought to determine whether 3D reconstructions from 2D frontal images could be used for the same purpose. Using the same old/new face recognition paradigm, participants who were familiarized with reconstructed faces and then asked to recognize the ground truth versions of the faces showed a significant reduction in performance compared to the previous study. In addition, participants who were familiarized with caricatures of reconstructed versions, and then asked to recognize their corresponding ground truth versions, showed a larger reduction in performance. Our results suggest that, despite the high level of photographic realism achieved by current 3D facial reconstruction methods, additional research is needed in order to reduce reconstruction errors and capture the distinctive facial traits of an individual. These results are critical for the development of training tools based on computer-generated photorealistic caricatures from “mug shot” images.
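    The caricaturization procedure itself is not detailed in the abstract; the following minimal sketch shows the standard norm-based idea of exaggerating a 3D face's deviation from an average face mesh, under the assumption that both meshes share vertex-to-vertex correspondence. The function name and the exaggeration level are illustrative assumptions, not the thesis's procedure.

```python
# Illustrative sketch of 3D mesh caricaturization relative to an average face.
import numpy as np

def caricature_mesh(vertices, average_vertices, level=0.5):
    # vertices, average_vertices: (N, 3) arrays of corresponding 3D points.
    # level > 0 exaggerates distinctive geometry; level = 0 returns the veridical face.
    return average_vertices + (1.0 + level) * (vertices - average_vertices)
```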

    Recognizing Own- and Other-race Faces: Cognitive Mechanisms Underlying the Other-Race Effect

    Other-race faces are discriminated and recognized less accurately than own-race faces. The other-race effect (ORE) emerges during infancy and is robust across different participant populations and a variety of methodologies (Meissner & Brigham, 2001). Decades of research have been successful in characterizing the roots of the ORE; however, certain aspects regarding the nature of own- and other-race face representations remain unspecified. The present dissertation attempts to identify the commonalities and differences in the processing of own- versus other-race faces so as to develop an integrative understanding of the ORE in face recognition. In Study 1, I demonstrated that the ORE is attributable to an impaired ability to recognize other-race faces despite variability in appearance. In Study 2, I further examined whether this ability is influenced by familiarity. The ORE disappears for familiar faces, suggesting a fundamental difference between familiar and unfamiliar other-race face recognition. Study 3 was designed to directly test whether the ORE is attributable to a less refined representation of other-race faces in face space. Adults are more sensitive to deviations from normality in own- than other-race faces, and between-rater variability in attractiveness ratings of individual faces is higher for other- than own-race faces. In Study 4, I investigated whether the ORE is driven by a differential use of shape and texture cues. Despite an overall ORE, the transition from idiosyncratic shape to texture cues was comparable for own- and other-race faces, suggesting that differential utilization of shape and texture cues does not contribute to the ORE. In Study 5, applying a novel continuous-response paradigm, I investigated how the representations of own- and other-race faces are stored in visual working memory (VWM). Following ample encoding time, the ORE is attributable to differences in the probability of a face being maintained in VWM. Reducing encoding time caused a loss of precision of VWM for other- but not own-race faces. Collectively, the results of this dissertation help elucidate the nature of representations of own- and other-race faces and clarify the role of perceptual experience in shaping our ability to recognize own- and other-race faces.

    Applying psychology to forensic facial identification: perception and identification of facial composite images and facial image comparison

    Eyewitness recognition is acknowledged to be prone to error, but the difficulty of discriminating unfamiliar faces is less well understood. This thesis examined the effects of face perception on the identification of facial composites and on unfamiliar face image comparison. Facial composites depict face memories by reconstructing features and configurations to form a likeness. They are generally reconstructed from a memory of an unfamiliar face and will be unavoidably flawed. Identification requires perception of any accurate features by someone who is familiar with the suspect, and performance is typically poor. In typical face perception, face images are processed efficiently as complete units of information. Chapter 2 explored the possibility that holistic processing of inaccurate composite configurations impairs identification of individual features. Composites were split below the eyes and misaligned to impair holistic analysis (cf. Young, Hellawell, & Hay, 1987); identification was significantly enhanced, indicating that perceptual expertise with inaccurate configurations exerts powerful effects that can be reduced by enabling featural analysis. Facial composite recognition is difficult, which means that perception and judgement will be influenced by an affective recognition bias: smiles enhance perceived familiarity, while negative expressions produce the opposite effect. In applied use, facial composites are generally produced from unpleasant memories and will convey negative expression; affective bias will, therefore, be important for facial composite recognition. Chapter 3 explored the effect of positive expression on composite identification: composite expressions were enhanced, and positive affect significantly increased identification. Affective quality rather than expression strength mediated the effect, with subtle manipulations being very effective. Facial image comparison (FIC) involves discrimination of two or more face images. Accuracy in unfamiliar face matching is typically in the region of 70% and, as discrimination is difficult, may be influenced by affective bias. Chapter 4 explored the smiling-face effect in unfamiliar face matching. When multiple items were compared, positive affect did not enhance performance and false positive identifications increased. With a delayed matching procedure, identification was not enhanced but, in contrast to face recognition and simultaneous matching, positive affect improved rejection of foil images. Distinctive faces are easier to discriminate. Chapter 5 evaluated a systematic caricature transformation as a means to increase distinctiveness and enhance discrimination of unfamiliar faces. Identification of matching face images did not improve, but successful rejection of non-matching items was significantly enhanced. Chapter 6 used face matching to explore the basis of the own-race bias in face perception. Other-race faces were manipulated to show own-race facial variation, and own-race faces to show African American facial variation. When multiple face images were matched simultaneously, the transformation impaired performance for all of the images; but when images were individually matched, the transformation improved perception of other-race faces while discrimination of own-race faces declined. Transformation of Japanese faces to show own-race dimensions produced the same pattern of effects but failed to reach significance. The results provide support for both perceptual expertise and featural processing theories of own-race bias. Results are interpreted with reference to face perception theories; implications for application and future study are discussed.

    Shape classification: towards a mathematical description of the face

    Recent advances in biostereometric techniques have led to the quick and easy acquisition of 3D data for facial and other biological surfaces. This has led facial surgeons to express dissatisfaction with landmark-based methods for analysing the shape of the face, which use only a small part of the data available, and to seek a method for analysing the face which maximizes the use of this extensive data set. Scientists working in the field of computer vision have developed a variety of methods for the analysis and description of 2D and 3D shape. These methods are reviewed and an approach, based on differential geometry, is selected for the description of facial shape. For each data point, the Gaussian and mean curvatures of the surface are calculated. The performance of three algorithms for computing these curvatures is evaluated for mathematically generated standard 3D objects and for 3D data obtained from an optical surface scanner. Using the signs of these curvatures, the face is classified into eight 'fundamental surface types', each of which has an intuitive perceptual meaning. The robustness of the resulting surface-type description to errors in the data is determined, together with its repeatability. Three methods for comparing two surface-type descriptions are presented and illustrated for average male and average female faces. Thus a quantitative description of facial change, or of differences between individuals' faces, is achieved. The possible application of artificial intelligence techniques to automate this comparison is discussed. The sensitivity of the description to global and local changes to the data, made by mathematical functions, is investigated. Examples are given of the application of this method for describing facial changes made by facial reconstructive surgery, and implications for defining a basis for facial aesthetics using shape are discussed. It is also applied to investigate the role played by the shape of the surface in facial recognition.
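    To make the sign-based classification concrete, the sketch below labels surface points with the eight fundamental surface types from the signs of the mean curvature H and the Gaussian curvature K (the usual HK classification). The zero threshold eps, the sign convention for H (which depends on the chosen normal orientation), and the function names are assumptions for illustration; the thesis's curvature-estimation algorithms are not reproduced here.

```python
# Sketch of HK (mean/Gaussian curvature sign) classification into 8 surface types.
import numpy as np

SURFACE_TYPES = {
    (-1, +1): "peak",  (-1, 0): "ridge",  (-1, -1): "saddle ridge",
    ( 0,  0): "flat",  ( 0, -1): "minimal surface",
    (+1, +1): "pit",   (+1, 0): "valley", (+1, -1): "saddle valley",
}

def classify_surface(mean_curv, gauss_curv, eps=1e-4):
    # Treat curvature values within eps of zero as exactly zero before taking signs.
    h = np.where(np.abs(np.asarray(mean_curv)) < eps, 0, np.sign(mean_curv)).astype(int)
    k = np.where(np.abs(np.asarray(gauss_curv)) < eps, 0, np.sign(gauss_curv)).astype(int)
    # The combination H = 0, K > 0 cannot occur because K <= H^2.
    return [SURFACE_TYPES.get((hi, ki), "undefined") for hi, ki in zip(h, k)]
```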

    A statistically rigorous approach to the aging of the human face

    The ability to accurately age an image of the human face in an automatic and rigorous fashion has widespread potential applications. This thesis is concerned with the development and testing of a new approach to computerised age progression based on a statistical learning procedure. The thesis begins with an overview of existing methodologies for age progression and outlines the need for improved procedures. After a review of the underpinning mathematical techniques, the theoretical basis of the new age-progression methodology is then presented. In this new approach, age progression is achieved through the calculation of optimised trajectories within a model space constructed from a principal component analysis of the shape and texture of a training sample of images. The statistical framework proposed extends naturally to include both generic and person-specific influences on the changes in facial appearance as aging progresses. Specific physiological developmental periods, facial appearance at a previous age and the tendency to resemble close relatives are all incorporated into the model. The methodology is then computationally implemented and tested. Quantitative and perceptual tests both confirm the essential validity and accuracy of the techniques. This new methodology demonstrates that near photographic-quality, age-progressed images may be obtained based on rigorous scientific principles and considerably more quickly than is currently possible using forensic artistry. It is concluded that the algorithms may, in the future, be used to augment or even replace the existing artistic methodology.
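    As an illustration of the model-space idea, the sketch below builds a PCA space from flattened shape-and-texture vectors and moves a face along a single linear "per-year" direction fitted by least squares. This generic linear trajectory, the sklearn-based implementation and the function names are assumptions for the example; the thesis's optimised, person-specific trajectories and its treatment of developmental periods are not reproduced.

```python
# Minimal sketch of age progression in a PCA model space (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

def fit_model(face_vectors, ages, n_components=20):
    # face_vectors: (n_samples, D) flattened shape+texture vectors; ages: (n_samples,).
    pca = PCA(n_components=n_components).fit(face_vectors)
    coords = pca.transform(face_vectors)
    # Least-squares regression of model-space coordinates on age gives a per-year direction.
    ages = np.asarray(ages, dtype=float)
    A = np.column_stack([ages, np.ones_like(ages)])
    direction, _intercept = np.linalg.lstsq(A, coords, rcond=None)[0]
    return pca, direction

def age_progress(pca, direction, face_vector, years):
    # Shift the face's coordinates along the aging direction and reconstruct.
    coords = pca.transform(face_vector.reshape(1, -1))
    return pca.inverse_transform(coords + years * direction)
```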