
    Mean value coordinates–based caricature and expression synthesis

    We present a novel method for caricature synthesis based on mean value coordinates (MVC). Our method can be applied to any single frontal face image, learning from a specified caricature face pair for frontal and 3D caricature synthesis. The technique requires only one or a small number of exemplar pairs and a training set of natural frontal face images, and the system can transfer the style of an exemplar pair across individuals. Further exaggeration can be achieved in a controllable way. Our method also applies to facial expression transfer, interpolation, and exaggeration, which are applications of expression editing. Additionally, we have extended the approach to 3D caricature synthesis based on the 3D version of MVC. Experiments demonstrate that the transferred expressions are credible and that the resulting caricatures can be characterized and recognized.
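    The deformation machinery here rests on mean value coordinates, which have a simple closed form. The sketch below (illustrative names, assuming a convex 2D control polygon; not code from the paper) shows how MVC weights express a point as a combination of polygon vertices, and how the same weights transfer the point onto an exaggerated polygon, which is the basic mechanism behind MVC-based shape synthesis.

        import numpy as np

        def mean_value_coordinates(p, verts):
            """2D mean value coordinates of point p w.r.t. a convex polygon.

            p     : (2,) query point strictly inside the polygon
            verts : (n, 2) polygon vertices listed in order
            Returns (n,) weights that sum to 1 and reproduce p as `weights @ verts`.
            """
            d = verts - p                          # vectors from p to each vertex
            r = np.linalg.norm(d, axis=1)          # distances ||v_i - p||
            n = len(verts)
            ang = np.empty(n)                      # angle at p spanned by edge (v_i, v_{i+1})
            for i in range(n):
                j = (i + 1) % n
                cos_a = np.dot(d[i], d[j]) / (r[i] * r[j])
                ang[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
            w = np.array([(np.tan(ang[i - 1] / 2) + np.tan(ang[i] / 2)) / r[i]
                          for i in range(n)])
            return w / w.sum()

        # The same weights applied to an exaggerated (caricatured) polygon move the
        # interior point consistently with the exaggeration.
        src = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
        dst = np.array([[0, 0], [3, 0], [2, 2], [0, 2]], float)   # exaggerated control shape
        lam = mean_value_coordinates(np.array([1.0, 1.0]), src)
        p_deformed = lam @ dst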

    A Leopard Cannot Change Its Spots: Improving Face Recognition Using 3D-based Caricatures

    Caricatures refer to a representation of a person in which the distinctive features are deliberately exaggerated, with several studies showing that humans perform better at recognizing people from caricatures than from original images. Inspired by this observation, this paper introduces the first fully automated caricature-based face recognition approach capable of working with data acquired in the wild. Our approach leverages the 3D face structure from a single 2D image and compares it to a reference model to obtain a compact representation of face feature deviations. This descriptor is subsequently deformed using a 'measure locally, weight globally' strategy to resemble the caricature drawing process. The deformed deviations are incorporated into the 3D model using the Laplacian mesh deformation algorithm, and the 2D face caricature image is obtained by projecting the deformed model in the original camera view. To demonstrate the advantages of caricature-based face recognition, we train the VGG-Face network from scratch using either original face images (baseline) or caricatured images, and use these models for extracting face descriptors from the LFW, IJB-A and MegaFace datasets. The experiments show an increase in recognition accuracy when using caricatures rather than original images. Moreover, our approach achieves competitive results with state-of-the-art face recognition methods, even without explicitly tuning the network for any of the evaluation sets.
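    As a rough illustration of the deviation-amplification idea described above, one could scale per-vertex deviations from the reference model before re-integrating them into the mesh; the uniform scaling below stands in for the paper's 'measure locally, weight globally' weighting and its Laplacian mesh deformation step, and all names are illustrative.

        import numpy as np

        def exaggerate_deviations(face_verts, ref_verts, alpha=1.5):
            """Amplify how a face deviates from a reference (average) face model.

            face_verts, ref_verts : (n, 3) corresponding mesh vertex positions
            alpha                 : exaggeration factor; > 1 amplifies distinctive traits
            """
            deviation = face_verts - ref_verts      # compact descriptor of face-specific traits
            return ref_verts + alpha * deviation    # caricatured vertex positions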

    MW-GAN: Multi-warping GAN for caricature generation with multi-style geometric exaggeration

    Given an input face photo, the goal of caricature generation is to produce stylized, exaggerated caricatures that share the same identity as the photo. This requires simultaneous style transfer and shape exaggeration with rich diversity while preserving the identity of the input. To address this challenging problem, we propose a novel framework called Multi-Warping GAN (MW-GAN), comprising a style network and a geometric network designed to perform style transfer and geometric exaggeration, respectively. We bridge the gap between the style/landmark space and the corresponding latent code spaces through a dual-way design, so as to generate caricatures with arbitrary styles and geometric exaggeration, which can be specified either by randomly sampling a latent code or from a given caricature sample. In addition, we apply an identity-preserving loss to both the image space and the landmark space, leading to a marked improvement in the quality of the generated caricatures. Experiments show that caricatures generated by MW-GAN have better quality than those of existing methods.
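    An identity-preserving loss applied in both the image space and the landmark space could, in sketch form, look like the following; the encoder callables and the cosine formulation are assumptions made for illustration rather than MW-GAN's published definition.

        import torch.nn.functional as F

        def identity_preserving_loss(embed_img, embed_lmk,
                                     photo, caricature,
                                     photo_landmarks, caricature_landmarks):
            """Penalize identity drift between an input photo and its generated caricature.

            embed_img / embed_lmk : identity encoders over images / landmark vectors
                                    (hypothetical stand-ins, not MW-GAN's networks)
            """
            # image-space term: the caricature should keep the photo's identity embedding
            img_term = 1.0 - F.cosine_similarity(embed_img(photo),
                                                 embed_img(caricature), dim=-1).mean()
            # landmark-space term: exaggerated landmarks should still encode the same person
            lmk_term = 1.0 - F.cosine_similarity(embed_lmk(photo_landmarks),
                                                 embed_lmk(caricature_landmarks), dim=-1).mean()
            return img_term + lmk_term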

    Towards the Development of Training Tools for Face Recognition

    Distinctiveness plays an important role in the recognition of faces, i.e., a distinctive face is usually easier to remember than a typical face in a recognition task. This distinctiveness effect explains why caricatures are recognized faster and more accurately than unexaggerated (i.e., veridical) faces. Furthermore, using caricatures during training can facilitate recognition of a person’s face at a later time. The objective of this thesis is to determine the extent to which photorealistic computer-generated caricatures may be used in training tools to improve recognition of faces by humans. To pursue this objective, we developed a caricaturization procedure for three-dimensional (3D) face models, and characterized face recognition performance (by humans) through a series of perceptual studies. The first study focused on 3D shape information without texture. Namely, we tested whether exposure to caricatures during an initial familiarization phase would aid in the recognition of their veridical counterparts at a later time. We examined whether this effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These preliminary findings support the use of caricatures in new training methods to improve face recognition. In the second study, we incorporated texture into our 3D models, which allowed us to generate photorealistic renderings. In this study, we sought to determine the extent to which familiarization with caricaturized faces could also be used to reduce other-race effects (i.e., the phenomenon whereby faces from other races appear less distinct than faces from our own race). Using an old/new face recognition paradigm, Caucasian participants were first familiarized with a set of faces from multiple races, and then asked to recognize those faces among a set of confounders. Participants who were familiarized with and then asked to recognize veridical versions of the faces showed a significant other-race effect on Indian faces. In contrast, participants who were familiarized with caricaturized versions of the same faces, and then asked to recognize their veridical versions, showed no other-race effects on Indian faces. This result suggests that caricaturization may be used to help individuals focus their attention on features that are useful for recognition of other-race faces. The third and final experiment investigated the practical application of our earlier results. Since 3D facial scans are not generally available, here we also sought to determine whether 3D reconstructions from 2D frontal images could be used for the same purpose. Using the same old/new face recognition paradigm, participants who were familiarized with reconstructed faces and then asked to recognize the ground truth versions of the faces showed a significant reduction in performance compared to the previous study. In addition, participants who were familiarized with caricatures of reconstructed versions, and then asked to recognize their corresponding ground truth versions, showed a larger reduction in performance.
Our results suggest that, despite the high level of photographic realism achieved by current 3D facial reconstruction methods, additional research is needed to reduce reconstruction errors and capture the distinctive facial traits of an individual. These results are critical for the development of training tools based on computer-generated photorealistic caricatures from “mug shot” images.