
    Mean value coordinates–based caricature and expression synthesis

    We present a novel method for caricature synthesis based on mean value coordinates (MVC). Given a specified caricature face pair to learn from, our method can be applied to any single frontal face image for both frontal and 3D caricature synthesis. The technique requires only one or a small number of exemplar pairs together with a training set of natural frontal face images, and it can transfer the style of the exemplar pair across individuals. Further exaggeration can be performed in a controllable way. We also apply our method to facial expression transfer, interpolation, and exaggeration, all applications of expression editing. Additionally, we extend our approach to 3D caricature synthesis based on the 3D version of MVC. Experiments demonstrate that the transferred expressions are credible and that the resulting caricatures can be characterized and recognized.
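    The abstract names mean value coordinates as the underlying machinery but does not spell them out. For orientation, the sketch below computes standard 2D mean value coordinates (Floater's construction) of a point with respect to a polygon; it illustrates only the interpolation scheme the method builds on, not the authors' caricature pipeline, and the function and variable names are placeholders.

```python
import numpy as np

def mean_value_coordinates(point, polygon):
    """2D mean value coordinates of `point` w.r.t. a closed polygon.

    polygon: (n, 2) array of vertices in order.
    Returns weights w with w.sum() == 1 and, for points strictly inside,
    (w[:, None] * polygon).sum(0) == point (linear precision).
    """
    p = np.asarray(polygon, dtype=float)
    v = np.asarray(point, dtype=float)
    d = p - v                               # vectors from the point to each vertex
    r = np.linalg.norm(d, axis=1)           # distances to each vertex
    nxt = np.roll(d, -1, axis=0)            # vector to the next vertex
    cross = d[:, 0] * nxt[:, 1] - d[:, 1] * nxt[:, 0]
    dot = (d * nxt).sum(axis=1)
    alpha = np.arctan2(cross, dot)          # signed angle subtended by edge (i, i+1)
    tan_half = np.tan(alpha / 2.0)
    # Floater's weight: (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / r_i
    w = (np.roll(tan_half, 1) + tan_half) / r
    return w / w.sum()

# Example: the centroid of a square gets uniform coordinates
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(mean_value_coordinates([0.5, 0.5], square))   # ~[0.25, 0.25, 0.25, 0.25]
```

    Because the weights reproduce the point exactly from the polygon vertices, deforming the vertices (e.g. exaggerating facial landmarks) and re-evaluating the weighted sum carries interior points along with the deformation, which is the basic mechanism an MVC-based warp relies on.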

    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Face modeling has received much attention in the field of visual computing. In many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses CNN features and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques. Comment: 12 pages, 16 figures, to appear in SIGGRAPH 2017.
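    The abstract describes the network only at this high level (fused CNN and shape features, two independent fully connected branches, coefficients of a bilinear face model). The PyTorch-style sketch below is a toy rendering of that two-branch idea under assumed dimensions; every layer size, feature dimension, and name (TwoBranchSketchRegressor, bilinear_face, n_id, n_exp) is a placeholder for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoBranchSketchRegressor(nn.Module):
    """Toy sketch: fuse CNN features of a sketch image with separate shape
    features, then regress identity and expression coefficient subsets of a
    bilinear face model through two independent FC branches."""
    def __init__(self, n_id=50, n_exp=16, shape_feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny stand-in for a real CNN
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        fused = 32 + shape_feat_dim                     # concatenated feature size
        self.id_branch = nn.Sequential(nn.Linear(fused, 256), nn.ReLU(),
                                       nn.Linear(256, n_id))
        self.exp_branch = nn.Sequential(nn.Linear(fused, 256), nn.ReLU(),
                                        nn.Linear(256, n_exp))

    def forward(self, sketch_img, shape_feats):
        x = torch.cat([self.cnn(sketch_img), shape_feats], dim=1)
        return self.id_branch(x), self.exp_branch(x)    # independent coefficient subsets

def bilinear_face(C, w_id, w_exp):
    """Reconstruct a flattened vertex vector from a bilinear tensor
    C of shape (3*V, n_id, n_exp) and the two coefficient vectors."""
    return torch.einsum('vie,i,e->v', C, w_id, w_exp)

# Illustrative forward pass with random inputs
model = TwoBranchSketchRegressor()
w_id, w_exp = model(torch.randn(1, 1, 64, 64), torch.randn(1, 128))
verts = bilinear_face(torch.randn(30, 50, 16), w_id[0], w_exp[0])
```

    Keeping the two branches independent mirrors the bilinear factorization itself, in which identity and expression coefficients play separate roles in reconstructing the face.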

    Developing a Sufficient Knowledge Base for Faces: Implicit Recognition Memory for Distinctive versus Typical Female Faces

    Research on adults' face recognition abilities provides evidence for a distinctiveness effect, such that distinctive faces are remembered better and more easily than typical faces. Research on this effect in the developmental literature is limited. In the current study, two experiments tested recognition memory for evidence of the distinctiveness effect. Study 1 tested infants (9- and 10-month-olds) using a novelty preference paradigm. Infants were tested for immediate and delayed memory. Results indicated memory for only the most distinctive faces. Study 2 tested preschool children (3- and 4-year-olds) using an interactive story. Children were tested with an implicit (i.e., surprise) memory test. Results indicated a memory advantage for distinctive faces in three-year-old girls and in four-year-old boys and girls. Contrary to traditional theories of changes in children's processing strategies, experience also appears to be a critical factor in the development of face recognition abilities.

    Personally familiar faces: Higher precision of memory for idiosyncratic than for categorical information

    Many studies have demonstrated that we can identify a familiar face in an image much better than an unfamiliar one, especially when various degradations or changes (e.g., image distortions, blurring, or new illuminations) have been applied, but few have asked how different types of facial information from familiar faces are stored in memory. Here we investigated how well we remember personally familiar faces in terms of their identity, gender, and race. In three experiments, based on faces personally familiar to our participants, we created sets of face morphs that parametrically varied the faces in terms of identity, sex, or race, using a 3-dimensional morphable face model. For each familiar face, we presented those face morphs together with the original face and asked participants to pick the correct “real” face among the morph distracters in each set. They were instructed to pick the face that most closely resembled their memory of that familiar person. We found that participants excelled at retrieving the correct familiar faces among the distracters when the faces were manipulated in terms of their idiosyncratic features (their identity information), but they were less sensitive to changes along the gender and race continua. Image similarity analyses indicate that the observed difference cannot be attributed to different levels of image similarity between manipulations. These findings demonstrate that idiosyncratic and categorical face information is represented differently in memory, even for the faces of people we are very familiar with. Implications for current models of face recognition are discussed.
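    The stimulus logic, parametric morphs of a familiar face toward another identity, the other sex, or another race in the coefficient space of a morphable model, can be pictured with a simple interpolation sketch. The abstract does not describe the actual morphing procedure; the code below is an assumed, minimal rendering in which morphs are linear blends of shape coefficients, and all names and dimensions are placeholders.

```python
import numpy as np

def morph_continuum(coeff_original, coeff_target, steps=7):
    """Linearly interpolate morphable-model shape coefficients from the
    original (familiar) face toward a target (another identity, an
    opposite-sex average, or an other-race average).  Returns an array of
    shape (steps, n_coeffs); row 0 is the unmodified familiar face."""
    t = np.linspace(0.0, 1.0, steps)[:, None]        # morph levels 0..1
    return (1.0 - t) * coeff_original + t * coeff_target

# Illustrative use with random placeholder coefficient vectors
rng = np.random.default_rng(0)
familiar, other_identity = rng.normal(size=199), rng.normal(size=199)
identity_morphs = morph_continuum(familiar, other_identity)
# Distracters for the forced choice would come from intermediate rows,
# the task being to pick row 0 (the "real" face) among them.
```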

    EMPATH: A Neural Network that Categorizes Facial Expressions

    There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
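    The paper's central claim, that one model's graded outputs can account for both categorical and continuous patterns of data, can be illustrated with any six-way classifier. The sketch below is a generic softmax readout, not the authors' EMPATH architecture; the feature dimension, weights, and names are placeholders.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(face_features, W, b):
    """Six-way emotion readout from some face representation.
    Returns both the graded response vector (a point in a continuous,
    low-dimensional expression space) and the discrete winning category
    (the categorical-perception style decision)."""
    probs = softmax(W @ face_features + b)
    return probs, EMOTIONS[int(np.argmax(probs))]

# Placeholder weights and input, for illustration only
rng = np.random.default_rng(1)
W, b = rng.normal(size=(6, 40)), np.zeros(6)
probs, label = classify(rng.normal(size=40), W, b)
print(label, np.round(probs, 2))
```

    Read categorically, the argmax changes sharply as an input crosses a decision boundary, which is where discrimination is enhanced; read continuously, distances between the graded probability vectors give the similarity structure, with blends such as "surprise" falling between "happiness" and "fear." Both readings come from the same outputs, which is the spirit of the abstract's argument.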