15 research outputs found

    3D Face Synthesis Driven by Personality Impression

    Synthesizing 3D faces that give certain personality impressions is commonly needed in computer games, animations, and virtual-world applications for producing realistic virtual characters. In this paper, we propose a novel approach to synthesize 3D faces based on personality impression for creating virtual characters. Our approach consists of two major steps. In the first step, we train classifiers using deep convolutional neural networks on a dataset of images with personality-impression annotations, so that they can predict the personality impression of a face. In the second step, given a 3D face and a desired personality-impression type as user inputs, our approach optimizes the facial details against the trained classifiers, so as to synthesize a face that gives the desired personality impression. We demonstrate our approach on a variety of 3D face models. Perceptual studies show that the perceived personality impressions of the synthesized faces agree with the target personality impressions specified for synthesis. Please refer to the supplementary materials for all results. (8 pages, 6 figures.)
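    The second step described above, optimizing face parameters against a trained classifier, can be sketched as gradient ascent on the classifier's score. The sketch below substitutes a toy logistic classifier for the paper's CNNs; all names, shapes, and values are illustrative assumptions.

```python
import numpy as np

def synthesize(face_params, w, b, target=1.0, lr=0.1, steps=200):
    """Nudge facial-detail parameters until a (stand-in logistic)
    personality classifier scores them close to `target`."""
    x = face_params.copy()
    for _ in range(steps):
        score = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # classifier output in (0, 1)
        x += lr * (target - score) * w              # ascend toward the target score
    return x

# toy demo: 5 facial-detail parameters, random stand-in classifier
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
face = np.zeros(5)
new_face = synthesize(face, w, b)
```

    In practice the gradient would come from backpropagation through the CNN rather than a closed-form linear model, but the optimization loop has the same structure.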

    An intuitive control space for material appearance

    Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, or novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly, in order to support and encourage further research in this direction.
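    The attribute-to-representation mapping described above can be illustrated with a plain Gaussian radial-basis-function interpolant from attribute ratings to PCA coefficients. This is a simplified stand-in for the paper's trained functionals; the shapes, kernel width, and data below are illustrative.

```python
import numpy as np

def fit_rbf(attrs, pca_coeffs, eps=1.0):
    """Fit a Gaussian RBF map from perceptual-attribute ratings to
    PCA coefficients of BRDFs (toy version of the paper's functionals)."""
    d2 = ((attrs[:, None, :] - attrs[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-eps * d2)                     # n x n kernel matrix
    return np.linalg.solve(phi, pca_coeffs)     # interpolation weights

def eval_rbf(attrs_train, weights, query, eps=1.0):
    d2 = ((query[:, None, :] - attrs_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ weights          # predicted PCA coefficients

# toy demo: 10 materials, 3 perceptual attributes, 5 PCA dimensions
rng = np.random.default_rng(1)
A = rng.uniform(size=(10, 3))
C = rng.normal(size=(10, 5))
W = fit_rbf(A, C)
pred = eval_rbf(A, W, A)  # an interpolant reproduces its training data
```

    A predicted coefficient vector would then be projected back through the PCA basis to obtain an editable BRDF.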

    Nonverbal communication in virtual reality: Nodding as a social signal in virtual interactions

    Nonverbal communication is an important part of human communication, including head nodding, eye gaze, proximity, and body orientation. Recent research has identified specific patterns of head nodding linked to conversation, namely mimicry of head movements at a 600 ms delay and fast nodding when listening. In this paper, we implemented these head-nodding behaviour rules in virtual humans, and we tested the impact of these behaviours and whether they led to increases in trust and liking towards the virtual humans. We use Virtual Reality technology to simulate a face-to-face conversation, as VR provides a high level of immersiveness and social presence, very similar to face-to-face interaction. We then conducted a study with human-subject participants, who took part in conversations with two virtual humans, rated the virtual characters' social characteristics, and completed an evaluation of their implicit trust in the virtual human. Results showed more liking for and more trust in the virtual human whose nodding behaviour was driven by realistic behaviour rules. This supports the psychological models of nodding and advances our ability to build realistic virtual humans.
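    The two behaviour rules above (mimicry at a 600 ms delay, fast nodding while listening) can be sketched as a simple per-frame controller: a delay buffer replays the human's head pitch, and a fast oscillation is overlaid when the agent is listening. Frame rate, nod amplitude, and nod frequency below are illustrative assumptions, not values from the paper.

```python
import math
from collections import deque

class NoddingController:
    """Per-frame head-pitch controller: delayed mimicry + listener nods."""
    def __init__(self, fps=60, delay_ms=600):
        self.buffer = deque([0.0] * int(fps * delay_ms / 1000))  # 600 ms of frames
        self.fps = fps
        self.frame = 0

    def step(self, human_pitch, listening):
        self.buffer.append(human_pitch)
        mimic = self.buffer.popleft()  # the human's pitch from 600 ms ago
        nod = 0.0
        if listening:                  # fast listener nod: ~4 Hz oscillation
            nod = 2.0 * math.sin(2 * math.pi * 4 * self.frame / self.fps)
        self.frame += 1
        return mimic + nod

# demo: with listening off, the output is the input delayed by 36 frames
ctrl = NoddingController()
trace = [ctrl.step(5.0, listening=False) for _ in range(37)]
```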

    Differential effects of face-realism and emotion on event-related brain potentials and their implications for the uncanny valley theory

    Schindler S, Zell E, Botsch M, Kißler J. Differential effects of face-realism and emotion on event-related brain potentials and their implications for the uncanny valley theory. Scientific Reports. 2017;7(1):45003.
    Cartoon characters are omnipresent in popular media. While few studies have scientifically investigated their processing, in computer graphics, efforts are made to increase realism. Yet, close approximations of reality have been suggested to sometimes evoke a feeling of eeriness, the “uncanny valley” effect. Here, we used high-density electroencephalography to investigate brain responses to professionally stylized happy, angry, and neutral character faces. We employed six face-stylization levels varying from abstract to realistic and investigated the N170, early posterior negativity (EPN), and late positive potential (LPP) event-related components. The face-specific N170 showed a u-shaped modulation, with stronger reactions to both the most abstract and the most realistic faces compared to medium-stylized faces. For abstract faces, the N170 was generated more occipitally than for real faces, implying stronger reliance on structural processing. Although emotional faces elicited the highest amplitudes on both the N170 and EPN, realism and expression interacted on the N170. Finally, the LPP increased linearly with face realism, reflecting increased activity in visual and parietal cortex for more realistic faces. The results reveal differential effects of face stylization on distinct face-processing stages and suggest a perceptual basis to the uncanny valley hypothesis. They are discussed in relation to face perception, media design, and computer graphics.
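    Components like the N170 or LPP are typically quantified as the mean amplitude in a fixed time window of the trial-averaged signal. A minimal sketch of that computation, with illustrative shapes and windows (the exact windows and electrodes used in the study are not reproduced here):

```python
import numpy as np

def component_amplitude(epochs, times, window):
    """Mean ERP amplitude per channel in a component's time window.
    epochs: (trials, channels, samples); times: (samples,) in seconds."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    erp = epochs.mean(axis=0)          # average over trials -> (channels, samples)
    return erp[:, mask].mean(axis=1)   # per-channel mean amplitude in the window

# toy demo: 50 trials, 4 channels, 1 s of data at 250 Hz
rng = np.random.default_rng(2)
times = np.arange(0, 1.0, 1 / 250)
epochs = rng.normal(size=(50, 4, times.size))
n170 = component_amplitude(epochs, times, (0.13, 0.20))  # ~130-200 ms window
```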

    Digital Manipulation of Human Faces: Effects on Emotional Perception and Brain Activity

    The study of human face-processing has granted insight into key adaptations across various social and biological functions. However, there is an overall lack of consistency regarding digital alteration styles of human-face stimuli. To investigate this, two independent studies were conducted examining unique effects of image construction and presentation. In the first study, three primary stimulus presentation styles (color, black and white, cutout) were used across iterations of non-thatcherized/thatcherized and non-inverted/inverted presentations. Outcome measures included subjective reactions, measured via ratings of perceived “grotesqueness,” and objective outcomes, measured via N170 event-related potentials (ERPs) recorded with electroencephalography. Results of the subjective measures indicated that thatcherized images were associated with an increased level of grotesque perception, regardless of overall condition variant and inversion status. A significantly larger N170 component was found in response to cutout-style images of human faces, thatcherized images, and inverted images. The results suggest that cutout image morphology may be a well-suited presentation style when examining ERPs and facial processing of otherwise unaltered human faces. Moreover, less emphasis can be placed on decision making regarding the main condition morphology of human-face stimuli as it relates to negatively valent reactions. The second study explored commonalities between thatcherized and uncanny images, aiming to establish a link between previously disparate areas of human-face-processing research. Subjective reactions to stimuli were measured via participant ratings of “off-putting.” ERP data were gathered to explore whether any unique effects emerged in the N170 and N400 components.
    Two main “morph continuums” of stimuli with uncanny features, provided by Eduard Zell (see Zell et al., 2015), were utilized. A novel approach of thatcherizing images along these continuums was used. Thatcherized images across both continuums were regarded as more off-putting than non-thatcherized images, indicating a robust subjective effect of thatcherization that was relatively unimpacted by additional manipulation of key featural components. Conversely, results from brain activity indicated no significant differences in the N170 between levels of shape stylization and their thatcherized counterparts. Unique effects between continuums and exploratory N400 results are discussed.
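    The Thatcher manipulation referenced above rotates local features (eyes, mouth) by 180 degrees while leaving the face upright. A minimal sketch on an image array; the region coordinates are illustrative and would come from facial landmarks in practice:

```python
import numpy as np

def thatcherize(face, regions):
    """Rotate each given (row0, row1, col0, col1) region of the image
    by 180 degrees, leaving the rest of the face untouched."""
    out = face.copy()
    for r0, r1, c0, c1 in regions:
        patch = out[r0:r1, c0:c1].copy()
        out[r0:r1, c0:c1] = patch[::-1, ::-1]  # 180-degree rotation
    return out

# toy demo on a synthetic 8x8 "face" with two feature regions
face = np.arange(64).reshape(8, 8)
thatch = thatcherize(face, [(1, 3, 1, 3), (5, 7, 2, 6)])
```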

    To Stylize or not to Stylize? The Effect of Shape and Material Stylization on the Perception of Computer-Generated Faces

    Zell E, Aliaga C, Jarabo A, et al. To Stylize or not to Stylize? The Effect of Shape and Material Stylization on the Perception of Computer-Generated Faces. ACM Transactions on Graphics. 2015;34(6):184:1-184:12.
    Virtual characters contribute strongly to the overall visuals of 3D animated films. However, designing believable characters remains a challenging task. Artists rely on stylization to increase appeal or expressivity, exaggerating or softening specific features. In this paper we analyze two of the most influential factors that define how a character looks: shape and material. With the help of artists, we design a set of carefully crafted stimuli consisting of different stylization levels for both parameters, and analyze how different combinations affect the perceived realism, appeal, eeriness, and familiarity of the characters. Moreover, we investigate how this affects the perceived intensity of different facial expressions (sadness, anger, happiness, and surprise). Our experiments reveal that shape is the dominant factor when rating realism and expression intensity, while material is the key component for appeal. Furthermore, our results show that realism alone is a bad predictor of appeal, eeriness, or attractiveness.
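    The factorial shape-by-material design above can be summarized by comparing how far each factor moves the mean rating. A toy summary of main effects on a ratings grid (not the paper's statistical analysis; the data here are synthetic):

```python
import numpy as np

def main_effects(ratings):
    """Given a shape x material grid of mean ratings, return the range
    of the row means (shape effect) and column means (material effect)."""
    shape_means = ratings.mean(axis=1)     # average over material levels
    material_means = ratings.mean(axis=0)  # average over shape levels
    return np.ptp(shape_means), np.ptp(material_means)

# synthetic example: 5 shape levels x 5 material levels of realism ratings,
# constructed so that ratings vary mostly along the shape axis
realism = np.linspace(1, 5, 25).reshape(5, 5)
shape_effect, material_effect = main_effects(realism)
```

    On real data this pattern (a larger shape effect for realism, a larger material effect for appeal) is what the paper reports.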

    A meta-analysis of the uncanny valley's independent and dependent variables

    The uncanny valley (UV) effect is a negative affective reaction to human-looking artificial entities. It hinders comfortable, trust-based interactions with android robots and virtual characters. Despite extensive research, a consensus has not formed on its theoretical basis or methodologies. We conducted a meta-analysis to assess operationalizations of human likeness (independent variable) and the UV effect (dependent variable). Of 468 studies, 72 met the inclusion criteria. These studies employed 10 different stimulus creation techniques, 39 affect measures, and 14 indirect measures. Based on 247 effect sizes, a three-level meta-analysis model revealed the UV effect had a large effect size, Hedges’ g = 1.01 [0.80, 1.22]. A mixed-effects meta-regression model with creation technique as the moderator variable revealed face distortion produced the largest effect size, g = 1.46 [0.69, 2.24], followed by distinct entities, g = 1.20 [1.02, 1.38], realism render, g = 0.99 [0.62, 1.36], and morphing, g = 0.94 [0.64, 1.24]. Affective indices producing the largest effects were threatening, likable, aesthetics, familiarity, and eeriness, and indirect measures were dislike frequency, categorization reaction time, like frequency, avoidance, and viewing duration. This meta-analysis—the first on the UV effect—provides a methodological foundation and design principles for future research.
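    For reference, Hedges’ g, the effect-size metric pooled above, is Cohen's d computed with a pooled standard deviation and a small-sample bias correction. A minimal sketch with toy numbers (not data from the meta-analysis):

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g between two groups: bias-corrected standardized mean difference."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d with pooled SD
    J = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample correction factor
    return J * d

# toy example: eeriness ratings, human-looking stimuli vs control
g = hedges_g(m1=5.2, sd1=1.0, n1=30, m2=4.2, sd2=1.0, n2=30)
```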
