Considerations for believable emotional facial expression animation
Facial expressions can be used to communicate emotional states through the use of universal signifiers within key regions of the face. Psychology research has identified what these signifiers are and how different combinations and variations can be interpreted. Research into expressions has informed animation practice, but as yet very little is known about the movement within and between emotional expressions. A better understanding of sequence, timing, and duration could better inform the production of believable animation. This paper introduces the idea of expression choreography and considers how tests of observer perception might enhance our understanding of moving emotional expressions.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
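As a plausibility check on the reported statistic, the p-value for F(1,4) = 2.565 can be recomputed from the F statistic alone: with one numerator degree of freedom, F = t², and for four denominator degrees of freedom the t CDF has a closed form. A minimal sketch (the closed form used here is specific to four degrees of freedom):

```python
import math

def f_test_p_value_df4(F):
    """p-value for an F(1, 4) statistic.
    For F(1, d), F = t^2 with t having d degrees of freedom; for d = 4
    the t CDF has the closed form 1/2 + (3/4)u - (1/4)u^3, u = t/sqrt(t^2 + 4),
    so the two-sided tail probability is 1 - (3/2)u + (1/2)u^3.
    """
    t = math.sqrt(F)
    u = t / math.sqrt(t * t + 4.0)
    return 1.0 - 1.5 * u + 0.5 * u ** 3

print(round(f_test_p_value_df4(2.565), 3))  # ~0.185, matching the reported p = 0.185
```

The computed value agrees with the p = 0.185 quoted in the abstract, so the reported F and p are mutually consistent.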
Psychophysical investigation of facial expressions using computer animated faces
The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like the mouth, eyes, and eyebrows, as well as rigid head motion, play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar is, in general, a useful tool for the investigation of facial expressions, although improvements have to be made to reach higher recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge, the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.
The inaccuracy and insincerity of real faces
Since conversation is a central human activity, the synthesis of proper conversational behavior for Virtual Humans will become a critical issue. Facial expressions represent a critical part of interpersonal communication. Even with the most sophisticated, photo-realistic head model, an avatar whose behavior is unbelievable or even uninterpretable will be an inefficient or possibly counterproductive conversational partner. Synthesizing expressions can be greatly aided by a detailed description of which facial motions are perceptually necessary and sufficient. Here, we recorded eight core expressions from six trained individuals using a method-acting approach. We then psychophysically determined how recognizable and believable those expressions were. The results show that people can identify these expressions quite well, although there is some systematic confusion between particular expressions. The results also show that people found the expressions to be less than convincing. The pattern of confusions and believability ratings demonstrates that there is considerable variation in natural expressions and that even real facial expressions are not always understood or believed. Moreover, the results provide the groundwork necessary to begin a more fine-grained analysis of the core components of these expressions. As some initial results from a model-based manipulation of the image sequences show, a detailed description of facial expressions can be an invaluable aid in the synthesis of unambiguous and believable Virtual Humans.
Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology
The topic of this dissertation is the anatomical, psychological, and cultural examination of a human face in order to effectively construct an anatomy-driven 3D virtual face customization and action model. In order to gain a broad perspective of all aspects of a face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer generated facial customization and action model were designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, "kot-mi-nam (flower-like beautiful guy)," was modeled and analyzed as a case study. The "kot-mi-nam" phenomenon is overviewed in textual, visual, and contextual aspects, which reveals the gender- and sexuality-fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of a human face, psychological theories of face recognition and attractiveness, and state-of-the-art face construction projects in the various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and Multi-layered Facial Action Model (MFAM) based on the analysis of human anatomy, to achieve a cost-effective yet realistic quality of facial animation without using 3D scanned data. In the experiments, results for the facial customization for gender, race, fat, and age showed that BCFC achieved enhanced performance of 25.20% compared to the existing program Facegen, and 44.12% compared to Facial Studio. The experimental results also proved the realistic quality and effectiveness of MFAM compared with the blend shape technique, enhancing 2.87% and 0.03% of facial area per second for happiness and anger expressions, respectively.
In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%), and Caucasian (66.42-66.40%). Culturally-specific images can be misinterpreted in different cultures, due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of the makers and can also be interpreted differently by viewers in different cultures.
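The abstract does not give the internals of BCFC, but the core primitive it names, the Bezier curve, is standard: a curve is evaluated by repeatedly interpolating its control points (de Casteljau's algorithm), and a facial profile can be shaped by moving those control points. A minimal sketch with a hypothetical 2D control polygon (not taken from the dissertation):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation of the control points (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic control polygon sketching a 2D profile segment.
profile = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(profile, 0.5))  # curve midpoint: (2.0, 1.5)
```

Editing the intermediate control points while keeping the endpoints fixed deforms the curve smoothly, which is what makes curve-based parameterizations attractive for customization systems of this kind.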
FACSGen: A Tool to Synthesize Emotional Facial Expressions Through Systematic Manipulation of Facial Action Units
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
Geometry-Aware Face Completion and Editing
Face completion is a challenging generation task because it requires generating visually pleasing new pixels that are semantically consistent with the unmasked face region. This paper proposes a geometry-aware Face Completion and Editing NETwork (FCENet) that systematically studies facial geometry from the unmasked region. First, a facial geometry estimator is learned to estimate facial landmark heatmaps and parsing maps from the unmasked face image. Then, an encoder-decoder generator completes the face image and disentangles its mask areas, conditioned on both the masked face image and the estimated facial geometry images. In addition, since manually labeled masks exhibit a low-rank property, a low-rank regularization term is imposed on the disentangled masks, enforcing our completion network to handle occlusion areas of various shapes and sizes. Furthermore, our network can generate diverse results from the same masked input by modifying the estimated facial geometry, which provides a flexible means to edit the completed face appearance. Extensive experimental results qualitatively and quantitatively demonstrate that our network is able to generate visually pleasing face completion results and edit face attributes as well.
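The low-rank prior on occlusion masks has a simple intuition: an axis-aligned rectangular mask is the outer product of two indicator vectors, so as a matrix it has rank 1. A minimal dependency-free sketch with a hypothetical toy mask (the actual FCENet regularizer operates on learned soft masks, not this toy example):

```python
def matrix_rank(M, eps=1e-9):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if abs(M[r][col]) > eps), None)
        if pivot is None:
            continue  # no usable pivot in this column
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and abs(M[r][col]) > eps:
                f = M[r][col] / M[rank][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Toy 6x6 image with a 3x2 rectangular occlusion covering rows 1-3, cols 2-3.
mask = [[1.0 if 1 <= r <= 3 and 2 <= c <= 3 else 0.0 for c in range(6)]
        for r in range(6)]
print(matrix_rank(mask))  # rectangular mask = outer product of indicators -> 1
```

Because the rank itself is non-differentiable, regularizers of this kind are typically implemented with a convex surrogate such as the nuclear norm (the sum of singular values), which penalizes high-rank, irregular mask estimates during training.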
- …