An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games
Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor-intensive and therefore expensive. Emotion corpora and neural-network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow toward an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion-generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from whom to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow proceeds from developing and annotating a fictional scenario with actors, to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion-data annotation of the corpus and the network, and finally to assessing how closely its autonomous animation control of a 3D character facial mesh resembles the actor's behavior. The resulting workflow includes a method for developing a neural-network architecture whose initial efficacy as a facial emotion-expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.
First impressions: A survey on vision-based apparent personality trait analysis
© 2019 IEEE. Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field.
Do Emotions Have an Effect on Branded Versus Non-Branded Cold Green Tea Drinks?
Introduction
The objectives were to assess consumers' emotional valence in response to drinking canned green tea, and to assess the effects of brand identification. Corollary objectives were to determine triangulated relationships across qualitative and quantitative approaches.
Methods
61 panelists evaluated identical tea samples: 27 were informed of the brand, 34 received tea without branding. Panelists' responses were assessed by self-report with the EsSense25 emotional profile tool, by the instrumental FaceReader, and through qualitative open-ended interviews.
Results
For FaceReader (0-1 scale), the top mean scores were: Happy (0.98), Surprised (0.59), and Disgusted (0.50). When controlling for age and gender, branding had a significant positive association with FaceReader Happy (p=.032). The top three (Likert 1-5) mean descriptive scores for EsSense25 emotional valences were: Good (3.56), Satisfied (3.47), and Pleasant (3.42). The strongest significant correlations between EsSense25 and FaceReader were negative associations between FaceReader's measurement of Happy and the EsSense25 measurements of Aggressive (p=.004), Wild (p=.022), and Worried (p=.030). Five thematic elements uncovered in the interviews potentially elucidated the quantitative findings: 70% of branded participants recalled Memories (n=19) versus 38% of unbranded participants (n=13). The interviews also revealed that 64.7% of unbranded participants associated the product with its Flavor (n=22) versus 67% of branded participants (n=19). Responses from 22% of the branded group addressed canned green tea associations with Cost or pricing (n=6); the unbranded group was excluded from Cost questioning because these participants were blinded to the actual product, and no arbitrary statements about cost were recorded from interviews with this group. For Can Imagery, 37% of the branded group only (n=10) commented on the can; non-branded participants were blinded to the green tea product, and references to other drinks (not used in the study) were excluded from the imagery theme. One interesting qualitative finding was that 0% (n=0) of branded-group participants mentioned the Health benefits of the product, while 26% of unbranded participants (n=9) mentioned the health benefits of the green tea.
Discussion and Conclusion
The FaceReader results from both branded and unbranded participants suggest that the sight of the green tea produced ambivalence. FaceReader uncovered a significant positive association with "Happy" in the branded group when controlling for age and gender, suggesting an emotional connection with the flavor and a memorable experience that motivates consumers' choices. The EsSense25 results showed strong positive emotional scores of Good, Satisfied, and Pleasant for both groups, indicating that participants were satisfied with the green tea product as a whole. The significant inverse relationship between FaceReader "Happy" outputs and the self-reported EsSense25 outputs of Aggressive, Wild, and Worried may elucidate nuances in the emotional outputs recorded by FaceReader. The qualitative thematic elements demonstrated that nostalgia influences product appreciation; that branding, at least in the present study, had no effect on taste beyond overall satisfaction; that the branded product was associated with being cheap; that the can affected consumer desire for the product; and that only unbranded tea was associated with health benefits.
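The covariate adjustment reported in the Results (testing the brand effect on FaceReader Happy while controlling for age and gender) amounts to an ordinary least-squares regression with a branding indicator. The sketch below illustrates the idea on simulated data; the group sizes mirror the study, but the scores, coefficient values, and variable names are invented for illustration, not taken from the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 61
branded = (np.arange(n) < 27).astype(float)     # 27 branded, 34 unbranded
age = rng.uniform(18, 65, n)                    # simulated covariates
gender = rng.integers(0, 2, n).astype(float)
# Simulated Happy scores with an assumed positive branding effect
happy = 0.6 + 0.15 * branded + rng.normal(0, 0.1, n)

# OLS: happy ~ intercept + branded + age + gender
X = np.column_stack([np.ones(n), branded, age, gender])
beta, *_ = np.linalg.lstsq(X, happy, rcond=None)

# t-statistic for the branding coefficient
resid = happy - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_branded = beta[1] / se[1]
print(f"branding effect = {beta[1]:.3f}, t = {t_branded:.2f}")
```

Because branding is a binary predictor, its coefficient is the adjusted mean difference in Happy between the branded and unbranded groups.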
Investigation of an emotional virtual human modelling method
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. In order to simulate virtual humans more realistically and endow them with life-like behaviours, several exploratory studies on emotion calculation, synthetic perception, and decision-making processes are discussed. A series of sub-modules have been designed, and simulation results are presented with discussion.
A vision-based synthetic perception system is proposed in this thesis, which allows virtual humans to detect the surrounding virtual environment through a collision-based synthetic vision system. It enables autonomous virtual humans to change their emotion states according to stimuli in real time. The synthetic perception system also allows virtual humans to remember limited information within their own first-in-first-out (FIFO) short-term virtual memory.
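The bounded first-in-first-out memory described above maps naturally onto a fixed-capacity queue: once capacity is reached, each new percept evicts the oldest one. A minimal sketch in Python, where the capacity, the Stimulus fields, and the example stimuli are illustrative assumptions rather than the thesis's actual data model:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str
    intensity: float  # assumed: how strongly the percept shifts emotion state

class ShortTermMemory:
    """Bounded FIFO memory: when full, appending discards the oldest percept."""

    def __init__(self, capacity: int = 3):
        self._items = deque(maxlen=capacity)

    def perceive(self, stimulus: Stimulus) -> None:
        self._items.append(stimulus)  # evicts the oldest item at capacity

    def recall(self) -> list:
        return list(self._items)

mem = ShortTermMemory(capacity=3)
for name in ["fire", "exit_sign", "crowd", "smoke"]:
    mem.perceive(Stimulus(name, intensity=0.5))
print([s.name for s in mem.recall()])  # "fire" has been evicted
```

Using `deque(maxlen=...)` makes the eviction automatic, so the memory can never grow beyond its fixed capacity.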
The new emotion-generation method includes a novel hierarchical emotion structure and a group of emotion calculation equations, which enable virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can use the information in virtual memory and the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. These emotion states are important internal references for virtual humans to adopt appropriate behaviours, and are also key cues for their decision making.
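The thesis's actual equations are not reproduced in the abstract, but a numerical emotion state of this kind is often modelled as a leaky integrator: the value decays toward a neutral baseline and is pushed upward by incoming stimuli. The sketch below is only an illustrative stand-in under that assumption; the decay rate, gain, and clamping are invented parameters, not the thesis's derived equations.

```python
import math

def update_emotion(current: float, stimulus: float, dt: float,
                   decay_rate: float = 0.5, gain: float = 1.0) -> float:
    """One hypothetical update step for a numerical emotion state in [0, 1].

    The state decays exponentially toward a neutral baseline of 0 and is
    driven by the intensity of the current stimulus.
    """
    decayed = current * math.exp(-decay_rate * dt)
    return max(0.0, min(1.0, decayed + gain * stimulus * dt))

# Repeated exposure to a constant stimulus drives the state toward a plateau
e = 0.0
for _ in range(5):
    e = update_emotion(e, stimulus=0.4, dt=1.0)
print(f"emotion state after 5 steps: {e:.3f}")
```

A form like this gives the real-time behaviour the abstract describes: emotion rises while stimuli persist and relaxes back toward neutral when they stop.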
The work introduces a dynamic emotional motion database structure for virtual human modelling. To develop realistic virtual human behaviours, many subjects were motion-captured while performing emotional motions, with or without intent. The captured motions were applied to virtual characters and implemented in different virtual scenarios to help evoke and verify design ideas and possible simulation outcomes (such as fire evacuation).
This work also introduces simple heuristics theory into the decision-making process in order to make the virtual human's decisions more human-like. Emotion values are proposed as a group of key cues for decision making under the simple heuristic structures. A data interface that connects the emotion calculation and the decision-making structure has also been designed for the simulation system.
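One well-known simple heuristic of the kind referenced above is "take-the-best": cues are examined in order of assumed validity, and the first cue that discriminates between two options decides, with all remaining cues ignored. A hypothetical sketch using emotion values as cues; the cue names, values, and ordering are invented for illustration, not taken from the thesis.

```python
def take_the_best(option_a: dict, option_b: dict, cue_order: list) -> str:
    """Compare two options cue by cue; the first cue whose values differ
    decides the choice (a fast-and-frugal rule)."""
    for cue in cue_order:
        a, b = option_a.get(cue, 0.0), option_b.get(cue, 0.0)
        if a != b:
            return "a" if a > b else "b"
    return "tie"

# Hypothetical emotion values a virtual human holds for two escape routes
route_main = {"calm": 0.2, "hope": 0.4}   # crowded main exit
route_side = {"calm": 0.8, "hope": 0.3}   # quieter side exit
choice = take_the_best(route_main, route_side, cue_order=["calm", "hope"])
# "calm" discriminates first (0.2 vs 0.8), so the side route ("b") is chosen
```

The appeal of such heuristics is exactly what the abstract suggests: the agent reaches a plausible decision from a few emotion cues without weighing every factor.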
Perception of meaning and usage motivations of emoticons among Americans and Chinese users
Do people of different cultures agree on the meaning and use of emoticons? This study addresses this question from an inter-cultural perspective and explores the use of emoticons in the American and Chinese computer-mediated communication (CMC) communities. The research indicates that both American and Chinese participants use emoticons for entertainment, informational, and social-interaction motivations, but that Americans are more likely to use emoticons for informational motivations than the Chinese, while Chinese participants are more likely to use emoticons for social interaction than American participants. The results correspond to the cultural differences between the two countries along the low-/high-context and individualism/collectivism dimensions. Moreover, the results also show that American and Chinese participants disagree on the meaning of most emoticons used in the study.
Emotions, behaviour and belief regulation in an intelligent guide with attitude
Abstract unavailable; please refer to the PDF.