Machine Analysis of Facial Expressions
No abstract
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.
Towards facial mimicry for a virtual human
Boukricha H, Wachsmuth I. Towards facial mimicry for a virtual human. In: Reichardt D, ed. Proceedings of the 4th Workshop on Emotion and Computing - Current Research and Future Impact. 2009: 32-39.
Mimicking others' facial expressions is believed to be important in making virtual humans more natural and believable. As a result of an empirical study conducted with a virtual human, a large face repertoire of about 6000 faces was obtained, arranged in Pleasure-Arousal-Dominance (PAD) space with respect to two dominance values (dominant vs. submissive). Each face in the repertoire consists of different intensities of the virtual human's facial muscle actions, called Action Units (AUs), modeled following the Facial Action Coding System (FACS). Using this face repertoire, an approach towards realizing facial mimicry for a virtual human is the topic of this paper. A preliminary evaluation of this first approach is carried out with the basic emotions Happy and Angry.
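The abstract describes a repertoire of faces indexed by points in PAD space, each face defined by AU intensities. A minimal sketch of how such a repertoire might be queried is a nearest-neighbour lookup in PAD space; the repertoire entries, AU names, and intensity values below are illustrative assumptions, not data from the cited study:

```python
import numpy as np

# Hypothetical miniature face repertoire: each row is a point in
# Pleasure-Arousal-Dominance (PAD) space, paired with AU intensities.
# All names and values here are illustrative, not from the paper.
repertoire_pad = np.array([
    [ 0.8,  0.5,  0.3],   # happy-like face
    [-0.6,  0.7, -0.4],   # angry-like face
    [ 0.0, -0.3,  0.0],   # neutral-like face
])
repertoire_aus = [
    {"AU6": 0.9, "AU12": 1.0},   # cheek raiser + lip corner puller
    {"AU4": 1.0, "AU7": 0.7},    # brow lowerer + lid tightener
    {},                          # neutral: no active AUs
]

def nearest_face(pad_point):
    """Return the AU intensities of the repertoire face closest
    (in Euclidean distance) to the given PAD point."""
    dists = np.linalg.norm(repertoire_pad - np.asarray(pad_point), axis=1)
    return repertoire_aus[int(np.argmin(dists))]

# A pleasant, mildly aroused emotional state maps to the happy-like face.
aus = nearest_face([0.7, 0.4, 0.2])
```

The real repertoire of ~6000 faces would presumably use a denser sampling of PAD space and interpolation rather than a hard nearest-neighbour choice, but the lookup structure is the same.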
Timing is everything: A spatio-temporal approach to the analysis of facial actions
This thesis presents a fully automatic facial expression analysis system based on the Facial Action Coding System (FACS). FACS is the best known and most commonly used system for describing facial activity in terms of facial muscle actions (i.e., action units, AUs). We present our research on the analysis of the morphological, spatio-temporal and behavioural aspects of facial expressions. In contrast with most other researchers in the field, who use appearance-based techniques, we use a geometric feature-based approach. We argue that this approach is more suitable for analysing the temporal dynamics of facial expressions. Our system is capable of explicitly exploring the temporal aspects of facial expressions in an input colour video in terms of their onset (start), apex (peak) and offset (end).
The fully automatic system presented here detects 20 facial points in the first frame and tracks them throughout the video. From the tracked points we compute geometry-based features, which serve as the input to the remainder of our system. The AU activation detection system uses GentleBoost feature selection and a Support Vector Machine (SVM) classifier to determine which AUs were present in an expression. The temporal dynamics of active AUs are recognised by a hybrid GentleBoost-SVM-Hidden Markov Model classifier. The system is capable of analysing 23 out of 27 existing AUs with high accuracy.
The main contributions of the work presented in this thesis are the following: we have created a method for fully automatic AU analysis with state-of-the-art recognition results; we have proposed, for the first time, a method for recognising the four temporal phases of an AU; we have built the largest comprehensive database of facial expressions to date; and we present, for the first time in the literature, two studies on the automatic distinction between posed and spontaneous expressions.
Aligning Figurative Paintings With Their Sources for Semantic Interpretation
This paper reports steps in probing the artistic methods of figurative painters through computational algorithms. We explore a comparative method that investigates the relation between the source of a painting, typically a photograph or an earlier painting, and the painting itself. A first crucial step in this process is to find the source and to crop, standardize and align it to the painting so that a comparison becomes possible. The next step is to apply different low-level algorithms to construct difference maps for color, edges, texture, brightness, etc. From this basis, various subsequent operations become possible to detect and compare features of the image, such as facial action units and the emotions they signify. This paper demonstrates a pipeline we have built and tested using paintings by the renowned contemporary painter Luc Tuymans. We focus in this paper particularly on the alignment process, on edge difference maps, and on the utility of the comparative method for bringing out the semantic significance of a painting.
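The edge-difference-map step described above can be sketched minimally: compute an edge map for the aligned source and for the painting, then take the per-pixel absolute difference. The finite-difference gradient used here is an assumed stand-in for whichever edge detector the actual pipeline uses:

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edges via simple finite differences
    (a minimal stand-in for the pipeline's real edge detector)."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, columns
    return np.hypot(gx, gy)

def edge_difference_map(source, painting):
    """Per-pixel absolute difference between the edge maps of an
    aligned source image and the painting. Assumes both are
    grayscale arrays of identical shape after alignment."""
    assert source.shape == painting.shape, "images must be aligned first"
    return np.abs(edge_map(source) - edge_map(painting))

# Regions where the painter suppressed or sharpened edges relative to
# the source show up as high values in the difference map.
```

Difference maps for color, texture, or brightness would follow the same structure, swapping the edge detector for the corresponding low-level operator.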