658 research outputs found

    Psychophysical investigation of facial expressions using computer animated faces

    No full text
    The human face is capable of producing a large variety of facial expressions that supply important information for communication. As previous studies using unmanipulated video sequences have shown, movements of single regions such as the mouth, eyes, and eyebrows, as well as rigid head motion, play a decisive role in the recognition of conversational facial expressions. Here, flexible yet realistic computer-animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer-animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of them, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar is, in general, a useful tool for the investigation of facial expressions, although improvements are needed to reach higher recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge, the perceptual quality of computer animations can be improved to reach a higher level of realism and effectiveness.
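    The combinatorial design described above (three regions plus their combinations, crossed with seven expressions) can be enumerated straightforwardly; a minimal sketch in Python, with placeholder expression labels since the abstract does not name the seven expressions used:

```python
from itertools import combinations

REGIONS = ("mouth", "eyes", "eyebrows")
# Placeholder labels; the abstract does not list the seven expressions used.
EXPRESSIONS = ("expr1", "expr2", "expr3", "expr4", "expr5", "expr6", "expr7")

# Every non-empty subset of the three regions: 7 region conditions in total.
region_sets = [c for r in range(1, len(REGIONS) + 1)
               for c in combinations(REGIONS, r)]

# One animation condition per (expression, region subset) pair: 7 x 7 = 49.
conditions = [(expr, regions) for expr in EXPRESSIONS for regions in region_sets]
```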

    Toward a social psychophysics of face communication

    Get PDF
    As a highly social species, humans are equipped with a powerful tool for social communication—the face, which can elicit multiple social perceptions in others due to the rich and complex variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional research methods. More recently, the emerging field of social psychophysics has developed new methods designed to address this challenge. Here, we introduce and review the foundational methodological developments of social psychophysics, present recent work that has advanced our understanding of the face as a tool for social communication, and discuss the main challenges that lie ahead.

    Idiosyncratic body motion influences person recognition

    Get PDF
    Person recognition is an important human ability. The main source of information we use to recognize people is the face. However, there is a variety of other information that contributes to person recognition, and the face is almost exclusively perceived in the presence of a moving body. Here, we used recent motion capture and computer animation techniques to quantitatively explore the impact of body motion on person recognition. Participants were familiarized with two animated avatars each performing the same basic sequence of karate actions with slight idiosyncratic differences in the body movements. The body of both avatars was the same, but they differed in their facial identity and body movements. In a subsequent recognition task, participants saw avatars whose facial identity consisted of morphs between the learned individuals. Across trials, each avatar was seen animated with sequences taken from both of the learned movement patterns. Participants were asked to judge the identity of the avatars. The avatars that contained the two original heads were predominantly identified by their facial identity regardless of body motion. More importantly however, participants identified the ambiguous avatar primarily based on its body motion. This clearly shows that body motion can affect the perception of identity. Our results also highlight the importance of taking into account the face in the context of a body rather than solely concentrating on facial information for person recognition.
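    The ambiguous identities described above are typically produced by morphing; as a minimal illustration (not necessarily the authors' actual morphing pipeline), a linear blend of corresponding face vertices looks like this:

```python
import numpy as np

def morph_faces(face_a, face_b, w):
    """Linear morph between two facial identities.

    face_a, face_b: (n, 3) vertex arrays in one-to-one correspondence.
    w: morph weight in [0, 1]; w = 0.5 yields the maximally ambiguous face.
    """
    return (1.0 - w) * np.asarray(face_a) + w * np.asarray(face_b)
```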

    The Evaluation of Stylized Facial Expressions

    No full text
    Stylized rendering aims to abstract information in an image, making it useful not only for artistic but also for visualization purposes. Recent advances in computer graphics techniques have made it possible to render many varieties of stylized imagery efficiently. So far, however, few attempts have been made to characterize the perceptual impact and effectiveness of stylization. In this paper, we report several experiments that evaluate three different stylization techniques in the context of dynamic facial expressions. Going beyond the usual questionnaire approach, the experiments compare the techniques according to several criteria ranging from introspective measures (subjective preference) to task-dependent measures (recognizability, intensity). Our results shed light on how stylization of image contents affects the perception and subjective evaluation of facial expressions.

    Computational Modeling of Facial Response for Detecting Differential Traits in Autism Spectrum Disorders

    Get PDF
    This dissertation proposes novel computational modeling and computer vision methods for the analysis and discovery of differential traits in subjects with Autism Spectrum Disorders (ASD) using video and three-dimensional (3D) images of the face and facial expressions. ASD is a neurodevelopmental disorder that impairs an individual’s nonverbal communication skills. This work studies ASD from the pathophysiology of facial expressions, which may manifest atypical responses in the face. State-of-the-art psychophysical studies mostly employ naïve human raters to visually score atypical facial responses of individuals with ASD, which may be subjective, tedious, and error prone. A few quantitative studies use intrusive sensors on the face of subjects with ASD, which, in turn, may inhibit or bias their natural facial responses. This dissertation proposes non-intrusive computer vision methods to alleviate these limitations in the search for differential traits in the spontaneous facial responses of individuals with ASD. Two IRB-approved psychophysical studies are performed involving two groups of age-matched subjects: one for subjects diagnosed with ASD and the other for subjects who are typically developing (TD). The facial responses of the subjects are computed from their facial images using the proposed computational models and then statistically analyzed to infer the differential traits of the group with ASD. A novel computational model is proposed to represent the large volume of 3D facial data in a small pose-invariant Frenet frame-based feature space. The inherent pose-invariance of the proposed features alleviates the need for an expensive 3D face registration in the pre-processing step. The proposed modeling framework is not only computationally efficient but also offers competitive performance in 3D face and facial expression recognition tasks when compared with state-of-the-art methods. This computational model is applied in the first experiment to quantify subtle facial muscle responses from the geometry of 3D facial data. Results show a statistically significant asymmetry in the activation of a specific pair of facial muscles (p < 0.05) for the group with ASD, which suggests the presence of a psychophysical trait (also known as an 'oddity') in the facial expressions. For the first time in the ASD literature, the facial action coding system (FACS) is employed to classify the spontaneous facial responses based on facial action units (FAUs). Statistical analyses reveal a significantly (p < 0.01) higher prevalence of the smile expression (FAU 12) for the ASD group when compared with the TD group. The high prevalence of smiles co-occurred with significantly averted gaze (p < 0.05) in the group with ASD, which is indicative of impaired reciprocal communication. The metric associated with incongruent facial and visual responses suggests a behavioral biomarker for ASD. The second experiment shows a higher prevalence of mouth frown (FAU 15) and significantly lower correlations between the activation of several FAU pairs (p < 0.05) in the group with ASD when compared with the TD group. The proposed computational modeling in this dissertation offers promising biomarkers, which may aid in the early detection of subtle ASD-related traits and thus enable effective intervention strategies in the future.
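    As a rough illustration of the Frenet frame idea named above (the dissertation's actual feature construction is more elaborate), discrete Frenet frames along a sampled 3D facial curve can be computed as follows; because a rigid pose change rotates all frames together, quantities expressed in these frames are pose-invariant:

```python
import numpy as np

def frenet_frames(points):
    """Discrete Frenet frames (T, N, B) along a sampled 3D curve.

    points: (n, 3) array of curve samples, e.g. vertices along a facial
    profile curve. Returns unit tangent, normal, and binormal vectors,
    each of shape (n, 3).
    """
    d = np.gradient(points, axis=0)                        # first derivative
    T = d / np.linalg.norm(d, axis=1, keepdims=True)       # unit tangent
    dT = np.gradient(T, axis=0)                            # change of tangent
    N = dT / (np.linalg.norm(dT, axis=1, keepdims=True) + 1e-12)  # unit normal
    B = np.cross(T, N)                                     # binormal completes the frame
    return T, N, B
```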

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
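    The radial-shift manipulation is simple to reproduce; a sketch with a hypothetical helper, assuming stimulus centres are given in degrees of visual angle with fixation at the origin:

```python
import numpy as np

def radial_shift(xy_deg, delta_deg=1.0, rng=None):
    """Shift each stimulus +/- delta_deg along its spoke from fixation.

    xy_deg: (n, 2) stimulus centres in degrees of visual angle, fixation
    at the origin; items are assumed not to sit exactly on fixation.
    """
    rng = np.random.default_rng() if rng is None else rng
    ecc = np.linalg.norm(xy_deg, axis=1, keepdims=True)    # eccentricity of each item
    spokes = xy_deg / ecc                                  # unit vector along each spoke
    sign = rng.choice([-1.0, 1.0], size=(len(xy_deg), 1))  # inward or outward, at random
    return xy_deg + sign * delta_deg * spokes
```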

    The evocation and expression of emotion through documentary animation

    Get PDF
    How might an animator distil and study emotion? Could animation itself be a means to unlock meaning that previous experiments have not been able to access? Animation has the power both to highlight and to conceal emotions as expressed through body movement and gesture. When we view live-action (human interview) documentary footage, we are exposed not just to the spoken words but to the subtle nuances of body movements. How much might be lost when documentary footage is transposed into animation, or indeed, what might be gained, translated through the personal and artistic view of the animator? Drawing on my own previous experience as a games animator, and now using a research-through-practice methodology, this paper explores the results of the first of a series of animations created to explore the more subtle nuances of gesture. Through the medium of a documentary-style interview, opposing topics are used to evoke strong emotions, first of happiness, then of sadness, with a view to accessing real rather than acted (simulated) emotions and their associated body movements.

    Detecting social signals from the face

    Get PDF
    This thesis investigates our sensitivity to social signals from the face, both in health and disease, and explores some of the methodologies employed to measure them. The first set of experiments used forced choice and free naming paradigms to investigate the interpretation of a set of facial expressions by Western and Japanese participants. Performance in the forced choice task exceeded that measured in the free naming task for both cultures, but the Japanese participants were found to be particularly poor at labelling expressions of fear and disgust. The difficulties experienced with translation and interpretation in these tasks led to the development of a psychophysical paradigm which was used to measure the signalling strength of facial expressions without the need for participants to interpret what they saw. Psychophysical tasks were also used to measure sensitivity to eye gaze direction. A 'live' and a screen-based task produced comparable thresholds and revealed that our sensitivity to these ocular signals is at least as good as Snellen acuity. Manipulations of the facial surround in the screen-based task revealed that the detection of gaze direction was facilitated by the presence of the facial surround; as such, it can be assumed that gaze discriminations are likely to be made in conjunction with other face-processing analyses. The tasks developed in these chapters were used to test two patients with bilateral amygdala damage. Patients with this brain injury have been reported to experience difficulties in the interpretation of facial and auditory signals of fear. In this thesis, their performance was found to depend on the task used to measure it. However, neither patient was found to be impaired in their ability to label fearful expressions compared to control participants. Instead, patient SE demonstrated consistently poor performance in his ability to interpret expressions of disgust. Experiments 2, 3, 4, and 5 of Chapter 3 have also been reported in Perception, 1995, Vol. 24, Supplement, p. 14: The Face as a long distance transmitter. Jenkins, J., Craven, B., & Bruce, V. Experiments 1, 2, 3, and 4 of Chapter 3 were also reported in the Technical Report of the Institute of Electronics, Information and Communication Engineers, HIP 96-39 (1997-03): Methods for detecting social signals from the face. Jenkins, J., Craven, B., Bruce, V., & Akamatsu, S. Experiments 2 and 5 of Chapter 3, and a selection of the patient studies from Chapter 6, were reported at the Experimental Psychology Society, Bristol meeting, 1996, and at the Applied Vision Association Annual Meeting, April 1996: Sensitivity to Expressive Signals from the Human Face: Psychophysical and Neuropsychological Investigations. Jenkins, J., Bruce, V., Calder, A., & Craven, B.
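    Discrimination thresholds of the kind reported above are commonly estimated by fitting a psychometric function to forced-choice data; a generic sketch with made-up illustrative numbers (not the thesis's data or method):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative data only: gaze offsets (arcmin) vs. proportion "right" responses.
offsets = np.array([-8.0, -4.0, -2.0, 0.0, 2.0, 4.0, 8.0])
p_right = np.array([0.02, 0.10, 0.30, 0.50, 0.70, 0.90, 0.98])

(mu, sigma), _ = curve_fit(psychometric, offsets, p_right, p0=(0.0, 3.0))
threshold = sigma * norm.ppf(0.84)   # offset giving 84% "right", a common criterion
```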

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Get PDF
    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden the scope of computer graphics and increase the realism of facial expressions. A new method to automatically place the bones on facial/head models, to speed up the rigging process of a human face, is also developed. To accomplish this, vertices that describe the face/head, as well as relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean position of the vertices in a group is used. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
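    The displacement-based transfer described in the final step can be sketched as follows, assuming the source model has already been transformed to share the target's topology so that vertex i corresponds to vertex i (a simplification of the dissertation's method, which additionally constrains spatial relationships):

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Transfer an expression by per-vertex displacement.

    All inputs are (n, 3) vertex arrays in one-to-one correspondence.
    The displacement of each source vertex (expression minus neutral)
    is simply added to the corresponding target vertex.
    """
    displacement = np.asarray(src_expr) - np.asarray(src_neutral)
    return np.asarray(tgt_neutral) + displacement
```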