    A practical approach to fuse shape and appearance information in a Gaussian facial action estimation framework

    In many domains of computer vision, such as medical imaging and facial image analysis, it is necessary to combine shape (geometric) and appearance (texture) information. In this paper, we describe a method for combining geometric and texture-based evidence for facial actions within a Kalman filter framework. The geometric evidence is provided by a face alignment method; the texture-based evidence is provided by a set of Support Vector Machines (SVMs) for various Action Units (AUs). The proposed method is a practical solution to the problem of fusing categorical probabilities within a Kalman-filter-based state estimation framework. A first performance evaluation on upper-face AUs demonstrates the practical applicability of the proposed fusion method. Beyond facial action estimation, the method is applicable to arbitrary imaging domains.
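    The abstract does not spell out the fusion rule, so the following is only a minimal sketch of one plausible reading: a scalar Kalman filter per AU in which the SVM class probability is treated as a noisy pseudo-measurement of AU intensity, applied alongside the geometric measurement. All names and noise parameters (AUKalman, r_geom, r_svm) are hypothetical, not from the paper.

```python
# Hypothetical sketch, not the paper's implementation: one scalar
# Kalman filter per Action Unit, fusing a geometric measurement with
# an SVM probability treated as a noisy pseudo-measurement.
class AUKalman:
    def __init__(self, q=0.01, r_geom=0.05, r_svm=0.1):
        self.x = 0.0          # estimated AU intensity in [0, 1]
        self.p = 1.0          # estimate variance
        self.q = q            # process noise (frame-to-frame drift)
        self.r_geom = r_geom  # geometric measurement noise
        self.r_svm = r_svm    # SVM pseudo-measurement noise

    def predict(self):
        # Random-walk dynamics: intensity persists, uncertainty grows.
        self.p += self.q

    def update(self, z, r):
        # Standard scalar Kalman update with measurement z, noise r.
        k = self.p / (self.p + r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k

    def step(self, z_geom, p_svm):
        self.predict()
        self.update(z_geom, self.r_geom)  # shape (alignment) evidence
        self.update(p_svm, self.r_svm)    # texture (SVM) evidence
        return self.x

kf = AUKalman()
for z_geom, p_svm in [(0.2, 0.7), (0.3, 0.8), (0.25, 0.9)]:
    print(round(kf.step(z_geom, p_svm), 3))
```

    Because the two updates use independent noise terms, applying them sequentially is equivalent to a single joint update with both measurements, which is what makes this a natural way to inject categorical evidence into a continuous state estimator.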

    Facial features underlying the decoding of pain expressions

    Previous research has revealed that the face is a finely tuned medium for pain communication. Studies assessing the decoding of facial expressions of pain have revealed an interesting discrepancy, namely that, despite eyes narrowing being the most frequent facial expression accompanying pain, individuals mostly rely on brow lowering and nose wrinkling/upper lip raising to evaluate pain. The present study tests whether this discrepancy may reflect an interaction between the features coding pain expressions and the features used by observers and stored in their mental representations. Experiment 1 shows that more weight is allocated to brow lowering and nose wrinkling/upper lip raising, supporting the idea that these features are allocated more importance when mental representations of pain expressions are stored in memory. These 2 features have been associated with negative valence and with the affective dimension of pain, whereas the eyes narrowing feature has been associated more closely with the sensory dimension of pain. However, experiment 2 shows that these 2 features remain more salient than eyes narrowing, even when attention is specifically directed toward the sensory dimension of pain. Together, these results suggest that the features most saliently coded in the mental representation of facial expressions of pain may reflect a bias toward allocating more weight to the affective information encoded in the face. Perspective: This work reveals the relative importance of 3 facial features representing the core of pain expressions during pain decoding. The results show that 2 features are over-represented; this finding may potentially be linked with the estimation biases occurring when clinicians and lay persons evaluate pain based on facial appearance. (C) 2019 by the American Pain Society.

    Automatic Coding of Facial Expressions of Pain: Are We There Yet?

    Lautenbacher S, Hassan T, Seuss D, et al. Automatic Coding of Facial Expressions of Pain: Are We There Yet? Pain Research & Management. 2022;2022:6635496.
    Introduction: The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametric indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection. Objective: Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods: FaceReader7 was used for automatic AU detection. We compared the performance of FaceReader7 against manually coded AUs as the gold-standard labeling, using videos of 40 participants (20 younger, mean age 25.7 years; 20 older, mean age 52.1 years) undergoing experimentally induced heat pain. Percentages of correctly and falsely classified AUs were calculated, and we computed sensitivity/recall, precision, and overall agreement (F1) as indicators of congruency. Results: The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. The congruency was better for younger than for older faces and better for pain-indicative AUs than for other AUs. Conclusion: At the moment, automatic analyses of genuine facial expressions of pain may qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain. Copyright © 2022 Stefan Lautenbacher et al.
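    The congruency indicators named in the abstract are standard binary-classification metrics. Below is a minimal sketch of how per-AU agreement between manual (gold standard) and automatic coding could be computed from frame-level binary labels; the function name and toy data are hypothetical illustrations, not taken from the study.

```python
# Hypothetical sketch: per-AU agreement between manual FACS coding
# (gold standard) and automatic detection, given binary frame labels.
def agreement(manual, auto):
    tp = sum(m and a for m, a in zip(manual, auto))
    fp = sum((not m) and a for m, a in zip(manual, auto))
    fn = sum(m and (not a) for m, a in zip(manual, auto))
    recall = tp / (tp + fn) if tp + fn else 0.0     # sensitivity
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1

# Toy example: one AU (e.g., AU4, brow lowerer) over ten frames.
manual = [1, 1, 0, 0, 1, 1, 0, 0, 1, 0]
auto   = [1, 0, 0, 1, 1, 1, 0, 0, 0, 0]
print(agreement(manual, auto))  # (0.6, 0.75, 0.666...)
```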