
    EquiFACS: the Equine Facial Action Coding System

    Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now no methodology has been available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology for identifying and coding facial expressions on the basis of the underlying facial musculature and muscle movements. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed a FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high-quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. Reliability was high: others, including people with no previous experience of horses, were able to learn this system (EquiFACS) and consistently code behavioural sequences. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.
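    Since FACS-style studies report how consistently independent coders assign the same action units, a toy illustration may help. The sketch below is not from the paper; the codes are illustrative placeholders, and the agreement index (twice the shared codes over the total codes assigned) follows a convention commonly used in FACS reliability work.

```python
# Minimal sketch of inter-coder agreement on action-unit codes.
# The AU/AD codes below are illustrative placeholders, not a claim
# about which EquiFACS codes apply to any particular event.

def au_agreement(coder_a: set, coder_b: set) -> float:
    """Twice the number of codes both coders assigned, divided by the
    total number of codes assigned; 1.0 means perfect agreement."""
    if not coder_a and not coder_b:
        return 1.0
    return 2 * len(coder_a & coder_b) / (len(coder_a) + len(coder_b))

event_coder_1 = {"AU101", "AU145", "AD38"}
event_coder_2 = {"AU101", "AU145", "AD1"}
print(f"agreement = {au_agreement(event_coder_1, event_coder_2):.2f}")  # 0.67
```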

    Facial Action Coding System and Induced Compassion

    The present study investigated the differential effects of a brief compassion meditation compared to a brief mindfulness meditation on felt and facially expressed compassion while viewing images of suffering. Participants (N = 82) were randomly assigned to one of two meditation conditions, designed to promote either compassion and relaxation or relaxation alone. Participants then filmed themselves as they watched a two-minute compassion-inducing video that depicted suffering from around the world. These participant videos were later coded using three distinct facial coding schemes: Complex FACS, Simplified FACS, and intuition rating. Finally, participants responded to a battery of self-report items about the level of compassion and sadness they experienced during the stimulus video, their trait emotional expressivity, and demographic questions including prior experience with meditation. Results showed no difference in felt or facially expressed compassion between participants who completed the compassion meditation and those who completed the mindfulness meditation. The Complex and Simplified FACS coding schemes were highly correlated with each other, and both were only weakly associated with intuition ratings. However, none of the three facial coding schemes was even moderately associated with self-reported compassion. Intuition rating was the facial coding method most vulnerable to the influence of individual differences in gender and trait emotional expressivity. Finally, the current study found that FACS was unable to measure participant compassion; however, further research should be conducted using FACS in combination with other indicators of compassion.
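    The study's central comparisons are pairwise associations among the three coding schemes and self-report. A minimal sketch of that kind of analysis is below; this is not the study's code, and the column names and values are hypothetical.

```python
# Minimal sketch: pairwise Pearson correlations among per-participant
# compassion scores from three coding schemes plus self-report.
# All data here are made-up placeholders.
import pandas as pd

df = pd.DataFrame({
    "complex_facs":    [2.1, 3.4, 1.0, 4.2, 2.8],
    "simplified_facs": [2.0, 3.6, 1.2, 4.0, 2.9],
    "intuition":       [3.5, 2.0, 4.1, 3.0, 1.5],
    "self_report":     [4.0, 4.5, 3.8, 5.0, 4.2],
})

# Correlation matrix over all four measures.
print(df.corr(method="pearson").round(2))
```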

    Exploring the Intersection Between Facial Movement, Physiology, and Emotional Regulation: Developing a Method for Children

    For children in therapy, emotional regulation is frequently a focus of treatment. Identifying, understanding, and managing emotions are all key tasks in one's development, and developing these skills at a young age has proven beneficial for overall wellbeing. Approaching emotional regulation through a dance/movement therapy lens requires a focus not only on one's cognitive perception of emotion but also on one's physical sensation of emotion. Dr. Paul Ekman's Facial Action Coding System provides a means of systematically creating the sensation of emotion: an individual can experience a physiological response by producing universally identified facial expressions. This thesis introduces a method for children that utilizes Dr. Paul Ekman's Facial Action Coding System to elicit emotion and explore how one experiences and understands one's feelings. The intervention was tested in a session with a child in a mental health center. The case study demonstrates the therapeutic benefits of eliciting emotion and the strengths of approaching emotion from an embodied perspective.

    Investigating Spontaneous Facial Action Recognition through AAM Representations of the Face

    The Facial Action Coding System (FACS) [Ekman et al., 2002] is the leading method for measuring facial movement in behavioral science. FACS has been successfully applied to, among other things, identifying the differences between simulated and genuine pain, differences between when people are telling the truth versus lying, and differences between suicidal an
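    The title's approach, training AU detectors on Active Appearance Model (AAM) representations, follows a common recipe: fit an AAM to each frame, then feed its shape and appearance coefficients to a per-AU classifier. A minimal sketch under those assumptions is below; the data are placeholders, and this is not the paper's pipeline.

```python
# Minimal sketch: per-AU binary classification on AAM parameter vectors.
# The AAM coefficients and AU labels here are random placeholders; in
# practice they would come from an AAM fitter and FACS-coded frames.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
aam_params = rng.normal(size=(500, 30))      # 500 frames, 30 coefficients
au12_labels = rng.integers(0, 2, size=500)   # placeholder AU12 on/off labels

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, aam_params, au12_labels, cv=5)
print(f"AU12 detection accuracy: {scores.mean():.2f}")
```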

    Facial expression recognition of 3D image using facial action coding system (FACS)

    Facial expression, or mimicry, is a result of muscle motion on the face. In the comprehensive Indonesian dictionary, expression is defined as a disclosure or process of declaring, i.e. showing or expressing intentions, ideas, feelings, and so on. Facial expression is controlled by cranial nerve VII, the Nervus Facialis. Paul Ekman's research standardized expression into a movement format called the Facial Action Coding System (FACS), and in his research Ekman identified six basic expressions: happiness, sadness, surprise, fear, anger, and disgust. Anatomically, every moving muscle must contract, and when a contraction occurs the muscle expands or swells. A muscle is divided into three parts: the origo and insertio at the ends, and the belli (belly) at the midpoint, so whenever a movement occurs the belli expands or swells. The data retrieval technique records data in 3D; whenever a contraction occurs, the belli swells, and this data is processed and compared. From this processing, the maximum contraction strength is obtained and used as a reference for the magnitude of the expression made by the model. Expressions are detected using the Euclidean distance between the initial data and the movement data. The result of this research is the detection of an expression and the magnitude of the expression that occurs. In conclusion, facial expression detection can be reconstructed using FACS; for example, the happiness expression uses AU 6 and AU 12, which in this research correspond to area 1 and area 4, where the measured values are higher than in the other areas.
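    The detection step the abstract describes, comparing a neutral 3D scan with an expression scan by Euclidean distance and summarizing per region so that swelling of the muscle belly shows up as a larger value, can be sketched in a few lines. This is an assumed reconstruction, not the authors' code; the point counts and region indices are placeholders.

```python
# Minimal sketch: per-region Euclidean displacement between a neutral
# 3D scan and an expression scan. Placeholder data throughout.
import numpy as np

def region_displacement(neutral: np.ndarray, expression: np.ndarray,
                        region_idx: np.ndarray) -> float:
    """Mean Euclidean displacement of the 3D points in one region."""
    d = np.linalg.norm(expression[region_idx] - neutral[region_idx], axis=1)
    return float(d.mean())

rng = np.random.default_rng(1)
neutral = rng.normal(size=(1000, 3))          # placeholder 3D scan (x, y, z)
expression = neutral + rng.normal(scale=0.05, size=(1000, 3))

cheek_region = np.arange(100, 200)   # hypothetical indices near AU 6 / AU 12
score = region_displacement(neutral, expression, cheek_region)
print(f"AU 6 / AU 12 region activation: {score:.3f}")
```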


    Relative Facial Action Unit Detection

    This paper presents a subject-independent facial action unit (AU) detection method by introducing the concept of relative AU detection, for scenarios where the neutral face is not provided. We propose a new classification objective function which analyzes the temporal neighborhood of the current frame to decide whether the expression recently increased, decreased, or showed no change. This approach is a significant departure from the conventional absolute method, which classifies AUs from the current frame alone, without an explicit comparison with its neighboring frames. Our proposed method improves robustness to individual differences such as face scale and shape, age-related wrinkles, and transitions among expressions (e.g., lower-intensity expressions). Our experiments on three publicly available datasets (Extended Cohn-Kanade (CK+), Bosphorus, and DISFA) show significant improvement of our approach over conventional absolute techniques. Keywords: facial action coding system (FACS); relative facial action unit detection; temporal information. Comment: Accepted at the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, Colorado, USA, 201
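    The core idea, labeling the current frame by how AU intensity changed relative to its temporal neighborhood rather than by its absolute value, can be illustrated with a toy function. This is an assumed sketch of the concept, not the paper's objective function; the window size and threshold are arbitrary.

```python
# Minimal sketch: relative AU labeling from a temporal neighborhood.
# Compares the current frame's AU intensity with the mean of the
# preceding `window` frames and emits one of three relative labels.
import numpy as np

def relative_label(intensities: np.ndarray, t: int,
                   window: int = 5, eps: float = 0.1) -> str:
    """Relative AU change at frame t versus the previous `window` frames."""
    past = intensities[max(0, t - window):t]
    if past.size == 0:
        return "no change"
    delta = intensities[t] - past.mean()
    if delta > eps:
        return "increased"
    if delta < -eps:
        return "decreased"
    return "no change"

au12 = np.array([0.0, 0.1, 0.3, 0.8, 1.2, 1.1, 0.6, 0.2])
print([relative_label(au12, t) for t in range(len(au12))])
```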

    Autonomous facial expression recognition using the facial action coding system

    Magister Scientiae - MSc. The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone, and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness, and surprise), plus the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System, in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, and combinations of them are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
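    The hybrid approach combines a holistic expression classifier with an AU-based one. One simple way such a combination could work is late fusion of per-class probabilities; the sketch below is an assumption for illustration, not the thesis's actual classifier, and all numbers are made up.

```python
# Minimal sketch: late fusion of a holistic expression classifier and
# an AU-based classifier by weighted averaging of class probabilities.
import numpy as np

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

def fuse(p_holistic: np.ndarray, p_au: np.ndarray, w: float = 0.5) -> str:
    """Weighted average of two probability distributions, then argmax."""
    p = w * p_holistic + (1 - w) * p_au
    return EXPRESSIONS[int(np.argmax(p))]

p_holistic = np.array([0.05, 0.05, 0.10, 0.55, 0.05, 0.15, 0.05])
p_au       = np.array([0.02, 0.03, 0.05, 0.70, 0.05, 0.10, 0.05])
print(fuse(p_holistic, p_au))  # happiness
```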

    Facial expressions emotional recognition with NAO robot

    Human-robot interaction (HRI) research is diverse and covers a wide range of topics. All aspects of human factors and robotics are within the purview of HRI research insofar as they provide insight into developing effective tools, protocols, and systems to improve HRI. For example, a significant research effort is being devoted to designing human-robot interfaces that make it easier for people to interact with robots. HRI is an extremely active research field where new and important work is being published at a fast pace, and it is crucial for humanoid robots to understand people's emotions for efficient human-robot interaction. In this research, the robot first detects the human face using the Viola-Jones technique. Facial distance measurements are then collected using a geometry-based facial distance measurement method, and the Facial Action Coding System is used to detect movements of the measured facial points. Finally, the measured facial movements are evaluated to obtain the instantaneous emotional state of the human face; the approach has been applied specifically to the NAO humanoid robot.
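    The front of the described pipeline, Viola-Jones face detection followed by geometric distance measurements, can be sketched with OpenCV's bundled Haar cascade. The image path and the two "landmark" positions below are placeholders; in the paper's pipeline, the geometric measurement method would supply real facial points.

```python
# Minimal sketch: Viola-Jones face detection with OpenCV, then a toy
# geometric distance between two hypothetical facial points.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("face.jpg")                 # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # A landmark detector would supply facial points here; these two
    # are illustrative mouth-corner positions within the face box.
    left_corner = np.array([x + 0.3 * w, y + 0.7 * h])
    right_corner = np.array([x + 0.7 * w, y + 0.7 * h])
    mouth_width = np.linalg.norm(right_corner - left_corner)
    print(f"face at ({x}, {y}), mouth width approx {mouth_width:.1f}px")
```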