
    Automatic emotional state detection using facial expression dynamic in videos

    In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. Firstly, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expressions of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.
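
    A minimal sketch of such a pipeline is given below, under the assumption that dense optical-flow statistics serve as the dynamic motion features and that a support vector regressor predicts a continuous emotional state; the feature choices and function names are illustrative, not the authors' exact method.

    import cv2
    import numpy as np
    from sklearn.svm import SVR

    def motion_features(video_path, bins=16):
        """Summarise per-frame dense optical flow into one clip-level feature vector."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        hists = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            hist, _ = np.histogram(mag, bins=bins, range=(0, 20), density=True)
            hists.append(hist)
            prev = gray
        cap.release()
        hists = np.array(hists)
        # Temporal pooling: mean and standard deviation of the per-frame motion histograms.
        return np.concatenate([hists.mean(axis=0), hists.std(axis=0)])

    # X: feature vectors for annotated training clips, y: valence/arousal or emotion scores.
    # regressor = SVR(kernel="rbf").fit(X, y)
    # prediction = regressor.predict([motion_features("clip.avi")])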

    Dynamic Facial Emotion Recognition Oriented to HCI Applications

    As part of a multimodal animated interface previously presented in [38], in this paper we describe a method for dynamic recognition of displayed facial emotions on low-resolution streaming images. First, we address the detection of Action Units of the Facial Action Coding System using Active Shape Models and Gabor filters. Normalized outputs of the Action Unit recognition step are then used as inputs for a neural network which is based on a real cognitive systems architecture and consists of a habituation network plus a competitive network. Both the competitive and the habituation layers use differential equations, thus taking into account the dynamic information of facial expressions through time. Experimental results carried out on live video sequences and on the Cohn-Kanade face database show that the proposed method provides high recognition hit rates. This work was supported by the Junta de Castilla y León (research project support programme, refs. VA036U14 and VA013A12-2) and the Ministerio de Economía, Industria y Competitividad (grant DPI2014-56500-R).
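
    As a rough illustration of the dynamic part of the pipeline, the sketch below feeds normalised Action Unit scores through a habituation layer and a competitive (winner-take-all) layer, both integrated over time with Euler steps; the specific differential equations and parameters are placeholders rather than the authors' formulation.

    import numpy as np

    def habituation_step(z, x, tau=0.1, alpha=1.0, dt=0.05):
        # Habituation dynamics (placeholder form): dz/dt = tau * (1 - z) - alpha * z * x
        return z + dt * (tau * (1.0 - z) - alpha * z * x)

    def competitive_step(y, inp, excite=1.2, inhibit=0.8, dt=0.05):
        # Shunting competition: each unit is excited by its own input and
        # inhibited by the activity of the other units (soft winner-take-all).
        total = y.sum()
        dy = -y + excite * inp * y - inhibit * (total - y) * y
        return np.clip(y + dt * dy, 1e-3, 1.0)

    def recognise(au_stream, W):
        # au_stream: (T, n_AUs) normalised AU activations; W: (n_emotions, n_AUs) weights.
        z = np.ones(au_stream.shape[1])           # habituation state per AU
        y = np.full(W.shape[0], 0.1)              # emotion-layer activity
        for x in au_stream:
            z = habituation_step(z, x)
            y = competitive_step(y, W @ (x * z))  # habituated input drives the competition
        return int(np.argmax(y))                  # index of the winning emotion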

    Embedded Face Detection and Facial Expression Recognition

    Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention on this research domain are: 1) there is a strong need for face recognition systems given the widespread demand for security, and 2) face recognition is more user friendly and faster since it requires almost nothing from the user. The system is based on an ARM Cortex-A8 development board and includes porting the Linux operating system, developing the drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. In this paper, the face detection system uses the AdaBoost algorithm to detect human faces in the frames captured by the camera. The paper discusses the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online test part. Active shape models (ASM) for facial feature point detection, optical flow for face tracking, and support vector machines (SVM) for classification are applied in this research.
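
    The detection step described above (Haar-like features with the Viola-Jones/AdaBoost cascade) can be sketched with OpenCV's pretrained frontal-face cascade as below; the embedded ARM Cortex-A8 port follows the same API once OpenCV is built for the board, and the camera index and detection parameters here are illustrative.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)                      # capture one frame from the camera
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(30, 30))
        for (x, y, w, h) in faces:                 # one bounding box per detected face
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cap.release()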

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
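
    The uni- versus multimodal comparison can be sketched as below, assuming per-trial feature vectors for posture, visual attention and emotion have already been extracted; the synthetic placeholder data, the concatenation-based fusion and the random-forest classifier are assumptions, not necessarily the authors' pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 120                                     # number of observed trials (placeholder)
    posture = rng.normal(size=(n, 10))          # body-posture features (placeholder)
    gaze = rng.normal(size=(n, 8))              # visual-attention features (placeholder)
    emotion = rng.normal(size=(n, 6))           # facial-emotion features (placeholder)
    expertise = rng.integers(0, 2, size=n)      # expert vs. intermediate label (placeholder)

    def score(X, y):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        return cross_val_score(clf, X, y, cv=5).mean()

    print("gaze only  :", score(gaze, expertise))
    print("multimodal :", score(np.hstack([posture, gaze, emotion]), expertise))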

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim to identify different research avenues that are being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Predicting Ad Liking and Purchase Intent: Large-Scale Analysis of Facial Responses to Ads

    Billions of online video ads are viewed every month. We present a large-scale analysis of facial responses to video content measured over the Internet and their relationship to marketing effectiveness. We collected over 12,000 facial responses from 1,223 people to 170 ads from a range of markets and product categories. The facial responses were automatically coded frame-by-frame. Collection and coding of these 3.7 million frames would not have been feasible with traditional research methods. We show that detected expressions are sparse but that aggregate responses reveal rich emotion trajectories. By modeling the relationship between the facial responses and ad effectiveness, we show that ad liking can be predicted accurately (ROC AUC = 0.85) from webcam facial responses. Furthermore, the prediction of a change in purchase intent is possible (ROC AUC = 0.78). Ads that elicit expressions, particularly positive expressions, are more likely to be liked. Driving purchase intent is more complex than just making viewers smile: peak positive responses that are immediately preceded by a brand appearance are more likely to be effective. The results presented here demonstrate a reliable and generalizable system for predicting ad effectiveness automatically from facial responses without a need to elicit self-report responses from the viewers. In addition, we gain insight into the structure of effective ads. This work was supported by the MIT Media Lab Consortium and NEC Corporation.
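
    The modelling step can be sketched as follows: aggregate the frame-by-frame expression codes of each viewing into a few summary statistics and evaluate a liking classifier with ROC AUC; the aggregation features and the logistic-regression model below are assumptions, not the authors' exact pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def aggregate(frame_probs):
        """Turn a per-frame series of smile probabilities into one feature vector."""
        p = np.asarray(frame_probs)
        return np.array([p.mean(), p.max(), p.std(),
                         (p > 0.5).mean()])        # fraction of 'smiling' frames

    # responses: list of per-viewing smile-probability series; liked: 0/1 ad-liking labels.
    # X = np.vstack([aggregate(r) for r in responses])
    # X_tr, X_te, y_tr, y_te = train_test_split(X, liked, test_size=0.3, random_state=0)
    # clf = LogisticRegression().fit(X_tr, y_tr)
    # print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))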

    Facial Emotional Classifier For Natural Interaction

    The recognition of emotional information is a key step toward giving computers the ability to interact more naturally and intelligently with people. We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of a set of characteristic facial points (that are part of the MPEG-4 feature points) to extract relevant emotional information (essentially five distances, the presence of wrinkles in the eyebrow region and the mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a database of more than 1500 images. The system has been integrated in a 3D engine for managing virtual characters, allowing the exploration of new forms of natural interaction.
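
    A minimal sketch of classifying an expression from a handful of distances between characteristic facial points is given below; the specific landmark names, normalisation and nearest-prototype decision are placeholders, whereas the paper derives its rules from MPEG-4 feature points tuned on more than 1500 images.

    import numpy as np

    EMOTIONS = ["neutral", "joy", "sadness", "anger", "fear", "disgust", "surprise"]

    def distances(points):
        """Five illustrative distances from a dict of 2D facial landmarks."""
        d = lambda a, b: np.linalg.norm(np.subtract(points[a], points[b]))
        face = d("left_eye", "right_eye")           # normalising reference length
        return np.array([d("brow_l", "eye_l"),      # eyebrow raise
                         d("eye_top", "eye_bot"),   # eye opening
                         d("mouth_l", "mouth_r"),   # mouth width
                         d("lip_top", "lip_bot"),   # mouth opening
                         d("brow_l", "brow_r")]) / face

    def classify(points, prototypes):
        """Nearest-prototype decision over the distance vector."""
        v = distances(points)
        scores = {e: np.linalg.norm(v - p) for e, p in prototypes.items()}
        return min(scores, key=scores.get)

    # prototypes: {emotion: mean distance vector learned from labelled example images}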