    Objective, computerized video-based rating of blepharospasm severity

    OBJECTIVE: To compare clinical rating scales of blepharospasm severity with involuntary eye closures measured automatically from patient videos with contemporary facial expression software. METHODS: We evaluated video recordings of a standardized clinical examination from 50 patients with blepharospasm in the Dystonia Coalition's Natural History and Biorepository study. Eye closures were measured on a frame-by-frame basis with software known as the Computer Expression Recognition Toolbox (CERT). The proportion of eye closure time was compared with 3 commonly used clinical rating scales: the Burke-Fahn-Marsden Dystonia Rating Scale, the Global Dystonia Rating Scale, and the Jankovic Rating Scale. RESULTS: CERT reliably found the face, and its eye closure measure correlated with all of the clinical severity ratings (Spearman ρ = 0.56, 0.52, and 0.56 for the Burke-Fahn-Marsden Dystonia Rating Scale, Global Dystonia Rating Scale, and Jankovic Rating Scale, respectively; all p < 0.0001). CONCLUSIONS: The results demonstrate that CERT has convergent validity with conventional clinical rating scales and can be used with video recordings to measure blepharospasm symptom severity automatically and objectively. Unlike EMG and kinematics, CERT requires only conventional video recordings and can therefore be more easily adopted for use in the clinic.
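
    For a concrete sense of the analysis, the sketch below computes a proportion-of-eye-closure measure per video and its Spearman correlation with a severity rating. The per-frame scores, the threshold, and the ratings are synthetic stand-ins for CERT output and the clinical scales; only scipy's spearmanr mirrors the reported statistic.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def eye_closure_proportion(frame_scores, threshold=0.0):
    """Fraction of frames whose eye-closure evidence exceeds the threshold."""
    return float(np.mean(np.asarray(frame_scores) > threshold))

# Hypothetical: 50 patient videos (3000 frames each) and 50 clinical ratings.
videos = [rng.normal(loc=0.02 * r, size=3000) for r in range(50)]
ratings = np.arange(50) + rng.normal(scale=5.0, size=50)

proportions = [eye_closure_proportion(v) for v in videos]
rho, p = spearmanr(proportions, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```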

    What Your Face Vlogs About: Expressions of Emotion and Big-Five Traits Impressions in YouTube

    Social video sites where people share their opinions and feelings are increasing in popularity. The face is known to reveal important aspects of human psychological traits, so understanding how facial expressions relate to personal constructs is a relevant problem in social media. We present a study of the connections between automatically extracted facial expressions of emotion and impressions of Big-Five personality traits in YouTube vlogs (i.e., video blogs). We use the Computer Expression Recognition Toolbox (CERT) system to characterize users of conversational vlogs. From CERT temporal signals corresponding to instantaneously recognized facial expression categories, we propose and derive four sets of behavioral cues that characterize face statistics and dynamics in a compact way. The cue sets are first used in a correlation analysis to assess the relevance of each facial expression of emotion with respect to Big-Five impressions obtained from crowd-observers watching vlogs, and then as features for automatic personality impression prediction. Using a dataset of 281 vloggers, the study shows that while multiple facial expression cues correlate significantly with several of the Big-Five traits, they significantly predict only Extraversion impressions, with moderate R-squared values.
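
    A minimal sketch of the cue-extraction idea, under heavy assumptions: per-frame expression signals are summarized into a few compact statistics, screened by correlation against a trait impression, and used for cross-validated prediction. The data, the single smile channel, and the three cues are placeholders, not the paper's four cue sets.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_vloggers, n_frames = 281, 1000

# Hypothetical per-frame smile evidence and a trait impression it influences.
smile = rng.random((n_vloggers, n_frames))
extraversion = 2.0 * smile.mean(axis=1) + rng.normal(scale=0.05, size=n_vloggers)

# Compact behavioral cues: mean level, variability, activation ratio.
cues = np.column_stack([
    smile.mean(axis=1),
    smile.std(axis=1),
    (smile > 0.5).mean(axis=1),
])

# Correlation screen of each cue against the trait impression.
for name, col in zip(["mean", "std", "active"], cues.T):
    r, p = pearsonr(col, extraversion)
    print(f"{name}: r = {r:.2f}, p = {p:.3g}")

# Cross-validated R^2 for predicting the impression from the cues.
scores = cross_val_score(Ridge(alpha=1.0), cues, extraversion, scoring="r2", cv=5)
print(f"mean R^2 = {scores.mean():.2f}")
```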

    Discrimination of moderate and acute drowsiness based on spontaneous facial expressions

    It is important for drowsiness detection systems to identify different levels of drowsiness and respond appropriately at each level. This study explores how to discriminate moderate from acute drowsiness by applying computer vision techniques to the human face. In our previous study, spontaneous facial expressions measured through computer vision techniques were used as an indicator to discriminate alert from acutely drowsy episodes. In this study we explore which facial muscle movements are predictive of moderate and acute drowsiness. The effect of the temporal dynamics of facial action units on prediction performance is explored by capturing those dynamics with an overcomplete bank of temporal Gabor filters. In the final system we perform feature selection to build a classifier that can discriminate moderately drowsy from acutely drowsy episodes. The system achieves a classification performance of 0.96 A′ in discriminating moderately drowsy versus acutely drowsy episodes. Moreover, the study reveals new information about the facial behavior occurring during different stages of drowsiness.
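
    The sketch below illustrates the temporal-Gabor idea: a small overcomplete bank of 1-D Gabor kernels is convolved with an action-unit time series, pooled into features, and fed to a classifier scored with A′ (taken here as ROC AUC). The kernel parameters, synthetic episodes, and logistic-regression classifier are assumptions; the study's feature selection and data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def temporal_gabor(t, sigma, freq):
    """1-D Gabor kernel: Gaussian envelope times a cosine carrier."""
    return np.exp(-t**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * t)

t = np.arange(-32, 33)
bank = [temporal_gabor(t, s, f) for s in (4, 8, 16) for f in (0.02, 0.05, 0.1)]

def gabor_features(au_signal):
    """Peak absolute filter response, one feature per (scale, frequency) pair."""
    return [np.abs(np.convolve(au_signal, k, mode="same")).max() for k in bank]

# Hypothetical episodes: acute drowsiness gets larger, slower AU excursions.
labels = rng.integers(0, 2, size=120)
X = np.array([gabor_features(rng.normal(scale=1.0 + y, size=600)) for y in labels])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("in-sample A' ~", roc_auc_score(labels, clf.predict_proba(X)[:, 1]))
```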

    Facial expression analysis and the affect space

    In this paper we present a technique for facial expression analysis and for representing the underlying emotions in the affect space. We develop a purely appearance-based approach using multi-scale Gaussian derivatives and Support Vector Machines. The technique is validated on two different databases. The system is shown to generalize well and performs better than the baseline method.
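
    A hedged sketch of a pipeline in this spirit, assuming scipy and scikit-learn: multi-scale Gaussian derivative responses are pooled into a feature vector and classified with an SVM. The scales, the crude global pooling, and the random "images" are illustrative placeholders, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def gaussian_derivative_features(img, sigmas=(1, 2, 4)):
    """First-order Gaussian derivative responses (d/dy, d/dx) pooled per scale."""
    feats = []
    for s in sigmas:
        for order in ((1, 0), (0, 1)):          # derivative along rows, columns
            resp = gaussian_filter(img, sigma=s, order=order)
            feats += [resp.mean(), resp.std()]  # crude global pooling
    return feats

# Hypothetical data: 200 64x64 grayscale "faces" with 6 expression labels.
images = rng.random((200, 64, 64))
labels = rng.integers(0, 6, size=200)
X = np.array([gaussian_derivative_features(im) for im in images])

clf = SVC(kernel="rbf").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```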

    LOMo: Latent Ordinal Model for Facial Analysis in Videos

    We study the problem of facial analysis in videos. We propose a novel weakly supervised learning method that models a video event (expression, pain, etc.) as a sequence of automatically mined, discriminative sub-events (e.g., the onset and offset phases of a smile, or brow lowering and cheek raising for pain). The proposed model is inspired by recent work on Multiple Instance Learning and latent SVM/HCRF; it extends such frameworks to approximately model the ordinal, or temporal, aspect of the videos. We obtain consistent improvements over relevant competitive baselines on four challenging and publicly available video-based facial analysis datasets for prediction of expression, clinical pain, and intent in dyadic conversations. In combination with complementary features, we report state-of-the-art results on these datasets.
    Comment: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
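
    The core ordinal idea can be sketched compactly: given per-frame responses from K sub-event detectors, score a video by the best temporally ordered placement of the K sub-events, computable with a simple dynamic program. The detectors and data below are hypothetical, and the real LOMo learns its templates with a latent-SVM-style objective rather than taking them as given.

```python
import numpy as np

def ordinal_score(responses):
    """Best ordered placement of K sub-events in a video.

    responses: (T, K) array of per-frame detector responses. A dynamic
    program finds frames t_1 <= ... <= t_K, one per sub-event, maximizing
    the summed responses.
    """
    T, K = responses.shape
    best = np.empty((T, K))
    best[:, 0] = np.maximum.accumulate(responses[:, 0])
    for k in range(1, K):
        # Place sub-event k at frame t, given the best prefix ending <= t.
        best[:, k] = np.maximum.accumulate(best[:, k - 1] + responses[:, k])
    return best[-1, -1]

rng = np.random.default_rng(4)
video = rng.normal(size=(300, 3))  # e.g. onset / apex / offset detectors
print("ordinal score:", ordinal_score(video))
```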

    Facial Expression Analysis and The PAD Space

    In this paper we present a technique for facial expression analysis and for representing the underlying emotions in the PAD (Pleasure-Arousal-Dominance) space. We develop a purely appearance-based approach using multi-scale Gaussian derivatives and Support Vector Machines. The system generalizes well and is shown to outperform the baseline method.
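
    As one illustration of placing recognizer output in a continuous affect space, the sketch below maps a posterior over basic expressions to a PAD point as a probability-weighted average of per-emotion anchors. The anchor coordinates and the mapping itself are illustrative assumptions, not the paper's calibrated representation.

```python
import numpy as np

# (pleasure, arousal, dominance) anchors per expression class -- placeholders.
PAD_ANCHORS = {
    "happiness": ( 0.8,  0.5,  0.4),
    "sadness":   (-0.6, -0.3, -0.3),
    "anger":     (-0.5,  0.6,  0.3),
    "fear":      (-0.6,  0.6, -0.4),
}

def posterior_to_pad(posterior):
    """Map a class posterior (dict class -> probability) to a PAD point."""
    pad = np.zeros(3)
    for cls, prob in posterior.items():
        pad += prob * np.array(PAD_ANCHORS[cls])
    return pad

print(posterior_to_pad({"happiness": 0.7, "anger": 0.2, "fear": 0.1}))
```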

    Predicting video-conferencing conversation outcomes based on modeling facial expression synchronization

    Effective video-conferencing conversations are heavily influenced by each speaker's facial expression. In this study, we propose a novel probabilistic model to represent the interactional synchrony of conversation partners' facial expressions in video-conferencing communication. In particular, we use a hidden Markov model (HMM) to capture the temporal properties of each speaker's facial expression sequence. Based on the assumption of mutual influence between conversation partners, we couple their HMMs as two interacting processes. Furthermore, we summarize the multiple coupled HMMs with a stochastic-process prior to discover a set of facial synchronization templates shared among the multiple conversation pairs. We validate the model by using the exhibition of these facial synchronization templates to predict the outcomes of video-conferencing conversations. The dataset includes 75 video-conferencing conversations from 150 Amazon Mechanical Turk workers in the context of a new-recruit negotiation. The results show that our proposed model achieves higher accuracy in predicting negotiation winners than support vector machines and canonical HMMs. Further analysis indicates that some synchronized nonverbal templates contribute more to predicting the negotiation outcomes.
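
    A much-simplified stand-in for the coupled-HMM construction, assuming the hmmlearn package: stacking both partners' per-frame features and fitting one joint HMM makes each hidden state a crude "synchronization template", whose per-conversation usage can serve as outcome-prediction features. The real model couples two chains and places a stochastic-process prior over the templates; everything below is synthetic.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(5)

# Synthetic pairs: two partners x 4 expression channels, concatenated per frame.
n_pairs, n_frames, n_states = 10, 200, 4
sequences = [rng.normal(size=(n_frames, 8)) for _ in range(n_pairs)]

# One joint HMM over the stacked features approximates the coupled chains.
model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
model.fit(np.concatenate(sequences), lengths=[n_frames] * n_pairs)

# Per-conversation template usage: fraction of frames in each joint state,
# usable as features for predicting the negotiation outcome.
usage = np.array([np.bincount(model.predict(s), minlength=n_states) / n_frames
                  for s in sequences])
print(usage.round(2))
```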

    Does smile intensity in photographs really predict longevity? A replication and extension of Abel and Kruger (2010)

    Abel and Kruger (2010) found that the smile intensity of professional baseball players who were active in 1952, as coded from photographs, predicted these players' longevity. In the current investigation, we sought to replicate this result and to extend the initial analyses. We analyzed (a) a sample that was almost identical to the one from Abel and Kruger's study, using the same database and inclusion criteria (N = 224), (b) a considerably larger nonoverlapping sample consisting of other players from the same cohort (N = 527), and (c) all players in the database (N = 13,530 valid cases). Like Abel and Kruger, we relied on categorical smile codings as indicators of positive affectivity, yet we supplemented these codings with subjective ratings of joy intensity and automatic codings of positive affectivity made by computer programs. In both samples and for all three indicators, we found that positive affectivity did not predict mortality once birth year was controlled as a covariate.
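
    The covariate-adjustment check translates naturally into a proportional-hazards model. The sketch below, assuming the lifelines package, fits a Cox model with smile intensity and birth year as covariates; the synthetic data are built so that birth year drives longevity and smile does not, mirroring the replication's conclusion rather than the real player database.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 500

# Synthetic players: birth year affects lifespan, smile intensity does not.
birth_year = rng.integers(1900, 1935, size=n)
smile = rng.integers(0, 3, size=n)  # 0 = none, 1 = partial, 2 = full smile
lifespan = 75 + 0.2 * (birth_year - 1900) + rng.normal(scale=8.0, size=n)

df = pd.DataFrame({
    "duration": lifespan,    # age at death
    "event": 1,              # all deaths observed (no censoring here)
    "smile": smile,
    "birth_year": birth_year,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # smile's hazard ratio ~ 1 once birth_year is included
```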