
    Robust Modeling of Epistemic Mental States

    This work identifies and advances research challenges in relating facial features and their temporal dynamics to epistemic mental states in dyadic conversations. The epistemic states are: Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts the different epistemic states in videos. The prediction of epistemic states is boosted when the classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: correlation coefficients (CoERR) of 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.

    Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
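
    A minimal sketch of how such temporal features and emotion-change regions could be derived, assuming per-frame facial features in a numpy array. The smoothing window, the rising/falling threshold of 0.05, the random-forest regressor, and all function names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def temporal_features(frames: np.ndarray, window: int = 5) -> np.ndarray:
    """frames: (T, D) per-frame facial features. Returns (T, 2D):
    smoothed first differences plus a rising/falling/steady code per dim."""
    diff = np.gradient(frames, axis=0)
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, diff)
    # Emotion-change regions: +1 rising, -1 falling, 0 steady (assumed threshold).
    regions = np.where(smoothed > 0.05, 1, np.where(smoothed < -0.05, -1, 0))
    return np.hstack([smoothed, regions])

def fit_state_model(frames, intensity_labels):
    """Train one regressor per epistemic state on [original | temporal] features."""
    X = np.hstack([frames, temporal_features(frames)])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, intensity_labels)
    return model
```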

    A new multi-modal dataset for human affect analysis

    In this paper we present a new multi-modal dataset of spontaneous three-way human interactions. Participants were recorded in an unconstrained environment at various locations during a sequence of debates in a video-conference (Skype-style) arrangement. An additional depth modality permitted the capture of 3D information alongside the video and audio signals. The dataset comprises 16 participants and is subdivided into 6 unique sections. It was manually annotated on a continuous scale across 5 affective dimensions: arousal, valence, agreement, content, and interest. The annotation was performed by three human annotators, with the ensemble average calculated for use in the dataset. The corpus enables the analysis of human affect during conversations in a real-life scenario. We first briefly review existing affect datasets and the methodologies related to affect-dataset construction, then detail how our unique dataset was constructed.
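
    A minimal sketch of the ensemble-averaging step described above, assuming each annotator's continuous trace comes with timestamps. The 25 Hz resampling rate, the linear interpolation, and the function name are assumptions.

```python
import numpy as np

def ensemble_average(traces, times, rate_hz=25.0):
    """traces: list of 1-D rating arrays (one per annotator);
    times: matching timestamp arrays. Returns (common_t, mean_trace)."""
    t_end = min(t[-1] for t in times)
    common_t = np.arange(0.0, t_end, 1.0 / rate_hz)
    # Resample every annotator onto the shared time base, then average.
    resampled = [np.interp(common_t, t, r) for r, t in zip(traces, times)]
    return common_t, np.mean(resampled, axis=0)
```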

    Macro- and Micro-Expressions Facial Datasets: A Survey

    Automatic facial expression recognition is essential for many potential applications. A clear overview of the existing datasets investigated within the framework of facial expression recognition is therefore of paramount importance in designing and evaluating effective solutions, notably for training neural networks. In this survey, we review more than eighty facial expression datasets, covering both macro- and micro-expressions. The study focuses mostly on spontaneous and in-the-wild datasets, following the common trend in the research of considering contexts where expressions are shown spontaneously and in real settings. We also provide instances of potential applications of the investigated datasets, while highlighting their pros and cons. The survey can help researchers better understand the characteristics of the existing datasets, thus facilitating the choice of the data that best suits the particular context of their application.

    The Role of Corpus Callosum Development in Functional Connectivity and Cognitive Processing

    The corpus callosum is hypothesized to play a fundamental role in integrating information and mediating complex behaviors. Here, we demonstrate that lack of normal callosal development can lead to deficits in functional connectivity that are related to impairments in specific cognitive domains. We examined resting-state functional connectivity in individuals with agenesis of the corpus callosum (AgCC) and matched controls using magnetoencephalographic imaging (MEG-I) of coherence in the alpha (8–12 Hz), beta (12–30 Hz) and gamma (30–55 Hz) bands. Global connectivity (GC) was defined as synchronization between a region and the rest of the brain. In AgCC individuals, alpha band GC was significantly reduced in the dorsolateral pre-frontal (DLPFC), posterior parietal (PPC) and parieto-occipital (PO) cortices. No significant differences in GC were seen in either the beta or gamma bands. We also explored the hypothesis that, in AgCC, this regional reduction in functional connectivity is explained primarily by a specific reduction in interhemispheric connectivity. However, our data suggest that reduced connectivity in these regions is driven by faulty coupling in both inter- and intrahemispheric connectivity. We also assessed whether the degree of connectivity correlated with behavioral performance, focusing on cognitive measures known to be impaired in AgCC individuals. Neuropsychological measures of verbal processing speed were significantly correlated with resting-state functional connectivity of the left medial and superior temporal lobe in AgCC participants. Connectivity of the DLPFC correlated strongly with performance on the Tower of London in the AgCC cohort. These findings indicate that abnormal callosal development produces salient but selective (alpha band only) resting-state functional connectivity disruptions that correlate with cognitive impairment. Understanding the relationship between impoverished functional connectivity and cognition is a key step in identifying the neural mechanisms of language and executive dysfunction in common neurodevelopmental and psychiatric disorders where disruptions of callosal development are consistently identified.
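
    One plausible way to compute the global-connectivity (GC) measure described above: mean in-band coherence between one region's time series and every other region, using scipy's magnitude-squared coherence as a stand-in for the MEG-I coherence estimate. The 600 Hz sampling rate, the nperseg value, and the function names are assumptions; the alpha-band edges follow the abstract.

```python
import numpy as np
from scipy.signal import coherence

def global_connectivity(all_ts, region_idx, fs=600.0, band=(8.0, 12.0)):
    """all_ts: (R, T) regional time series. Returns the mean in-band
    coherence of region `region_idx` with the rest of the brain."""
    ref = all_ts[region_idx]
    scores = []
    for i, other in enumerate(all_ts):
        if i == region_idx:
            continue
        f, cxy = coherence(ref, other, fs=fs, nperseg=1024)
        in_band = (f >= band[0]) & (f <= band[1])  # e.g., alpha: 8-12 Hz
        scores.append(cxy[in_band].mean())
    return float(np.mean(scores))
```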

    Positive/Negative Emotion Detection from RGB-D upper Body Images

    The ability to identify users' mental states represents a valuable asset for improving human-computer interaction. Considering that spontaneous emotions are conveyed mostly through facial expressions and upper-body movements, we propose to use these modalities together for the purpose of negative/positive emotion classification. A method that allows the recognition of mental states from videos is proposed. Based on a dataset composed of RGB-D movies, a set of indicators of positive and negative emotion is extracted from the 2D (RGB) information. In addition, a geometric framework to model depth flows and capture human body dynamics from depth data is proposed. Because temporal changes in pixel and depth intensity characterize spontaneous-emotion datasets, the depth features are used to define the relation between changes in upper-body movements and affect. We describe a space of depth and texture information to detect the mood of people using upper-body postures and their evolution across time. Experiments performed on the Cam3D dataset have shown promising results.
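
    An illustrative sketch, not the paper's geometric framework: frame-to-frame changes in RGB texture and depth over an upper-body region are pooled into a clip descriptor and fed to a binary positive/negative classifier. The pooled statistics, the SVM settings, and the names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def clip_descriptor(rgb_frames, depth_frames):
    """rgb_frames: (T, H, W) grayscale; depth_frames: (T, H, W) depth maps,
    both cropped to the upper body. Returns a fixed-length change descriptor."""
    rgb_diff = np.abs(np.diff(rgb_frames.astype(float), axis=0))
    depth_diff = np.abs(np.diff(depth_frames.astype(float), axis=0))
    feats = []
    for d in (rgb_diff, depth_diff):
        per_frame = d.mean(axis=(1, 2))  # mean intensity change per frame
        feats += [per_frame.mean(), per_frame.std(), per_frame.max()]
    return np.array(feats)

clf = SVC(kernel="rbf")  # fit on descriptors of labelled clips: clf.fit(X, y)
```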

    Automatic detection of a driver’s complex mental states

    Automatic classification of drivers’ mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of a set of complex mental states that are relevant to driving, namely: Happy, Bothered, Concentrated and Confused. We present our video segmentation and annotation methodology for a spontaneous dataset of natural driving videos from 10 different drivers. We also present our real-time annotation tool used for labelling the dataset via an emotion perception experiment and discuss the challenges faced in obtaining the ground-truth labels. Finally, we present a methodology for automatic classification of drivers’ mental states. We compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, using facial Action Units as input features. We demonstrate that our temporal SVM approach yields better results. The dataset’s extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
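
    A minimal sketch of the temporal-SVM idea, assuming per-frame Action Unit intensities are available from an AU detector. The window length, the mean/std/slope statistics, and the SVM parameters are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.svm import SVC

STATES = ["Happy", "Bothered", "Concentrated", "Confused"]

def window_features(au_frames, win=30, step=15):
    """au_frames: (T, n_AUs) per-frame AU intensities. Yields one
    [mean | std | slope] vector per sliding window."""
    for start in range(0, len(au_frames) - win + 1, step):
        w = au_frames[start:start + win]
        slope = (w[-1] - w[0]) / win  # crude per-AU temporal trend
        yield np.concatenate([w.mean(axis=0), w.std(axis=0), slope])

clf = SVC(kernel="rbf", C=1.0)
# Train on windowed features with state labels from the annotation tool:
# clf.fit(list(window_features(au_frames)), window_labels)
```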