
    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images (MHIs) and a novel method based on non-rigid registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed approach achieved an average event recognition accuracy of 89.2 percent with the MHI method and 94.3 percent with the FFD method. The generalization performance of the FFD method was tested on the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
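
    A minimal sketch of the kind of motion representation described above is given below: a plain Motion History Image followed by a gradient-orientation histogram, implemented in NumPy. The authors' extended MHI, the FFD-based registration, and the GentleBoost/HMM stages are not reproduced here, and all parameter values (tau, decay, diff_thresh, n_bins) are illustrative assumptions.

```python
import numpy as np

def update_mhi(mhi, prev_gray, curr_gray, tau=1.0, decay=0.05, diff_thresh=15):
    """Update a Motion History Image: moving pixels are set to tau, the rest decay.

    Plain MHI, not the paper's extended variant; all parameters are assumptions.
    """
    motion = np.abs(curr_gray.astype(np.float32) - prev_gray.astype(np.float32)) > diff_thresh
    return np.where(motion, tau, np.maximum(mhi - decay, 0.0))

def orientation_histogram(mhi, n_bins=8):
    """Histogram of MHI gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(mhi)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

# Toy usage on a synthetic 64x64 sequence of 10 frames.
frames = np.random.default_rng(0).integers(0, 255, size=(10, 64, 64)).astype(np.float32)
mhi = np.zeros((64, 64), dtype=np.float32)
for prev_frame, curr_frame in zip(frames[:-1], frames[1:]):
    mhi = update_mhi(mhi, prev_frame, curr_frame)
descriptor = orientation_histogram(mhi)
print(descriptor.shape)  # (8,) motion-orientation descriptor for the sequence
```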

    Affective and Implicit Tagging using Facial Expressions and Electroencephalography.

    Recent years have seen an explosion of user-generated, untagged multimedia data, generating a need for efficient search and retrieval of this data. The predominant method for content-based tagging is manual annotation. Consequently, automatic tagging is currently the subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the multimedia content are analysed in order to generate descriptive tags. We approach this problem through the modalities of facial expressions and EEG signals. We investigate tag validation and affective tagging using EEG signals. The former relies on the detection of event-related potentials triggered in response to the presentation of invalid tags alongside multimedia material. We demonstrate significant differences in users' EEG responses for valid versus invalid tags, and present results towards single-trial classification. For affective tagging, we propose methodologies to map EEG signals onto the valence-arousal space and perform both binary classification and regression into this space. We apply these methods in a real-time affective recommendation system. We also investigate the analysis of facial expressions for implicit tagging. This relies on a dynamic texture representation using non-rigid registration that we first evaluate on the problem of facial action unit recognition. We present results on well-known datasets (with both posed and spontaneous expressions) comparable to the state of the art in the field. Finally, we present a multi-modal approach that fuses both modalities for affective tagging. We perform classification in the valence-arousal space based on these modalities and present results for both feature-level and decision-level fusion. We demonstrate improvement in the results when using both modalities, suggesting that the modalities contain complementary information.
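
    The feature-level versus decision-level fusion mentioned at the end of this abstract can be sketched with scikit-learn as follows; the synthetic features, the SVC classifier, and the equal fusion weights are illustrative assumptions and do not reproduce the thesis's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-trial features from two modalities and binary valence labels;
# the real feature extraction (EEG spectra, facial dynamic textures) is not shown.
rng = np.random.default_rng(0)
X_eeg = rng.normal(size=(200, 32))    # e.g. EEG band-power features per trial
X_face = rng.normal(size=(200, 24))   # e.g. facial-expression descriptors per trial
y = rng.integers(0, 2, size=200)      # binary valence label per trial

split = 150  # simple train/test split for illustration
clf_eeg = SVC(probability=True).fit(X_eeg[:split], y[:split])
clf_face = SVC(probability=True).fit(X_face[:split], y[:split])

# Feature-level fusion: concatenate modalities and train a single classifier.
X_concat = np.hstack([X_eeg, X_face])
clf_fused = SVC(probability=True).fit(X_concat[:split], y[:split])
y_pred_feature = clf_fused.predict(X_concat[split:])

# Decision-level fusion: average the per-modality class probabilities.
p_eeg = clf_eeg.predict_proba(X_eeg[split:])
p_face = clf_face.predict_proba(X_face[split:])
y_pred_decision = (0.5 * p_eeg + 0.5 * p_face).argmax(axis=1)

print((y_pred_feature == y[split:]).mean(), (y_pred_decision == y[split:]).mean())
```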

    The Correlation between EEG Signals as Measured in Different Positions on Scalp Varying with Distance

    Biomedical signals such as the electroencephalogram (EEG) are time-varying signals, and different electrode positions yield different time-varying signals. There may be a correlation between these signals, and it is likely that this correlation is related to the actual positions of the electrodes. In this paper, we show that the measured correlation is related to the physical distance between electrodes. This finding is independent of participants and brain hemisphere. Our results indicate that the EEG signal is not transmitted via neurons but through white matter in the brain.
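
    The underlying analysis, relating channel-to-channel signal correlation to inter-electrode distance, might be sketched as below; the montage, electrode coordinates, and signals are placeholders rather than the paper's recordings.

```python
import numpy as np

# Hypothetical data: an n_channels x n_samples EEG matrix and 3-D electrode positions;
# the actual montage and recordings from the paper are not reproduced here.
rng = np.random.default_rng(0)
n_channels, n_samples = 8, 5000
eeg = rng.normal(size=(n_channels, n_samples))
positions = rng.normal(size=(n_channels, 3))  # electrode positions on the scalp (arbitrary units)

corr = np.corrcoef(eeg)  # pairwise Pearson correlation between channel signals

# Collect (distance, correlation) pairs for every electrode pair.
pairs = []
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        dist = np.linalg.norm(positions[i] - positions[j])
        pairs.append((dist, corr[i, j]))

dists, corrs = map(np.array, zip(*pairs))
# How strongly inter-electrode distance and signal correlation co-vary
# (the paper's claim is that these are systematically related).
print(np.corrcoef(dists, corrs)[0, 1])
```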

    Intelligent wristbands for the automatic detection of emotional states for the elderly

    Over the last few years, research on computational intelligence has been conducted to detect people's emotional states. This paper proposes the use of intelligent wristbands for the automatic detection of emotional states, in order to develop an application that monitors older people and helps improve their quality of life. The paper describes the hardware design and the cognitive module that enables the recognition of emotional states. The proposed wristband also integrates a camera that improves emotion detection.
    Funding: Programa Operacional Temático Factores de Competitividade (POCI-01-0145-); MINECO/FEDER TIN2015-65515-C4-1-R and the FPI grant AP2013-01276 awarded to Jaime-Andres Rincon. This work is supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e Tecnologia within the project UID/CEC/00319/2013 and the Post-Doc scholarship SFRH/BPD/102696/201

    Exploring EEG Features in Cross-Subject Emotion Recognition

    Recognizing cross-subject emotions based on brain imaging data, e.g., EEG, has always been difficult due to the poor generalizability of features across subjects. Thus, systematically exploring the ability of different EEG features to identify emotional information across subjects is crucial. Prior related work has explored this question based only on one or two kinds of features, and different findings and conclusions have been presented. In this work, we aim at a more comprehensive investigation of this question with a wider range of feature types, including 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) approach and the "leave-one-subject-out" verification strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies of 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset were reached. Furthermore, using manual feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including different channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth parameter of mobility in the beta rhythm achieved the best mean recognition accuracy compared to the other features. Through a pilot correlation analysis, we further examined the highly correlated features, for a better understanding of the implications hidden in those features that allow for differentiating cross-subject emotions. Various remarkable observations have been made. The results of this paper validate the possibility of exploring robust EEG features for cross-subject emotion recognition.
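
    The evaluation protocol described above (a Hjorth-mobility feature, an SVM classifier, and "leave-one-subject-out" validation) can be sketched with scikit-learn as follows; the synthetic trials, labels, and assumed beta-band filtering are placeholders, and the actual DEAP/SEED preprocessing is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def hjorth_mobility(x):
    """Hjorth mobility of a 1-D signal: sqrt(var(dx/dt) / var(x))."""
    dx = np.diff(x)
    return np.sqrt(np.var(dx) / (np.var(x) + 1e-12))

# Hypothetical setup: per-trial, beta-band-filtered EEG segments for several subjects.
rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_channels, n_samples = 5, 40, 14, 512
X, y, groups = [], [], []
for subject in range(n_subjects):
    for _ in range(trials_per_subject):
        trial = rng.normal(size=(n_channels, n_samples))  # beta-filtered trial (assumed)
        X.append([hjorth_mobility(ch) for ch in trial])   # one mobility value per channel
        y.append(rng.integers(0, 2))                      # emotion label
        groups.append(subject)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# "Leave-one-subject-out": each fold holds out all trials of one subject.
scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())  # mean cross-subject recognition accuracy
```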

    Extracting Relevance and Affect Information from Physiological Text Annotation

    We present physiological text annotation, which refers to the practice of associating physiological responses with text content in order to infer characteristics of the user's information needs and affective responses. Text annotation is a laborious task, and implicit feedback has been studied as a way to collect annotations without requiring any explicit action from the user. Previous work has explored behavioral signals, such as clicks or dwell time, to automatically infer annotations, while physiological signals have mostly been explored for image or video content. We report on two experiments in which physiological text annotation is studied, first to 1) indicate perceived relevance and then to 2) indicate affective responses of the users. The first experiment tackles the user's perception of the relevance of an information item, which is fundamental to revealing the user's information needs. The second experiment is then aimed at revealing the user's affective responses towards a relevant text document. Results show that physiological user signals are associated with relevance and affect. In particular, electrodermal activity (EDA) was found to differ when users read relevant content versus irrelevant content, and was found to be lower when reading texts with negative emotional content than when reading texts with neutral content. Together, the experiments show that physiological text annotation can provide valuable implicit inputs for personalized systems. We discuss how our findings help design personalized systems that can annotate digital content using human physiology without the need for any explicit user interaction.
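
    The relevant-versus-irrelevant EDA comparison reported above could, in its simplest form, be checked with an independent-samples t-test, as sketched below; the EDA values are synthetic placeholders, not the study's measurements, and the study's own statistical analysis may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical per-document mean EDA values for two reading conditions;
# the experiment's actual EDA features and preprocessing are not reproduced here.
rng = np.random.default_rng(0)
eda_relevant = rng.normal(loc=0.42, scale=0.1, size=30)    # mean EDA while reading relevant texts
eda_irrelevant = rng.normal(loc=0.35, scale=0.1, size=30)  # mean EDA while reading irrelevant texts

# Independent-samples t-test on the condition means, mirroring the kind of
# relevant-vs-irrelevant comparison described in the abstract.
t_stat, p_value = stats.ttest_ind(eda_relevant, eda_irrelevant)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```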

    EEG correlates of different emotional states elicited during watching music videos

    Studying emotions has become increasingly popular in various research fields. Researchers across the globe have studied various tools to implicitly assess emotions and affective states of people. Human-computer interface systems in particular can benefit from such an implicit emotion-evaluation module, which can help them determine their users' affective states and act accordingly. Brain electrical activity can be considered an appropriate candidate for extracting emotion-related cues, but research in this area is still in its infancy. In this paper, we report the results of analyzing the electroencephalogram (EEG) for assessing emotions elicited while watching various pre-selected emotional music video clips. More precisely, in-depth results of both subject-dependent and subject-independent correlation analyses between time-domain and frequency-domain features of the EEG signal and subjects' self-assessed emotions are presented and discussed.
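
    A minimal sketch of the kind of frequency-domain correlation analysis described above is shown below: alpha-band power per trial computed with Welch's method and correlated with self-assessed ratings. The data, band choice, sampling rate, and rating scale are assumptions for illustration, not the study's actual setup.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

def band_power(signal, fs, fmin, fmax):
    """Average power of a 1-D signal in [fmin, fmax] Hz using Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

# Hypothetical data: one EEG channel per video trial plus self-assessed arousal ratings;
# the actual recordings and rating scales from the study are not reproduced here.
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 128, 40, 128 * 60
trials = rng.normal(size=(n_trials, n_samples))
arousal_ratings = rng.uniform(1, 9, size=n_trials)

# Frequency-domain feature: alpha-band (8-13 Hz) power per trial.
alpha_power = np.array([band_power(tr, fs, 8.0, 13.0) for tr in trials])

# Correlation between the EEG feature and the self-assessed ratings,
# in the spirit of the correlation analysis described above.
rho, p = spearmanr(alpha_power, arousal_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```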