97 research outputs found

    CorrFeat: Correlation-based feature extraction algorithm using skin conductance and pupil diameter for emotion recognition

    To recognize emotions using less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features for participants watching the same video clip. To boost performance given limited data, we implement a learning system without a deep architecture to classify arousal and valence. Our method outperforms not only state-of-the-art approaches, but also widely used traditional and deep learning methods.
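
    A minimal sketch of what correlation-based feature extraction from these two signals could look like, assuming simple windowed Pearson correlations between synchronized PD and SC traces; the function name, window parameters, and synthetic data are illustrative assumptions, and the paper's actual CorrFeat algorithm may differ:

```python
# Sketch: windowed Pearson correlation between pupil diameter (PD) and
# skin conductance (SC). Illustrative assumption, not the paper's exact
# CorrFeat algorithm.
import numpy as np

def corr_features(pd_signal: np.ndarray, sc_signal: np.ndarray,
                  win: int = 256, step: int = 128) -> np.ndarray:
    """Return the per-window Pearson correlation between PD and SC."""
    feats = []
    for start in range(0, len(pd_signal) - win + 1, step):
        p = pd_signal[start:start + win]
        s = sc_signal[start:start + win]
        # Correlation is undefined for constant windows; fall back to 0.
        if p.std() == 0 or s.std() == 0:
            feats.append(0.0)
        else:
            feats.append(float(np.corrcoef(p, s)[0, 1]))
    return np.array(feats)

# Example with two synthetic, equally sampled signals.
rng = np.random.default_rng(0)
pd_sig = rng.normal(size=2048)
sc_sig = 0.5 * pd_sig + rng.normal(scale=0.5, size=2048)
print(corr_features(pd_sig, sc_sig).shape)
```

    The resulting correlation vector could then feed the shallow (non-deep) classifier mentioned above.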

    Psychophysiology-based QoE assessment: a survey

    We present a survey of psychophysiology-based assessment of quality of experience (QoE) in advanced multimedia technologies. We provide a classification of methods relevant to QoE and describe related psychological processes, experimental design considerations, and signal analysis techniques. We summarize multimodal techniques and discuss several important aspects of psychophysiology-based QoE assessment, including the synergies with psychophysical assessment and the need for standardized experimental design. This survey is not exhaustive but serves as a guideline for those interested in further exploring this emerging field of research.

    Affective Recommendation of Movies Based on Selected Connotative Features

    The apparent difficulty in assessing emotions elicited by movies and the undeniably high variability in subjects' emotional responses to filmic content have recently been tackled by exploring film connotative properties: the set of shooting and editing conventions that help transmit meaning to the audience. Connotation provides an intermediate representation that exploits the objectivity of audiovisual descriptors to predict the subjective emotional reaction of single users. This is done without the need to record users' physiological signals or to employ other people's highly variable emotional ratings, relying instead on the inter-subjectivity of connotative concepts and on the knowledge of users' reactions to similar stimuli. This work extends previous work by extracting audiovisual and film grammar descriptors and, driven by users' ratings of connotative properties, creates a shared framework where movie scenes are placed, compared, and recommended according to connotation. We evaluate the potential of the proposed system by asking users to assess the ability of connotation to suggest filmic content that targets their affective requests.
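
    As a rough illustration of placing and recommending scenes in a shared connotative space, the sketch below represents each scene as a vector of connotative descriptors and returns the nearest neighbours to a user's affective request; the descriptor names, values, and Euclidean distance are assumptions for illustration, not the paper's formulation:

```python
# Sketch: nearest-neighbour recommendation in a connotative feature space.
# Scene vectors and descriptor semantics are made-up placeholders.
import numpy as np

# Each scene is a vector of connotative descriptors (e.g. pace, colour
# warmth, average shot length), normalised to [0, 1].
scenes = {
    "scene_a": np.array([0.8, 0.2, 0.5]),
    "scene_b": np.array([0.1, 0.9, 0.4]),
    "scene_c": np.array([0.7, 0.3, 0.6]),
}

def recommend(query: np.ndarray, k: int = 2) -> list[str]:
    """Return the k scenes closest to the query point in connotative space."""
    dists = {name: float(np.linalg.norm(vec - query)) for name, vec in scenes.items()}
    return sorted(dists, key=dists.get)[:k]

# A user's affective request, mapped into the same connotative space.
print(recommend(np.array([0.75, 0.25, 0.55])))
```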

    FusionSense: Emotion Classification using Feature Fusion of Multimodal Data and Deep learning in a Brain-inspired Spiking Neural Network

    Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electroencephalogram (EEG), electrocardiogram (ECG), and skin temperature, along with facial expressions, voice, and posture, among others, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and we evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under Leave-One-Subject-Out (LOSO) cross-validation. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to reach this level of accuracy. In conclusion, we have demonstrated that the SNN can be successfully used to solve the emotion recognition problem with multimodal data, and we provide directions for future research using SNNs for affective computing. In addition to the good accuracy, the SNN recognition system can be incrementally trained on new data in an adaptive way and requires only one-pass training, which makes it suitable for practical and online applications. These features are not manifested in other methods for this problem.
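
    The evaluation protocol described above (feature-level fusion of per-trial modality features, followed by LOSO cross-validation) can be sketched as follows; a linear SVM stands in for the NeuCube SNN, and all feature dimensions and data are synthetic placeholders rather than MAHNOB-HCI features:

```python
# Sketch: feature-level fusion + Leave-One-Subject-Out cross-validation.
# A linear SVM is a stand-in for the NeuCube SNN classifier.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials, n_subjects = 120, 12
ecg = rng.normal(size=(n_trials, 10))    # placeholder ECG features
temp = rng.normal(size=(n_trials, 4))    # placeholder skin-temperature features
face = rng.normal(size=(n_trials, 16))   # placeholder facial-expression features
X = np.hstack([ecg, temp, face])         # feature-level fusion: concatenation
y = rng.integers(0, 2, size=n_trials)    # binary valence labels
groups = np.repeat(np.arange(n_subjects), n_trials // n_subjects)  # subject IDs

accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"Mean LOSO accuracy: {np.mean(accs):.2f}")
```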

    Implicit social cognition in autism spectrum disorder

    Implicit learning about people's states of mind relies inherently on associated emotions and affective valences, with abstract concepts such as disposition, attitude and intention being an intrinsic part of what is learned. Yet, similarly to studies targeting the typically developed population, nearly all implicit learning studies of individuals with Autism Spectrum Disorder (ASD) are limited to the non-social domain, neglecting the possibility of domain-specific implicit learning impairments. Human behaviour is variable and complex, and therefore detecting regularities in social interactions may be more challenging than in the physical world, which is largely governed by predictable laws. This project employed a novel implicit learning paradigm to evaluate implicit learning abilities in the social and non-social domains in typically developed individuals with varying levels of ASD traits and in individuals with a clinical diagnosis of ASD. The results revealed that impairments in implicit learning in ASD individuals emerge with respect to implicit social learning, with intact implicit learning abilities in the non-social domain. Deficits in implicit social learning were observed despite the participants' ability to correctly identify the facial expressions, gaze direction and identities of the characters used in the studies. These findings extended to typically developed individuals high in ASD traits, suggesting a gradient of social implicit learning ability that runs throughout the population. The relative contributions of three potential mechanisms underpinning implicit social learning were examined: (i) contingency learning per se, (ii) the contribution of other cognitive processes such as memory for facial expressions and social attention, and (iii) implicit affective tagging. The evidence suggests that individuals with ASD may be impaired in their ability to implicitly incorporate affective values into cognitive processing, supporting the implicit affective tagging hypothesis. I argue that ASD individuals use alternative strategies to comprehend others' minds, relying more on physical characteristics than on socio-emotional meaning.

    Affective Brain-Computer Interfaces


    RCEA: Real-time, Continuous Emotion Annotation for collecting precise mobile video ground truth labels

    Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled indoor study (N=12) and a later outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived to be usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation fusion method that are suitable for collecting fine-grained emotion annotations while users watch mobile videos.
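
    A hedged sketch of one possible annotation fusion step: continuous valence traces are fused with a per-time-step median across annotators and then binarised for comparison with the stimulus's intended label. The fusion rule, trace values, and thresholding are illustrative assumptions, not the paper's exact method:

```python
# Sketch: fuse continuous valence annotations across annotators and
# compare the fused label with the video's intended valence.
import numpy as np

# Simulated annotation traces: rows = annotators, columns = time steps,
# values in [-1, 1] (negative to positive valence).
rng = np.random.default_rng(1)
annotations = np.clip(0.4 + 0.3 * rng.normal(size=(20, 300)), -1, 1)

fused_trace = np.median(annotations, axis=0)            # robust fusion across annotators
predicted_valence = 1 if fused_trace.mean() > 0 else 0  # binarise over time
intended_valence = 1                                    # stimulus designed as positive
print("match:", predicted_valence == intended_valence)
```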