15 research outputs found

    A dataset of continuous affect annotations and physiological signals for emotion analysis

    From a computational viewpoint, emotions remain intriguingly hard to understand. Direct, real-time inspection of emotions in realistic settings is not possible in research, so discrete, indirect, post-hoc recordings are the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset addresses this by focusing on real-time, continuous annotation of emotions as experienced by participants while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allows simultaneous reporting of valence and arousal, which are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were made: ECG, BVP, EMG (3x), GSR (or EDA), respiration, and skin temperature. The dataset consists of the physiological and annotation data from 30 participants (15 male, 15 female) who watched several validated video stimuli. The validity of the emotion induction, as reflected in the annotation and physiological data, is also presented. Comment: Dataset available at: https://rmc.dlr.de/download/CASE_dataset/CASE_dataset.zi
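    As a rough illustration of how such a dataset is typically used, the slower joystick annotation stream can be interpolated onto the physiology timeline so that every sample carries a (valence, arousal) pair. The sketch below assumes hypothetical CSV layouts and column names; consult the dataset's documentation for the real ones.

```python
# A minimal sketch of aligning CASE-style continuous annotations with
# physiological signals. File names and column names are assumptions,
# not the dataset's documented layout.
import numpy as np
import pandas as pd

physio = pd.read_csv("sub_1_physio.csv")      # hypothetical: 1000 Hz signals + "time" column (s)
annot = pd.read_csv("sub_1_annotations.csv")  # hypothetical: joystick valence/arousal + "time" column (s)

# Interpolate the slower annotation stream onto the physiology timeline,
# giving one (valence, arousal) pair per physiological sample.
physio["valence"] = np.interp(physio["time"], annot["time"], annot["valence"])
physio["arousal"] = np.interp(physio["time"], annot["time"], annot["arousal"])
```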

    Comparative Analysis of Electrodermal Activity Decomposition Methods in Emotion Detection Using Machine Learning

    Electrodermal activity (EDA) reflects sympathetic nervous system activity through sweating-related changes in skin conductance. Decomposition analysis is used to deconvolve EDA into slowly varying tonic activity and fast-varying phasic activity. In this study, we used machine learning models to compare the performance of two EDA decomposition algorithms in detecting emotions elicited by amusing, boring, relaxing, and scary stimuli. The EDA data considered in this study were obtained from the publicly available Continuously Annotated Signals of Emotion (CASE) dataset. We first pre-processed and deconvolved the EDA data into tonic and phasic components using two decomposition methods, cvxEDA and BayesianEDA. Next, 12 time-domain features were extracted from the phasic component of the EDA data. Finally, we applied machine learning algorithms, namely logistic regression (LR) and support vector machine (SVM), to evaluate the performance of each decomposition method. Our results imply that the BayesianEDA decomposition method outperforms cvxEDA.
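    The described pipeline (decompose, extract phasic features, classify) lends itself to a short sketch. The version below uses NeuroKit2's cvxEDA implementation and scikit-learn; the twelve features shown are plausible time-domain choices, not necessarily the paper's exact set, and the data-loading step is omitted.

```python
# An illustrative version of the pipeline, not the authors' code:
# cvxEDA decomposition via NeuroKit2, simple time-domain features,
# and an LR / SVM comparison with scikit-learn.
import numpy as np
import neurokit2 as nk
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def phasic_features(eda, fs=1000):
    """Twelve illustrative time-domain features of the phasic EDA component."""
    decomposed = nk.eda_phasic(eda, sampling_rate=fs, method="cvxeda")
    phasic = decomposed["EDA_Phasic"].to_numpy()
    diff = np.diff(phasic)
    return np.array([
        phasic.mean(), phasic.std(), phasic.max(), phasic.min(),
        np.percentile(phasic, 25), np.percentile(phasic, 75),
        diff.mean(), diff.std(),                    # first-derivative statistics
        np.trapz(np.abs(phasic)) / fs,              # area under the curve
        (phasic > phasic.std()).mean(),             # fraction above 1 SD
        np.abs(diff).sum(),                         # total variation
        len(nk.signal_findpeaks(phasic)["Peaks"]),  # SCR peak count
    ])

# Hypothetical usage, with `segments` holding per-stimulus EDA arrays and
# `labels` holding the emotion categories:
# X = np.stack([phasic_features(seg) for seg in segments]); y = labels
# for clf in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
#     print(clf, cross_val_score(clf, X, y, cv=5).mean())
```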

    Blunted cardiovascular reactivity may serve as an index of psychological task disengagement in the motivated performance situations.

    Challenge and threat models predict that once individuals become engaged with a performance, their evaluations and cardiovascular responses determine further outcomes. Although the role of challenge and threat in predicting performance has been tested extensively, few studies have focused on task engagement. We aimed to investigate task engagement in performance at both the psychological and physiological levels. We accounted for physiological task engagement by examining blunted cardiovascular reactivity, the third possible cardiovascular response to performance, in addition to the challenge/threat responses. We expected low psychological task engagement to be related to blunted cardiovascular reactivity during performance. Gamers (N = 241) completed five matches of the soccer video game FIFA 19. We recorded psychological task engagement, heart rate reactivity, and the difference between goals scored and conceded. Lower psychological task engagement was related to blunted heart rate reactivity during performance. Furthermore, poorer performance in the previous game was related to increased task engagement in the subsequent match. The findings extend the existing literature by providing initial evidence that blunted cardiovascular reactivity may serve as an index of low task engagement.
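    For readers unfamiliar with the measure: cardiovascular reactivity is conventionally scored as the task-minus-baseline change, so a near-zero change under a motivated task reads as "blunted". A toy example with hypothetical values:

```python
# A minimal sketch of the conventional reactivity score: mean task-period
# heart rate minus mean baseline heart rate (all values are hypothetical).
import numpy as np

baseline_hr = np.array([72.1, 71.8, 72.4])  # resting-period samples (bpm)
task_hr = np.array([74.0, 73.2, 73.5])      # in-game samples (bpm)

reactivity = task_hr.mean() - baseline_hr.mean()
# A value near zero (or negative) during a motivated task would be read
# as "blunted" reactivity, the pattern the paper links to low engagement.
print(f"HR reactivity: {reactivity:+.2f} bpm")
```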

    Beyond mobile apps: a survey of technologies for mental well-being

    Mental health problems are on the rise globally and strain national health systems worldwide. Mental disorders are closely associated with fear of stigma, structural barriers such as financial burden, and a lack of available services and resources, which often prevent the delivery of frequent clinical advice and monitoring. Technologies for mental well-being exhibit a range of attractive properties that facilitate the delivery of state-of-the-art clinical monitoring. This review article provides an overview of traditional techniques followed by their technological alternatives: sensing devices, behaviour-change tools, and feedback interfaces. The challenges presented by these technologies are then discussed, with data collection, privacy, and battery life being some of the key issues that need to be considered carefully for the successful deployment of mental health toolkits. Finally, the opportunities this growing research area presents are discussed, including the use of portable tangible interfaces combining sensing and feedback technologies. Capitalising on the data these ubiquitous devices can record, state-of-the-art machine learning algorithms can support the development of robust clinical decision-support tools for diagnosis and for improving the delivery of mental well-being care in real time.

    RCEA: Real-time, Continuous Emotion Annotation for collecting precise mobile video ground truth labels

    Collecting accurate and precise emotion ground-truth labels for mobile video watching is essential for meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports or allow real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching and validated its usability and reliability in a controlled indoor study (N=12) and a later outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived as usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation-fusion method suitable for collecting fine-grained emotion annotations while users watch mobile videos.
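    One plausible way to arrive at such a classification error is to reduce each continuous annotation trace to a high/low decision and compare it with the stimulus's intended attribute. The sketch below illustrates that idea only; it is not necessarily the paper's actual fusion method.

```python
# A sketch of one plausible scoring scheme: reduce each continuous trace
# to a high/low decision via its mean, then count mismatches against the
# intended stimulus attributes. All example data are hypothetical.
import numpy as np

def annotation_error(traces, intended, neutral=0.0):
    """traces: list of 1-D annotation arrays (e.g., valence over time);
    intended: sequence of +1 (high) / -1 (low) stimulus attributes."""
    decided = np.array([np.sign(np.mean(t) - neutral) for t in traces])
    return float(np.mean(decided != np.asarray(intended)))

# Three hypothetical stimuli, two of which were annotated as intended.
traces = [np.random.normal(0.5, 0.2, 100),   # positive-valence video
          np.random.normal(-0.4, 0.2, 100),  # negative-valence video
          np.random.normal(-0.1, 0.2, 100)]  # positive video, annotated low
print(annotation_error(traces, intended=[+1, -1, +1]))
```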

    CorrNet: Fine-grained emotion recognition for video watching using wearable physiological sensors

    Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies with laboratory-grade and wearable sensors are comparable, even at low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
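    The two feature families the abstract names can be illustrated compactly: statistics computed inside each fixed-length instance, and correlations between instances of the same stimulus. The sketch below mirrors the idea only; the published CorrNet architecture is more involved.

```python
# A sketch of the two feature families: intra-instance statistics plus
# Pearson correlations between one instance and the other instances of
# the same stimulus. Signal, rate, and window length are hypothetical.
import numpy as np

def segment(signal, fs, seconds=2):
    """Split a 1-D signal into fixed-length instances (1-4 s worked best)."""
    n = int(fs * seconds)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def intra_features(inst):
    return np.array([inst.mean(), inst.std(), inst.min(), inst.max()])

def correlation_features(instances, k):
    """Correlation of instance k with every other instance of the stimulus."""
    return np.array([np.corrcoef(instances[k], other)[0, 1]
                     for j, other in enumerate(instances) if j != k])

eda = np.random.randn(64 * 60)  # hypothetical 60 s of wearable EDA at 64 Hz
instances = segment(eda, fs=64)
features = np.concatenate([intra_features(instances[0]),
                           correlation_features(instances, k=0)])
```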

    Una primera aproximación hacia la computación afectiva en entornos de realidad virtual multi-modales e interactivos [A first approach to affective computing in multimodal, interactive virtual reality environments]

    Affective computing is a field of computing with many applications still to be developed and exploited. In this work we apply affective computing to interactive Virtual Reality (VR) environments to study users' emotional responses to different stimuli. First, a dataset was created by collecting physiological data from different users exposed to different emotion-eliciting stimuli, together with their responses to "Self-Assessment Manikin" questionnaires used to infer the emotions felt. The data were collected with a prototype built from an Arduino board and several connected, programmed sensors. This dataset was subsequently used to create a regression model of the emotions felt by each user, based on an LSTM (Long Short-Term Memory) neural network, and the model was applied to observe the emotional responses to the stimuli in the interactive VR scenarios that were prepared. The stimuli compared in the VR scenarios include computer-generated versus human audio, different types of subtitles, and text versus audio descriptions of points of interest. Regarding the results, the selected stimuli did not elicit a strong emotional response from the users. On the other hand, the regression model gave acceptable results when estimating users' emotional responses from their physiological metrics. This preliminary study is expected to open the door to a new line of research in this area, to be further developed in a PhD thesis. The results were not conclusive owing to limited means: a small number of volunteers, the low quality of the sensors available for collecting the metrics, and time constraints. Rus Arance, JAD. (2021). Una primera aproximación hacia la computación afectiva en entornos de realidad virtual multi-modales e interactivos. Universitat Politècnica de València. http://hdl.handle.net/10251/178155 (TFG)
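    The thesis's modelling step, sequence-to-affect regression with an LSTM, can be sketched briefly. The following is a minimal, hypothetical PyTorch version: channel count, sequence length, layer sizes, and the single-output target are all assumptions, not the thesis's actual configuration.

```python
# A minimal PyTorch sketch of an LSTM regressor from physiological
# sequences to a continuous emotion score (e.g., SAM valence).
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_channels=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # regress one affect dimension

    def forward(self, x):                 # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)

model = EmotionLSTM()
x = torch.randn(8, 200, 4)               # hypothetical batch of sequences
loss = nn.functional.mse_loss(model(x), torch.randn(8))
loss.backward()
```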
