
    Emotion Recognition With Temporarily Localized 'Emotional Events' in Naturalistic Context

    Emotion recognition using EEG signals is an emerging area of research due to its broad applicability in brain-computer interfaces (BCI). Emotional feelings are hard to stimulate in the lab: emotions do not last long, yet they need enough context to be perceived and felt. However, most EEG emotion databases either suffer from emotionally irrelevant details (due to prolonged stimulus duration) or provide so little context that it is doubtful the stimulus elicits any emotion at all. We tried to reduce the impact of this trade-off by designing an experiment in which participants are free to report their emotional feelings while watching the emotional stimulus. We call these reported emotional feelings "Emotional Events" in our Dataset on Emotion with Naturalistic Stimuli (DENS). We used EEG signals to classify emotional events on different combinations of the Valence (V) and Arousal (A) dimensions and compared the results with the benchmark DEAP and SEED datasets. Features are extracted with the short-time Fourier transform (STFT) and fed into a classification model consisting of hybrid CNN-LSTM layers. We achieved significantly higher accuracy with our data than with the DEAP and SEED data. We conclude that precise information about emotional feelings improves classification accuracy compared to long-duration EEG signals, which may be contaminated by mind-wandering.
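    The STFT feature step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's pipeline: the sampling rate, window length, and hop size below are assumptions chosen for the example, and the resulting magnitude frames would be the input to a CNN-LSTM classifier.

    ```python
    import numpy as np

    def stft_features(signal, win_len=64, hop=32):
        """Magnitude STFT of a 1-D EEG channel.

        Frames the signal with a Hann window and takes the real FFT of
        each frame, returning a (frames x frequency-bins) array.
        """
        window = np.hanning(win_len)
        n_frames = 1 + (len(signal) - win_len) // hop
        frames = np.stack([signal[i * hop : i * hop + win_len] * window
                           for i in range(n_frames)])
        return np.abs(np.fft.rfft(frames, axis=1))

    # One second of synthetic 10 Hz alpha-band activity at 128 Hz.
    fs = 128
    t = np.arange(fs) / fs
    eeg = np.sin(2 * np.pi * 10 * t)
    feats = stft_features(eeg)
    print(feats.shape)  # (3, 33): 3 frames, 33 frequency bins
    ```

    With a 64-sample window at 128 Hz, each bin spans 2 Hz, so the synthetic 10 Hz oscillation peaks at bin 5 in every frame.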

    Affective Brain-Computer Interfaces


    Using minimal number of electrodes for emotion detection using noisy EEG data

    Emotion is an important aspect of interaction between humans. It is fundamental to human experience and rational decision-making, and there is great interest in detecting emotions automatically. A number of techniques have been employed for this purpose using channels such as voice and facial expressions. However, these channels are not very accurate because they can be affected by users' intentions. Other techniques use physiological signals along with electroencephalography (EEG) for emotion detection. However, these approaches are not very practical for real-time applications because they ask participants to minimize motion and facial muscle movement, reject EEG data contaminated with artifacts, and rely on a large number of electrodes. In this thesis, we propose an approach that analyzes highly contaminated EEG data produced by a new emotion elicitation technique. We also use a feature selection mechanism, grounded in neuroscience findings, to extract features relevant to the emotion detection task. We reached an average accuracy of 51% for joy, 53% for anger, 58% for fear, and 61% for sadness. We also applied our approach to smaller numbers of electrodes, ranging from 4 up to 25, and reached an average classification accuracy of 33% for joy, 38% for anger, 33% for fear, and 37.5% for sadness using only 4 or 6 electrodes.
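    A common way to build neuroscience-informed features from a handful of electrodes is to compute spectral power in standard frequency bands. The sketch below illustrates that idea in numpy; the band definitions and sampling rate are assumptions for the example, not the thesis's actual feature set.

    ```python
    import numpy as np

    # Frequency bands commonly linked to affective processing; the exact
    # bands used in the thesis are an assumption here.
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_features(eeg, fs=256):
        """Mean spectral power per band for each electrode.

        eeg: (n_electrodes, n_samples) array.  Returns a flat feature
        vector of length n_electrodes * len(BANDS).
        """
        freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
        power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
        feats = []
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(power[:, mask].mean(axis=1))
        return np.concatenate(feats)

    # Four electrodes, two seconds of synthetic "noisy" EEG.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 512))
    fv = band_power_features(x)
    print(fv.shape)  # (12,): 4 electrodes x 3 bands
    ```

    Keeping the feature vector this small is what makes 4- or 6-electrode configurations feasible for a classifier.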

    Low-cost methodologies and devices applied to measure, model and self-regulate emotions for Human-Computer Interaction

    In this thesis, the different methodologies for analyzing the UX experience are explored from a user-centered perspective. These classical and well-founded methodologies only allow the extraction of cognitive data, that is, the data that the user is capable of consciously communicating. The objective of this thesis is to propose a methodology that uses the extraction of biometric data to complement the aforementioned cognitive information with emotional (and formal) data. This thesis is not only theoretical: together with the proposed model (and its evolution), it presents the different tests, validations, and investigations in which the model has been applied, often successfully in conjunction with research groups from other areas.

    Machine Learning Methods for functional Near Infrared Spectroscopy

    Identification of user state is of interest in a wide range of disciplines that fall under the umbrella of human-machine interaction. Functional Near-Infrared Spectroscopy (fNIRS) is a relatively new device that enables inference of brain activity by non-invasively pulsing infrared light into the brain. The fNIRS device is particularly useful because it has better spatial resolution than the electroencephalograph (EEG), the device most commonly used in human-computer interaction studies under ecologically valid settings. But this key advantage of the fNIRS device is underutilized in the current fNIRS literature. We propose machine learning methods that capture this spatial nature of human brain activity using a novel preprocessing method based on 'Region of Interest' feature extraction. Experiments show that this method outperforms the F1 score achieved previously in classifying the 'low' vs. 'high' valence state of a user. We further our analysis by applying a Convolutional Neural Network (CNN) to the fNIRS data, preserving the spatial structure of the data and treating it like a series of images to be classified. Going further, we use a combination of CNN and Long Short-Term Memory (LSTM) layers to capture the spatial and temporal behavior of the fNIRS data, treating it like a video classification problem. We show that this method improves upon the accuracy previously obtained by valence classification methods using EEG or fNIRS devices. Finally, we apply the above model to classifying combined task load and performance in an across-subject, across-task scenario of a Human Machine Teaming environment in order to achieve optimal productivity of the system.
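    The 'Region of Interest' preprocessing idea above can be sketched as grouping fNIRS channels by anatomical region and averaging within each group. This is a minimal illustration: the channel-to-region mapping below is hypothetical, since the real grouping depends on the optode montage used in the study.

    ```python
    import numpy as np

    # Hypothetical mapping from fNIRS channel index to a named region of
    # interest; the real layout depends on the optode montage.
    ROI_MAP = {
        "prefrontal": [0, 1, 2],
        "motor": [3, 4],
        "parietal": [5, 6, 7],
    }

    def roi_features(fnirs, roi_map=ROI_MAP):
        """Average each time sample over the channels in every ROI.

        fnirs: (n_channels, n_samples) hemodynamic signal.  Returns an
        (n_rois, n_samples) array that preserves the spatial grouping a
        downstream CNN/LSTM can exploit.
        """
        return np.stack([fnirs[chans].mean(axis=0)
                         for chans in roi_map.values()])

    x = np.arange(8 * 10, dtype=float).reshape(8, 10)  # 8 channels, 10 samples
    feats = roi_features(x)
    print(feats.shape)  # (3, 10): one averaged time series per ROI
    ```

    Stacking these ROI time series as rows of a 2-D array is what lets a CNN treat each window like an image, and a CNN-LSTM treat a sequence of windows like a video.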