
    Embracing and exploiting annotator emotional subjectivity: an affective rater ensemble model

    Automated recognition of continuous emotions in audio-visual data is a growing area of study that aids in understanding human-machine interaction. Training such systems presupposes human annotation of the data. The annotation process, however, is laborious and expensive, given that several human ratings are required for every data sample to compensate for the subjectivity of emotion perception. As a consequence, labelled data for emotion recognition are rare, and the existing corpora are limited when compared to other state-of-the-art deep learning datasets. In this study, we explore different ways in which existing emotion annotations can be utilised more effectively to exploit available labelled information to the fullest. To reach this objective, we exploit individual raters' opinions by employing an ensemble of rater-specific models, one for each annotator, thereby reducing the loss of information that is a byproduct of annotation aggregation; we find that individual models can indeed infer subjective opinions. Furthermore, we explore the fusion of such ensemble predictions using different fusion techniques. Our ensemble model with only two annotators outperforms the regular arousal baseline on the test set of the MuSe-CaR corpus. While no considerable improvements on valence could be obtained, using all annotators increases the prediction performance of arousal by up to .07 Concordance Correlation Coefficient (CCC) absolute improvement on test, trained solely on rater-specific models and fused by an attention-enhanced Long Short-Term Memory Recurrent Neural Network.
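
    A minimal sketch of the Concordance Correlation Coefficient (CCC) reported above, assuming NumPy arrays of gold and predicted continuous emotion labels; the function name ccc and the example values are illustrative, not taken from the paper's code.

        import numpy as np

        def ccc(y_true, y_pred):
            """Concordance Correlation Coefficient for continuous labels."""
            mean_t, mean_p = y_true.mean(), y_pred.mean()
            var_t, var_p = y_true.var(), y_pred.var()
            cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
            # CCC penalises both low correlation and systematic bias.
            return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

        # Hypothetical gold vs. predicted arousal traces.
        y_true = np.array([0.1, 0.4, 0.7, 0.9])
        y_pred = np.array([0.2, 0.35, 0.75, 0.8])
        print(f"CCC: {ccc(y_true, y_pred):.3f}")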

    An EEG-Based Multi-Modal Emotion Database With Both Posed And Authentic Facial Actions For Emotion Analysis

    Emotion is an experience associated with a particular pattern of physiological activity along with different physiological, behavioral, and cognitive changes. One behavioral change is facial expression, which has been studied extensively over the past few decades. Facial behavior varies with a person's emotion according to differences in culture, personality, age, context, and environment. In recent years, physiological activities have been used to study emotional responses. A typical signal is the electroencephalogram (EEG), which measures brain activity. Most existing EEG-based emotion analysis has overlooked the role of facial expression changes. There is little research on the relationship between facial behavior and brain signals due to the lack of datasets measuring both EEG and facial action signals simultaneously. To address this problem, we propose to develop a new database by collecting facial expressions, action units, and EEGs simultaneously. We recorded the EEGs and face videos of both posed facial actions and spontaneous expressions from 29 participants of different ages, genders, and ethnic backgrounds. Differing from existing approaches, we designed a protocol to capture the EEG signals by explicitly evoking participants' individual action units. We also investigated the relation between the EEG signals and facial action units. As a baseline, the database has been evaluated through experiments on both posed and spontaneous emotion recognition with images alone, EEG alone, and EEG fused with images, respectively. The database will be released to the research community to advance the state of the art in automatic emotion recognition.
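
    A hedged sketch of the kind of feature-level EEG-image fusion baseline described above, assuming pre-extracted per-trial features; the random stand-in data, feature dimensions, and the SVM choice are illustrative assumptions, not the database authors' exact pipeline.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials = 100
        # Hypothetical stand-ins for pre-extracted per-trial features.
        eeg_feats = rng.standard_normal((n_trials, 32))   # e.g., band powers per channel
        face_feats = rng.standard_normal((n_trials, 17))  # e.g., action-unit intensities
        labels = rng.integers(0, 2, n_trials)             # binary emotion labels

        # Feature-level fusion: concatenate modalities before classification.
        fused = np.concatenate([eeg_feats, face_feats], axis=1)
        scores = cross_val_score(SVC(kernel="rbf"), fused, labels, cv=5)
        print(f"Fused EEG+face accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")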

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain-computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic brain activity indicators) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on the screen, a spelling device, or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to "think outside the lab". The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. Clinical and everyday uses are described with the aim of inviting readers to imagine potential further developments.

    Optimal accuracy performance in music-based EEG signal using Matthew correlation coefficient advanced (MCCA)

    Music and humans are closely connected, as music can reduce stress. The state of stress can be measured from the electroencephalogram (EEG) signal, from which arousal and valence index values are derived. In previous studies, the Matthew Correlation Coefficient (MCC) achieved a performance accuracy of 85±5%. Arousal indicates the strength of an emotion, and valence indicates its positive or negative degree. Arousal and valence values can be used to measure the accuracy performance. This research focuses on enhancing the MCC parameter equation based on arousal and valence values to achieve the maximum accuracy percentage in frequency-domain and time-frequency-domain analysis. Twenty-one features were used to improve the significance of the feature extraction results and of the investigated arousal and valence values. Feature extraction involved the alpha, beta, delta, and theta frequency bands in measuring the arousal and valence index formula. Based on the results, the arousal and valence index is accepted as a parameter in the MCC equations. However, in certain cases, improvement of the MCC parameter is required to achieve a high accuracy percentage, and this research proposes the Matthew Correlation Coefficient Advanced (MCCA) to improve the performance using a six-sigma method. In conclusion, the MCCA equation is established to enhance the existing MCC parameter, improving the accuracy percentage up to 99.9% for the arousal and valence index.
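
    For reference, a minimal sketch of the standard correlation coefficient that the proposed MCCA extends; the confusion-matrix formulation below is the textbook binary Matthews Correlation Coefficient, while the MCCA modification itself is specific to the paper and not reproduced here.

        import math

        def mcc(tp, tn, fp, fn):
            """Standard Matthews Correlation Coefficient from binary confusion counts."""
            num = tp * tn - fp * fn
            den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return num / den if den else 0.0  # define MCC = 0 for degenerate cases

        # Example: hypothetical counts for a binary arousal classifier.
        print(f"MCC: {mcc(40, 45, 5, 10):.3f}")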

    Toward Emotion Recognition From Physiological Signals in the Wild: Approaching the Methodological Issues in Real-Life Data Collection

    Emotion, mood, and stress recognition (EMSR) has been studied in laboratory settings for decades. In particular, physiological signals are widely used to detect and classify affective states in lab conditions. However, physiological reactions to emotional stimuli have been found to differ between laboratory and natural settings. Thanks to recent technological progress (e.g., in wearables), the creation of EMSR systems for a large number of consumers during their everyday activities is increasingly possible. Therefore, datasets created in the wild are needed to ensure the validity and exploitability of EMSR models for real-life applications. In this paper, we initially present common techniques used in laboratory settings to induce emotions for the purpose of physiological dataset creation. Next, the advantages and challenges of data collection in the wild are discussed. To assess the applicability of existing datasets to real-life applications, we propose a set of categories to guide and compare at a glance the different methodologies used by researchers to collect such data. For this purpose, we also introduce a visual tool called Graphical Assessment of Real-life Application-Focused Emotional Dataset (GARAFED). In the last part of the paper, we apply the proposed tool to compare existing physiological datasets for EMSR in the wild and to show possible improvements and future directions of research. We wish for this paper and GARAFED to be used as guidelines for researchers and developers who aim to collect affect-related data for real-life EMSR-based applications.

    Intelligent Biosignal Analysis Methods

    This book describes recent efforts in improving intelligent systems for automatic biosignal analysis. It focuses on machine learning and deep learning methods used for the classification of different organism states and disorders based on biomedical signals such as EEG, ECG, and HRV, among others.

    An enhanced stress indices in signal processing based on advanced Matthew correlation coefficient (MCCA) and multimodal function using EEG signal

    Stress is a response to various environmental, psychological, and social factors, resulting in strain and pressure on individuals. Categorizing stress levels is common practice, often using low, medium, and high categories. However, the limitation to only three stress levels is a significant drawback of the existing approach. This study addresses this limitation and proposes an improved method for EEG feature extraction and stress level categorization. The main contribution lies in the enhanced stress level categorization, which expands from three to six levels using a newly established fractional scale based on the scale of quantities influenced by MCCA and multimodal equation performance. The concept of standard deviation (STD) helps in categorizing stress levels by dividing the scale of quantities, leading to an improvement in the process. The standard Matthew Correlation Coefficient (MCC) equation shows limited performance with respect to accuracy values, and multimodal functions are rarely discussed in terms of their parameters; the proposed Advanced Matthew Correlation Coefficient (MCCA) and multimodal function therefore provide the advantage of significantly enhancing accuracy, applying a six-sigma framework to stress level categorization. Furthermore, the study applies signal pre-processing techniques to filter and segregate the EEG signal into Delta, Theta, Alpha, and Beta frequency bands. Subsequently, feature extraction is conducted, resulting in twenty-one statistical and non-statistical features, which are employed in both the MCCA and multimodal function analysis. The Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbour (k-NN) classifiers are used for stress level validation. After experiments and performance evaluations, RF demonstrates the highest average accuracy of 85±10% under 10-fold and K-fold cross-validation, outperforming SVM and k-NN. In conclusion, this study presents an improved approach to stress level categorization and EEG feature extraction. The proposed MCCA and six-sigma framework contribute to achieving higher accuracy, surpassing the limitations of the existing three-level categorization, with a potential accuracy exceeding 95%.
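
    A hedged sketch of the band segregation and classification steps described above, assuming single-channel epoched EEG data, SciPy, and scikit-learn; the sampling rate, band edges, and random stand-in data are common conventions and placeholders, not the paper's exact configuration.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        FS = 256  # assumed sampling rate in Hz
        BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def band_power(signal, low, high, fs=FS):
            """Band-pass filter one EEG epoch and return its mean power."""
            b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
            return np.mean(filtfilt(b, a, signal) ** 2)

        def extract_features(epochs):
            """One power feature per band per epoch; epochs: (n_epochs, n_samples)."""
            return np.array([[band_power(e, lo, hi) for lo, hi in BANDS.values()]
                             for e in epochs])

        # Hypothetical 4-second epochs and six-level stress labels, as in the paper.
        X = extract_features(np.random.randn(120, FS * 4))
        y = np.random.randint(0, 6, size=120)
        rf = RandomForestClassifier(n_estimators=100)
        print(cross_val_score(rf, X, y, cv=10).mean())  # 10-fold validation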

    Affective and Implicit Tagging using Facial Expressions and Electroencephalography.

    Recent years have seen an explosion of user-generated, untagged multimedia data, generating a need for efficient search and retrieval of this data. The predominant method for content-based tagging is manual annotation. Consequently, automatic tagging is currently the subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the multimedia content are analysed in order to generate descriptive tags. We approach this problem through the modalities of facial expressions and EEG signals. We investigate tag validation and affective tagging using EEG signals. The former relies on the detection of event-related potentials triggered in response to the presentation of invalid tags alongside multimedia material. We demonstrate significant differences in users' EEG responses for valid versus invalid tags, and present results towards single-trial classification. For affective tagging, we propose methodologies to map EEG signals onto the valence-arousal space and perform both binary classification and regression into this space. We apply these methods in a real-time affective recommendation system. We also investigate the analysis of facial expressions for implicit tagging, relying on a dynamic texture representation using non-rigid registration, which we first evaluate on the problem of facial action unit recognition. We present results on well-known datasets (with both posed and spontaneous expressions) comparable to the state of the art in the field. Finally, we present a multi-modal approach that fuses both modalities for affective tagging. We perform classification in the valence-arousal space based on these modalities and present results for both feature-level and decision-level fusion, demonstrating improvement when using both modalities and suggesting that they contain complementary information.
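
    A minimal sketch of decision-level fusion in the valence-arousal space as discussed above, assuming each modality already produces its own continuous predictions; the weighted-average rule and the weight value are simple illustrative choices, not the thesis' exact fusion scheme.

        import numpy as np

        def decision_level_fusion(pred_eeg, pred_face, w_eeg=0.5):
            """Weighted average of per-modality (valence, arousal) predictions.

            pred_eeg, pred_face: arrays of shape (n_samples, 2).
            """
            return w_eeg * pred_eeg + (1 - w_eeg) * pred_face

        # Hypothetical per-modality regressor outputs in [-1, 1].
        pred_eeg = np.array([[0.2, 0.7], [-0.4, 0.1]])
        pred_face = np.array([[0.3, 0.5], [-0.2, 0.0]])
        print(decision_level_fusion(pred_eeg, pred_face, w_eeg=0.6))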