
    Face Emotion Recognition Based on Machine Learning: A Review

    Computers can now detect, understand, and evaluate emotions thanks to recent developments in machine learning and information fusion. Researchers across various sectors are increasingly intrigued by emotion identification, utilizing facial expressions, words, body language, and posture as means of discerning an individual's emotions. Nevertheless, the effectiveness of the first three methods may be limited, as individuals can consciously or unconsciously suppress their true feelings. This article explores various feature extraction techniques, encompassing the development of machine learning classifiers such as k-nearest neighbour, naive Bayesian, support vector machine, and random forest, in accordance with the established standard for emotion recognition. The paper has three primary objectives: firstly, to offer a comprehensive overview of affective computing by outlining essential theoretical concepts; secondly, to describe the current state of the art in emotion recognition in detail; and thirdly, to highlight important findings and conclusions from the literature, with an emphasis on key obstacles and possible future paths, especially in the development of state-of-the-art machine learning algorithms for emotion recognition.
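The four classifier families the review surveys can be compared on a common footing with scikit-learn. The sketch below is purely illustrative: the data is synthetic (a stand-in for extracted facial-expression feature vectors), and the hyperparameters are default-like assumptions, not values from the paper.

```python
# Hedged sketch: comparing k-NN, naive Bayes, SVM, and random forest on
# synthetic 4-class "facial feature" data (placeholder for real features).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Synthetic feature vectors for 4 emotion classes (illustrative only).
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {clf.score(X_te, y_te):.2f}")
```

In practice the relative ranking of these classifiers depends heavily on the feature extraction step that precedes them, which is the review's central theme.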

    Autonomous Assessment of Videogame Difficulty Using Physiological Signals

    Given the well-explored relation between challenge and involvement in a task (e.g., as described in Csikszentmihalyi’s theory of flow), it could be argued that the presence of challenge in videogames is a core element that shapes player experience and should, therefore, be matched to the player’s skills and attitude towards the game. However, handling videogame difficulty is a challenging problem in game design, as too easy a task can lead to boredom and too hard a task can lead to frustration. Thus, by exploring the relationship between difficulty and emotion, the current work proposes an artificial intelligence model that autonomously predicts difficulty according to the set of emotions elicited in the player. To test the validity of this approach, we developed a simple puzzle-based Virtual Reality (VR) videogame, based on the Trail Making Test (TMT), whose objective was to elicit different emotions according to three levels of difficulty. A study was carried out in which physiological responses as well as player self-reports were collected during gameplay. Statistical analysis of the self-reports showed that different levels of experience with either VR or videogames did not have a measurable impact on how players performed during the three levels. Additionally, the self-assessed emotional ratings indicated that playing the game at different difficulty levels gave rise to different emotional states. Next, classification using a Support Vector Machine (SVM) was performed to verify whether it was possible to detect difficulty from the physiological responses associated with the elicited emotions. Results report an overall F1-score of 68% in detecting the three levels of difficulty, which verifies the effectiveness of the adopted methodology and encourages further research with a larger dataset.
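The classification step described, an SVM detecting three difficulty levels from physiological responses and scored with an F1 measure, can be sketched as follows. This is not the authors' pipeline: the feature vectors here are synthetic placeholders (e.g. standing in for per-trial heart-rate and electrodermal statistics), and the cross-validation setup is an assumption.

```python
# Illustrative sketch: SVM classifying three difficulty levels from
# hypothetical physiological feature vectors, evaluated with macro F1.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_per_level = 60
# Placeholder features: each difficulty level shifts the feature means.
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_level, 8))
               for k in range(3)])
y = np.repeat([0, 1, 2], n_per_level)          # easy / medium / hard

pred = cross_val_predict(SVC(kernel="rbf"), X, y, cv=5)
macro_f1 = f1_score(y, pred, average="macro")
print(f"macro F1: {macro_f1:.2f}")
```

With real physiological data the class overlap is far larger than in this toy setup, which is why the reported 68% F1 over three classes is a meaningful result rather than a low one.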

    ECG-based Human Emotion Recognition Using Generative Models

    Human emotion recognition (HER) is ever-evolving and has become an important research field. In autonomous driving, HER can be vital in developing autonomous vehicles. Introducing autonomous vehicles is expected to increase safety, having the potential to prevent accidents. Recognizing the passengers’ emotional reactions while driving can help machine learning algorithms learn human behavior in traffic. In this thesis, the focus has been on HER using electrocardiogram (ECG) data. The effect of Autoencoders and Sparse Autoencoders in HER using ECG data has been explored and compared to the state-of-the-art. Additionally, the extent of ECG data as a single modality for HER has been discussed. Three pipelines were constructed to explore how Autoencoders and Sparse Autoencoders affect HER. All pipelines were denoised and resampled using the Pan-Tompkins algorithm. Additionally, the pipelines were all trained, validated, and tested using the Support Vector Classifier (SVC). The first pipeline uses the Pan-Tompkins processed signals as input to the SVC. In the second pipeline, the input to the SVC is features extracted from the signals using an Autoencoder. The last pipeline uses the latent space of a Sparse Autoencoder as input to the SVC. The target emotions for the classification task were based on the two-dimensional emotion model of valence and arousal, resulting in four classes. The pipeline including an Autoencoder for feature extraction outperformed the pipeline without feature extraction, in addition to reducing the bias the models showed towards one class. Using a Sparse Autoencoder, the overall results were lower, but it was able to reduce the bias toward one class further. These results show that the Autoencoder has potential in ECG-based HER and could contribute to the field.
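The second pipeline, an autoencoder whose latent representation feeds an SVC, can be sketched minimally with scikit-learn by training an MLP to reconstruct its own input and reading out the hidden-layer activations. Everything here is an assumption for illustration: the data is synthetic, the latent size is arbitrary, and the thesis's actual autoencoders were presumably deeper networks.

```python
# Minimal sketch of an autoencoder -> SVC pipeline on synthetic stand-in
# "ECG feature" data with four valence/arousal classes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16))                 # stand-in ECG feature windows
y = rng.integers(0, 4, size=300)               # 4 valence/arousal quadrants
X[np.arange(300), y] += 3.0                    # inject class-dependent structure

# Autoencoder: an MLP trained to reconstruct its own input (X -> X).
ae = MLPRegressor(hidden_layer_sizes=(6,), activation="relu",
                  max_iter=2000, random_state=0).fit(X, X)
# Latent space = hidden-layer activations, computed from the learned weights.
latent = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

X_tr, X_te, y_tr, y_te = train_test_split(latent, y, random_state=0)
svc = SVC().fit(X_tr, y_tr)
acc = svc.score(X_te, y_te)
print(f"SVC accuracy on latent features: {acc:.2f}")
```

A sparse autoencoder, as in the third pipeline, would add a sparsity penalty on these hidden activations, typically trading some raw accuracy for a more disentangled code, consistent with the trade-off the abstract reports.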

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understand human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through the research of novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus has been on electrocardiogram (ECG) biometrics, a trait well-known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved performance in identification and identity verification in off-the-person scenarios, well-known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed and important topics were addressed such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions to multimodal emotion recognition in groups of people and activity/violence recognition in in-vehicle scenarios were proposed. Finally, we also proposed a novel way to learn template security within end-to-end models, dismissing additional separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...)
    Comment: Doctoral thesis presented and approved on the 21st of December 2022 to the University of Port

    CNN and LSTM-Based Emotion Charting Using Physiological Signals

    Novel trends in affective computing are based on reliable sources of physiological signals such as Electroencephalogram (EEG), Electrocardiogram (ECG), and Galvanic Skin Response (GSR). The use of these signals presents the challenge of improving performance over a broader set of emotion classes in a less constrained real-world environment. To overcome these challenges, we propose a computational framework of 2D Convolutional Neural Network (CNN) architecture for the arrangement of 14 channels of EEG, and a combination of Long Short-Term Memory (LSTM) and 1D-CNN architecture for ECG and GSR. Our approach is subject-independent and incorporates two publicly available datasets, DREAMER and AMIGOS, with low-cost, wearable sensors to extract physiological signals suitable for real-world environments. The results outperform state-of-the-art approaches for classification into four classes, namely High Valence—High Arousal, High Valence—Low Arousal, Low Valence—High Arousal, and Low Valence—Low Arousal. Emotion elicitation average accuracy of 98.73% is achieved with ECG right-channel modality, 76.65% with EEG modality, and 63.67% with GSR modality for AMIGOS. The overall highest accuracy of 99.0% for the AMIGOS dataset and 90.8% for the DREAMER dataset is achieved with multi-modal fusion. A strong correlation between spectral- and hidden-layer feature analysis and classification performance suggests the efficacy of the proposed method for significant feature extraction and higher emotion elicitation performance in a broader context of less constrained environments.
    Peer reviewed
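The multi-modal fusion step that yields the highest reported accuracies can be sketched as late fusion: per-modality classifiers produce class probabilities that are averaged into a single four-class decision. The sketch below is an assumption-laden illustration, with simple logistic-regression stand-ins for the paper's EEG 2D-CNN and ECG/GSR 1D-CNN+LSTM branches, and synthetic features of differing signal strength per modality.

```python
# Hedged late-fusion sketch: average class probabilities from three
# stand-in modality classifiers (EEG, ECG, GSR) for a 4-class decision.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 400, 4
y = rng.integers(0, n_classes, size=n)
# Synthetic per-modality features with different signal strengths.
modalities = {name: rng.normal(size=(n, 10)) for name in ("eeg", "ecg", "gsr")}
for strength, X in zip((1.0, 2.0, 0.5), modalities.values()):
    X[np.arange(n), y] += strength              # class-dependent shift

idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)
probas = []
for X in modalities.values():
    clf = LogisticRegression(max_iter=1000).fit(X[idx_tr], y[idx_tr])
    probas.append(clf.predict_proba(X[idx_te]))

fused = np.mean(probas, axis=0).argmax(axis=1)  # probability averaging
fused_acc = (fused == y[idx_te]).mean()
print(f"fused accuracy: {fused_acc:.2f}")
```

Averaging probabilities lets a strong modality (here ECG, mirroring its 98.73% single-modality result in the abstract) dominate while weaker modalities still contribute when the strong one is uncertain.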