Improving Emotion Recognition Systems by Exploiting the Spatial Information of EEG Sensors
Electroencephalography (EEG)-based emotion recognition is gaining increasing importance due to its potential applications in various scientific fields, ranging from psychophysiology to neuromarketing. A number of approaches that use machine learning (ML) to achieve high recognition performance have been proposed, relying on features engineered from brain activity dynamics. Since ML performance can be improved by a 2D feature representation that exploits the spatial relationships among the features, here we propose a novel input representation in which EEG features are re-arranged as an image reflecting the top view of the subject's scalp. This approach enables emotion recognition through image-based ML methods such as pre-trained deep neural networks or "trained-from-scratch" convolutional neural networks. We employ both techniques in our study to demonstrate the effectiveness of the proposed input representation, and we compare their recognition performance against state-of-the-art tabular data analysis approaches, which do not utilize the spatial relationships between the sensors. We test our approach on two publicly available benchmark datasets for EEG-based emotion recognition, DEAP and MAHNOB-HCI. Our results show that the "trained-from-scratch" convolutional neural network outperforms the best approaches in the literature, achieving 97.8% and 98.3% accuracy in valence and arousal classification on MAHNOB-HCI, and 91% and 90.4% on DEAP, respectively.
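The core idea of the scalp-image input representation can be sketched in a few lines: per-channel feature values are placed on a 2D grid whose cells correspond to sensor positions seen from above the head. The 3x3 montage and channel coordinates below are illustrative assumptions, not the authors' exact electrode layout.

```python
# Sketch: re-arrange per-channel EEG features into a 2D "top view of the
# scalp" grid, so image-based models (e.g. CNNs) can exploit sensor
# adjacency. The 3x3 montage is an illustrative assumption only.

SCALP_GRID = {           # channel name -> (row, col) on the scalp image
    "Fp1": (0, 0), "Fpz": (0, 1), "Fp2": (0, 2),
    "C3":  (1, 0), "Cz":  (1, 1), "C4":  (1, 2),
    "O1":  (2, 0), "Oz":  (2, 1), "O2":  (2, 2),
}

def features_to_image(channel_features, rows=3, cols=3, fill=0.0):
    """Map {channel: feature_value} onto a rows x cols scalp image.

    Grid cells with no corresponding sensor keep the fill value.
    """
    image = [[fill] * cols for _ in range(rows)]
    for channel, value in channel_features.items():
        r, c = SCALP_GRID[channel]
        image[r][c] = value
    return image

feats = {"Fp1": 0.2, "Cz": 0.9, "O2": 0.5}
img = features_to_image(feats)
```

The resulting 2D array preserves which sensors are neighbours on the scalp, which is exactly the spatial information a convolution kernel can exploit and a flat feature vector discards.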
Cross-Subject Emotion Recognition with Sparsely-Labeled Peripheral Physiological Data Using SHAP-Explained Tree Ensembles
Many challenges remain in emotion recognition from physiological data despite the substantial progress made recently. In this paper, we address two major challenges. First, to deal with sparsely-labeled physiological data, we decomposed the raw physiological signals using signal spectrum analysis and extracted both complexity and energy features, a procedure that reduces noise and improves feature-extraction effectiveness. Second, to improve the explainability of machine learning models for emotion recognition with physiological data, we proposed the Light Gradient Boosting Machine (LightGBM) and SHapley Additive exPlanations (SHAP) for emotion prediction and model explanation, respectively. The LightGBM model outperformed the eXtreme Gradient Boosting (XGBoost) model on the public Database for Emotion Analysis using Physiological signals (DEAP), with f1-scores of 0.814, 0.823, and 0.860 for binary classification of valence, arousal, and liking, respectively, under cross-subject validation using eight peripheral physiological signals. Furthermore, SHAP identified the most important features in emotion recognition and revealed the relationships between the predictor variables and the response variables in terms of their main and interaction effects. The proposed model therefore not only performed well on peripheral physiological data but also gave insight into the underlying mechanisms of emotion recognition.
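The additive-attribution idea behind SHAP, crediting each feature with its average marginal contribution over all feature subsets, can be illustrated with a brute-force exact Shapley computation on a toy model. The toy linear "model" and the baseline convention for missing features below are illustrative assumptions; the shap library computes the same quantities efficiently for tree ensembles such as LightGBM.

```python
# Sketch of the Shapley-value attribution underlying SHAP: a feature's
# credit is its weighted average marginal contribution over all subsets
# of the other features. The toy model f and the "missing feature takes
# the baseline value" convention are illustrative assumptions.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x, relative to a baseline input."""
    n = len(x)
    idx = list(range(n))
    phi = [0.0] * n

    def value(subset):
        # Evaluate f with features in `subset` taken from x, rest from baseline.
        z = [x[j] if j in subset else baseline[j] for j in idx]
        return f(z)

    for i in idx:
        others = [j for j in idx if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(subset) | {i}) - value(set(subset)))
    return phi

# Toy "model": a linear score over three physiological features.
f = lambda z: 2.0 * z[0] + 1.0 * z[1] - 0.5 * z[2]
phi = shapley_values(f, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0])
# For a linear model, phi[i] equals coef_i * (x_i - baseline_i),
# and the attributions sum to f(x) - f(baseline).
```

This exhaustive version is exponential in the number of features; its purpose here is only to make the attribution definition concrete.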
Fine-Grained Emotion Recognition Using Brain-Heart Interplay Measurements and eXplainable Convolutional Neural Networks
Emotion recognition from electro-physiological signals is an important research topic in multiple scientific domains. While a multimodal input may provide additional information that increases emotion recognition performance, an optimal processing pipeline for such a vectorial input is yet undefined. Moreover, algorithm performance often compromises between the ability to generalize over an emotional dimension and the explainability associated with its recognition accuracy. This study proposes a novel explainable artificial intelligence architecture for 9-level valence recognition from electroencephalographic (EEG) and electrocardiographic (ECG) signals. Synchronous EEG-ECG information is combined to derive vectorial brain-heart interplay features, which are rearranged into a sparse matrix (image) and then classified through an explainable convolutional neural network. The proposed architecture is tested on the publicly available MAHNOB dataset, also in comparison with a vectorial EEG input. Results, also expressed in terms of confusion matrices, outperform the current state of the art, especially in recognition accuracy. In conclusion, we demonstrate the effectiveness of the proposed approach, which embeds multimodal brain-heart dynamics in an explainable fashion.
Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends
Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state-of-the-art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.
Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey
Digital twin (DT) refers to a promising technique to digitally and accurately represent actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For adoption in human-centric systems, a novel concept, called the human digital twin (HDT), has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolutions. These capabilities prompt the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects, and it has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision-making layer. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.
Personalized multi-task attention for multimodal mental health detection and explanation
The unprecedented spread of smartphone usage and the variety of onboard sensors have garnered increasing interest in automatic mental health detection. However, there are two major barriers to reliable mental health detection applications that can be adopted in real life: (a) the outputs of complex machine learning models are not explainable, which reduces user trust and thus hinders application in real-life scenarios; (b) the sensor signal distribution discrepancy across individuals is a major barrier to accurate detection, since each individual has their own characteristics. We propose an explainable mental health detection model. Spatial and temporal features of multiple sensory sequences are extracted and fused with different weights generated by an attention mechanism, so that the discrepancy in contribution to the classifier across different modalities can be considered in the model. Through a series of experiments on real-life datasets, results show the effectiveness of our model compared to existing approaches. This research is supported by the National Natural Science Foundation of China (No. 62077027), the Ministry of Science and Technology of the People's Republic of China (No. 2018YFC2002500), the Jilin Province Development and Reform Commission, China (No. 2019C053-1), the Education Department of Jilin Province, China (No. JJKH20200993K), the Department of Science and Technology of Jilin Province, China (No. 20200801002GH), and the European Union's Horizon 2020 FET Proactive project "WeNet-The Internet of us" (No. 823783).
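The attention-weighted fusion step described above can be sketched in a minimal form: each modality's feature vector receives a score, a softmax turns the scores into weights, and the fused representation is the weighted sum. The fixed scores and modality names below are illustrative assumptions; in the paper the scores come from a learned attention mechanism.

```python
# Minimal sketch of attention-weighted fusion across sensor modalities:
# softmax over per-modality scores yields weights that scale each
# modality's feature vector before summation. The scores here are fixed
# illustrative numbers standing in for a learned attention mechanism.
from math import exp

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(modality_features, scores):
    """Fuse equal-length feature vectors, one per modality, by attention weights."""
    weights = softmax(scores)
    dim = len(modality_features[0])
    fused = [0.0] * dim
    for w, feats in zip(weights, modality_features):
        for d in range(dim):
            fused[d] += w * feats[d]
    return weights, fused

# e.g. accelerometer, GPS, and screen-usage features for one user
weights, fused = fuse([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                      scores=[2.0, 1.0, 1.0])
```

Because the weights sum to one, a modality with a higher attention score contributes proportionally more to the fused representation, which is how per-modality contribution discrepancies are absorbed.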
Approaches, applications, and challenges in physiological emotion recognition — a tutorial overview
An automatic emotion recognition system can serve as a fundamental framework for various applications in daily life, from monitoring emotional well-being to improving the quality of life through better emotion regulation. Understanding the process of emotion manifestation becomes crucial for building emotion recognition systems. An emotional experience results in changes not only in interpersonal behavior but also in physiological responses. Physiological signals are one of the most reliable means for recognizing emotions, since individuals cannot consciously manipulate them for a long duration. These signals can be captured by medical-grade wearable devices, as well as commercial smart watches and smart bands. With the shift in research direction from the laboratory to unrestricted daily life, commercial devices have been employed ubiquitously. However, this shift has introduced several challenges, such as low data quality, dependency on subjective self-reports, unlimited movement-related changes, and artifacts in physiological signals. This tutorial provides an overview of practical aspects of emotion recognition, such as experiment design, properties of different physiological modalities, existing datasets, suitable machine learning algorithms for physiological data, and several applications. It aims to provide the necessary psychological and physiological backgrounds through various emotion theories and the physiological manifestation of emotions, thereby laying a foundation for emotion recognition. Finally, the tutorial discusses open research directions and possible solutions.
First impressions: A survey on vision-based apparent personality trait analysis
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Personality analysis has been widely studied in psychology, neuropsychology, and signal processing fields, among others. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. However, there has recently been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field.