
    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychology are reflected in their behaviour and physiology, so recognising such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherent game-related challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without input devices. However, the affective game industry is still in its infancy and has yet to catch up with the life-like level of adaptation provided by graphics and animation.

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters that present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant to tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance, and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.

    Feature Space Augmentation: Improving Prediction Accuracy of Classical Problems in Cognitive Science and Computer Vision

    The prediction accuracy in many classical problems across multiple domains has risen since computational tools such as multi-layer neural nets and complex machine learning algorithms became widely accessible to the research community. In this research, we take a step back and examine the feature space in two problems from very different domains, and show that novel augmentation of the feature space yields higher performance.

    Emotion Recognition in Adults from a Control Group: the objective is to quantify the emotional state of an individual at any time using data collected by wearable sensors. We define emotional state as a mixture of amusement, anger, disgust, fear, sadness, anxiety and neutral, with their respective levels at any time. The generated model predicts an individual's dominant state and generates an emotional spectrum: a 1×7 vector indicating the level of each emotional state. We present an iterative learning framework that alters the feature space uniquely for an individual's emotion perception and predicts the emotional state using that individual-specific feature space.

    Hybrid Feature Space for Image Classification: the objective is to improve the accuracy of existing image recognition by leveraging text features from the images. As humans, we perceive objects using colors, dimensions, geometry and any textual information we can gather; current image recognition algorithms rely exclusively on the first three and do not use the textual information. This study develops and tests an approach that trains a classifier on a hybrid text-based feature space with accuracy comparable to state-of-the-art CNNs while being significantly less expensive computationally. Moreover, when combined with CNNs, the approach yields a statistically significant boost in accuracy. Both models are validated using cross-validation and holdout validation, and are evaluated against the state of the art.
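
    As a rough sketch of the hybrid text-plus-visual feature space described above, the snippet below concatenates TF-IDF features from (assumed) OCR'd image text with a cheap colour-histogram descriptor and trains a linear classifier on toy data. Every name and design choice here is an illustrative assumption, not the paper's actual pipeline.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins: OCR'd text per image and random RGB images (assumptions).
texts = ["stop sign ahead", "speed limit 50", "stop now", "limit 30"]
images = [rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8) for _ in texts]
labels = [0, 1, 0, 1]

def colour_histogram(image, bins=16):
    # Flattened per-channel histogram: a deliberately cheap visual descriptor.
    return np.concatenate(
        [np.histogram(image[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    ).astype(float)

vectorizer = TfidfVectorizer().fit(texts)
text_part = vectorizer.transform(texts)                       # sparse TF-IDF block
visual_part = csr_matrix(np.stack([colour_histogram(im) for im in images]))
X = hstack([text_part, visual_part])                          # hybrid feature space

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```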

    User-independent Emotion Recognition with Residual Signal-Image Network

    User-independent emotion recognition with large-scale physiological signals is a tough problem. Many advanced methods exist, but they are evaluated on relatively small datasets with dozens of subjects. Here, we propose Res-SIN, a novel end-to-end framework that uses electrodermal activity (EDA) signal images to classify human emotion. We first apply convex optimization-based EDA (cvxEDA) to decompose the signals and mine the static and dynamic emotion changes. Then we transform the decomposed signals into images so that they can be effectively processed by CNN frameworks. Res-SIN combines individual emotion features and external emotion benchmarks to accelerate convergence. We evaluate our approach on the PMEmo dataset, currently the largest emotional dataset containing music and EDA signals. To the best of the authors' knowledge, our method is the first attempt to classify large-scale subject-independent emotion with 7962 EDA signals from 457 subjects. Experimental results demonstrate the reliability of our model, and the binary classification accuracies of 73.65% and 73.43% on the arousal and valence dimensions can serve as a baseline.
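
    A minimal sketch of the signal-to-image idea, with a moving-average baseline standing in for the cvxEDA decomposition (the real algorithm solves a convex optimization problem; this stand-in only mimics the tonic/phasic split), and a simple fold of each 1-D component into a square image a CNN could consume. The exact transform in the paper may differ.

```python
import numpy as np

def tonic_phasic_split(eda, win=256):
    # Stand-in for cvxEDA: a moving-average baseline approximates the
    # tonic (slow) component; the residual approximates the phasic part.
    kernel = np.ones(win) / win
    tonic = np.convolve(eda, kernel, mode="same")
    return tonic, eda - tonic

def signal_to_image(signal, side=64):
    # Min-max normalise, then fold the 1-D signal into a square
    # "signal image" (one possible mapping; an assumption here).
    sig = signal[: side * side]
    sig = (sig - sig.min()) / (np.ptp(sig) + 1e-8)
    return sig.reshape(side, side).astype(np.float32)

rng = np.random.default_rng(1)
eda = np.cumsum(rng.normal(0, 0.01, 8192)) + 5.0    # toy EDA trace
tonic, phasic = tonic_phasic_split(eda)
img = np.stack([signal_to_image(tonic), signal_to_image(phasic)])
print(img.shape)   # (2, 64, 64): a 2-channel input for a residual CNN
```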

    CorrFeat: Correlation-based feature extraction algorithm using skin conductance and pupil diameter for emotion recognition

    To recognize emotions using less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features for participants watching the same video clip. To boost performance given limited data, we implement a learning system without a deep architecture to classify arousal and valence. Our method outperforms not only state-of-the-art approaches but also widely used traditional and deep learning methods.
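
    One simple reading of correlation-based feature extraction is the Pearson correlation between PD and SC over sliding windows; the sketch below illustrates that reading on synthetic signals and is not the CorrFeat algorithm itself. Window length, hop size and sampling rate are all assumptions.

```python
import numpy as np

def windowed_correlation_features(pd_signal, sc_signal, win=128, hop=64):
    # Pearson correlation between the two signals in each sliding window,
    # yielding one feature per window.
    feats = []
    for start in range(0, len(pd_signal) - win + 1, hop):
        a = pd_signal[start : start + win]
        b = sc_signal[start : start + win]
        feats.append(np.corrcoef(a, b)[0, 1])
    return np.array(feats)

rng = np.random.default_rng(2)
t = np.linspace(0, 60, 3840)                  # 60 s at an assumed 64 Hz
pd_sig = np.sin(0.5 * t) + rng.normal(0, 0.1, t.size)
sc_sig = np.sin(0.5 * t + 0.3) + rng.normal(0, 0.1, t.size)
print(windowed_correlation_features(pd_sig, sc_sig)[:5])
```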

    Physiological signal-based emotion recognition from wearable devices

    Interest in computers recognizing human emotions has been increasing recently. Many studies have recognized emotions from physical signals, such as facial expressions, or from written text with good results. However, recognizing emotions from physiological signals such as heart rate, captured by wearable devices without physical signals, has proven challenging, although some studies have reported good or at least promising results. The challenge in emotion recognition is to understand how the human body actually reacts to different emotional triggers and to find common factors among people. The aim of this study is to find out whether it is possible to accurately recognize human emotions and stress from physiological signals using supervised machine learning, and to consider which types of biosignals are most informative for making such predictions. The performance of Support Vector Machine and Random Forest classifiers is experimentally evaluated on the task of separating stress and no-stress segments from three different biosignals: ECG, PPG and EDA. The challenges with these biosignals, from acquisition to pre-processing, are addressed, and their connection to emotional experience is discussed. In addition, the challenges and problems of the experimental setups used in previous studies are addressed, especially the usability problems of the dataset. The models implemented in this thesis were not able to accurately classify emotions from the dataset used; they did not perform remarkably better than randomly choosing labels. The PPG signal, however, performed slightly better than ECG or EDA for stress detection.
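
    A minimal sketch of the comparison described above, evaluating an SVM and a Random Forest with cross-validation. The feature matrix is random, standing in for windowed statistics (mean, standard deviation, slope, ...) computed from ECG, PPG and EDA, so the scores land near chance, much as the thesis reports for emotion classification.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy stand-ins for windowed biosignal features and stress/no-stress labels.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("SVM", svm), ("Random Forest", rf)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```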

    Supervised learning techniques for stress detection in car drivers

    In this paper we propose the application of supervised learning techniques to recognize stress situations in drivers by analyzing their Skin Potential Response (SPR) and Electrocardiogram (ECG). A sensing device is used to acquire the SPR from both hands of the drivers and the ECG from their chest. We also consider a motion artifact removal algorithm that generates a single cleaned SPR signal from the two SPR signals, which can be corrupted by artifacts due to vibrations or movements of the hands on the wheel. From both the cleaned SPR and the ECG signals we compute statistical features that are used as input to six machine learning algorithms for classifying stress and non-stress episodes. The SPR and ECG signals are also used as input to deep learning algorithms, allowing us to compare the performance of the different classifiers. The experiments were carried out at a firm specialized in developing professional car driving simulators. In particular, a dynamic driving simulator was used, with subjects driving along a straight road affected by unanticipated stress-evoking events located at different positions. We obtain an accuracy of 88.13% in stress recognition using a Long Short-Term Memory (LSTM) network.
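
    A minimal PyTorch sketch of an LSTM classifier over windowed SPR/ECG feature sequences, in the spirit of the model above; the layer sizes, feature count and window length are all assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class StressLSTM(nn.Module):
    # Minimal LSTM classifier: consume a window of per-timestep features
    # and emit stress / non-stress logits from the last hidden state.
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])         # logits: (batch, 2)

model = StressLSTM()
dummy = torch.randn(4, 100, 8)            # 4 windows, 100 steps, 8 features
print(model(dummy).shape)                 # torch.Size([4, 2])
```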