Naturalistic Affective Expression Classification by a Multi-Stage Approach Based on Hidden Markov Models
In naturalistic behaviour, the affective states of a person
change at a rate much slower than the typical rate at which video or
audio is recorded (e.g., 25 fps for video). Hence, there is a high probability
that consecutive recorded instants of expression represent the same
affective content. In this paper, a multi-stage automatic affective expression
recognition system is proposed which uses Hidden Markov Models
(HMMs) to take into account this temporal relationship and finalize the
classification process. The hidden states of the HMMs are associated
with the levels of affective dimensions to convert the classification problem
into a best-path-finding problem in the HMM. The system was tested on
the audio data of the Audio/Visual Emotion Challenge (AVEC) datasets
showing performance significantly above that of a one-stage classification
system that does not take into account the temporal relationship, as well
as above the baseline set provided by this Challenge. Due to the generality
of the approach, this system could be applied to other types of
affective modalities.
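As an illustration of the best-path idea, the sketch below runs Viterbi decoding over a handful of quantized affect levels; the emission scores, the number of levels, and the sticky transition matrix are assumptions standing in for the frame-level classifier and the HMM parameters described in the paper.

```python
# Illustrative sketch (not the authors' code): Viterbi decoding over quantized
# affect levels, assuming a frame-level classifier has produced per-frame
# scores and that self-transitions are favoured because affect changes slowly.
import numpy as np

def viterbi(log_emissions, log_transitions, log_prior):
    """log_emissions: (T, K) frame scores; log_transitions: (K, K); log_prior: (K,)."""
    T, K = log_emissions.shape
    delta = np.full((T, K), -np.inf)
    backptr = np.zeros((T, K), dtype=int)
    delta[0] = log_prior + log_emissions[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_transitions   # (K, K): from -> to
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path  # one affect-level index per frame

# Example: 5 quantized levels of an affective dimension, sticky transitions.
K, stay = 5, 0.9
A = np.full((K, K), (1 - stay) / (K - 1)); np.fill_diagonal(A, stay)
rng = np.random.default_rng(0)
frame_scores = rng.random((100, K))                        # stand-in for classifier posteriors
frame_scores /= frame_scores.sum(axis=1, keepdims=True)
smoothed = viterbi(np.log(frame_scores), np.log(A), np.log(np.full(K, 1.0 / K)))
```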
Design of a fuzzy affective agent based on typicality degrees of physiological signals
Conference paper presented at the International Conference on Information Processing and Management, July 2014.
Physiology-based emotionally intelligent paradigms provide an opportunity to enhance human-computer interaction by continuously evoking and adapting to the user experience in real time. However, there are unresolved questions on how to model real-time emotionally intelligent applications through the mapping of physiological patterns to users' affective states.
In this study, we consider an approach for the design of a fuzzy affective agent based on the concept of typicality. We propose the use of typicality degrees of physiological patterns to construct fuzzy rules representing the continuous transitions of the user's affective states. The approach was tested on experimental data in which physiological measures were recorded from players involved in an action game to characterize various gaming experiences. We show that, in addition to exploiting the results to characterize users' affective states through typicality degrees, this approach is a systematic way to automatically define fuzzy rules from experimental data for an affective agent used in real-time continuous assessment of the user's affective states.
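A minimal sketch of how typicality degrees might be computed and turned into fuzzy rules. It assumes a prototype-style definition of typicality (internal resemblance times external dissimilarity) and Gaussian membership functions seeded from the most typical samples; the paper's exact formulation may differ.

```python
# Assumed formulation, not the paper's code: a sample's typicality for a class
# combines its similarity to that class with its distance from other classes;
# the most typical samples then seed one Gaussian membership function per class.
import numpy as np

def typicality_degrees(X, y, labels):
    """Return an (n_samples, n_classes) array of typicality degrees in [0, 1]."""
    sims = np.exp(-np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    T = np.zeros((len(X), len(labels)))
    for k, lab in enumerate(labels):
        inside, outside = (y == lab), (y != lab)
        resemblance = sims[:, inside].mean(axis=1)       # internal resemblance
        dissimilarity = 1.0 - sims[:, outside].mean(axis=1)  # external dissimilarity
        T[:, k] = resemblance * dissimilarity
    return T

def fuzzy_rules(X, y, labels, top=10):
    """One Gaussian membership function per affective state, centred on its most typical samples."""
    T = typicality_degrees(X, y, labels)
    rules = {}
    for k, lab in enumerate(labels):
        idx = np.argsort(T[y == lab, k])[-top:]          # most typical samples of this class
        proto = X[y == lab][idx]
        rules[lab] = (proto.mean(axis=0), proto.std(axis=0) + 1e-6)  # (centre, spread)
    return rules
```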
Towards emotional interaction: using movies to automatically learn users’ emotional states
The HCI community is actively seeking novel methodologies to gain insight into the user's experience during interaction with both the application and the content. We propose an emotion recognition engine capable of automatically recognizing a set of human emotional states using psychophysiological measures of the autonomic nervous system, including galvanic skin response, respiration, and heart rate. A novel pattern recognition system, based on discriminant analysis and support vector machine classifiers, is trained using movie scenes selected to induce emotions ranging from the positive to the negative valence dimension, including happiness, anger, disgust, sadness, and fear. In this paper we introduce the emotion recognition system and evaluate its accuracy by presenting the results of an experiment conducted with three physiological sensors.
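For context, a hedged sketch of the kind of pipeline the abstract describes, combining discriminant analysis with an SVM in scikit-learn; the feature layout, class set and data below are placeholders, not the authors' experimental protocol.

```python
# Sketch of a discriminant-analysis + SVM pipeline (illustrative data only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows: one film scene per participant; columns: statistics of GSR,
# respiration and heart rate (hypothetical feature layout).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 9))
y = rng.integers(0, 5, size=120)   # happiness, anger, disgust, sadness, fear

clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(n_components=4),  # at most n_classes - 1
                    SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```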
Multi-score Learning for Affect Recognition: the Case of Body Postures
An important challenge in building automatic affective state
recognition systems is establishing the ground truth. When the ground truth
is not available, observers are often used to label training and testing
sets. Unfortunately, inter-rater reliability between observers tends to
vary from fair to moderate when dealing with naturalistic expressions.
Nevertheless, the most common approach used is to label each expression
with the most frequent label assigned by the observers to that expression.
In this paper, we propose a general pattern recognition framework
that takes into account the variability between observers for automatic
affect recognition. This leads to what we term a multi-score learning
problem in which a single expression is associated with multiple values
representing the scores of each available emotion label. We also propose
several performance measurements and pattern recognition methods for
this framework, and report the experimental results obtained when testing
and comparing these methods on two affective posture datasets.
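One way to picture the multi-score formulation: each posture receives one score per emotion label (here simply the fraction of observers choosing that label), and a multi-output regressor predicts the whole score vector. The data, features and the distance-based measure below are illustrative assumptions, not the paper's exact methods.

```python
# Sketch of a multi-score setup: observer votes become per-label scores and a
# multi-output regressor is trained on the full score vectors.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

labels = ["anger", "fear", "happiness", "sadness"]

def observer_scores(votes, labels):
    """Fraction of observers choosing each label, e.g. 4/5 observers said 'fear'."""
    return np.array([[v.count(lab) / len(v) for lab in labels] for v in votes])

# Hypothetical data: posture features and five observers' labels per posture.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 12))
votes = [["fear", "fear", "anger", "fear", "sadness"],
         ["happiness"] * 5,
         ["anger", "anger", "fear", "anger", "anger"],
         ["sadness", "fear", "sadness", "sadness", "sadness"],
         ["happiness", "happiness", "anger", "happiness", "happiness"],
         ["fear", "sadness", "fear", "fear", "fear"]]
Y = observer_scores(votes, labels)

model = MultiOutputRegressor(SVR()).fit(X, Y)
pred = model.predict(X)
# One possible performance measure: mean Euclidean distance between the
# predicted and observer score vectors (smaller is better).
print(np.linalg.norm(pred - Y, axis=1).mean())
```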
A game-based corpus for analysing the interplay between game context and player experience
Recognizing players’ affective state while playing video games has been the focus of many recent research studies. In this paper we describe the process that has been followed to build a corpus based on game events and recorded video sessions from human players while playing Super Mario Bros. We present different types of information that have been extracted from game context, player preferences and perception of the game, as well as user features, automatically extracted from video recordings. We run a number of initial experiments to analyse players’ behavior while playing video games as a case study of the possible use of the corpus.
Improving QPF by blending techniques at the Meteorological Service of Catalonia
The current operational very short-term and short-term quantitative precipitation forecast (QPF) at the Meteorological Service of Catalonia (SMC) is made by three different methodologies: advection of the radar reflectivity field (ADV); identification, tracking and forecasting of convective structures (CST); and numerical weather prediction (NWP) models using observational data assimilation (radar, satellite, etc.). These precipitation forecasts have different characteristics, lead times and spatial resolutions. The objective of this study is to combine these methods in order to obtain a single, optimized QPF at each lead time. This combination (blending) of the radar forecasts (ADV and CST) and the precipitation forecast from the NWP model is carried out by means of different methodologies according to the prediction horizon. Firstly, in order to take advantage of the rainfall location and intensity from radar observations, a phase correction technique is applied to the NWP output to derive an additional corrected forecast (MCO). To select the best precipitation estimation in the first and second hour (t+1 h and t+2 h), the information from radar advection (ADV) and the corrected outputs from the model (MCO) are mixed using different weights, which vary dynamically according to indices that quantify the quality of these predictions. This procedure has the ability to integrate the skill in rainfall location and patterns given by the advection of the radar reflectivity field with the capacity of the NWP models to generate new precipitation areas. From the third hour (t+3 h), as radar-based forecasting generally has low skill, only the quantitative precipitation forecast from the model is used. This blending of different sources of prediction is verified for different types of episodes (convective, moderately convective and stratiform) in order to obtain a robust methodology for implementing it in an operational and dynamic way.
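A schematic of the lead-time-dependent blending logic described above; the weighting scheme, grid size and field names are simplified assumptions rather than the SMC's operational configuration.

```python
# Toy illustration of lead-time-dependent QPF blending (not operational code).
import numpy as np

def blend_qpf(adv, mco, nwp, lead_time_h, w_adv):
    """adv, mco, nwp: 2-D precipitation fields; w_adv: quality-based weight in [0, 1]."""
    if lead_time_h <= 2:
        # First two hours: dynamic mix of radar advection and the
        # phase-corrected model output, weighted by their recent skill.
        return w_adv * adv + (1.0 - w_adv) * mco
    # From t+3 h onward radar extrapolation has little skill: model only.
    return nwp

# Example on a 4x4 grid; the weight could come from a recent verification
# index of the advection forecast (assumed value here).
rng = np.random.default_rng(0)
adv, mco, nwp = (rng.gamma(2.0, 1.0, size=(4, 4)) for _ in range(3))
qpf_1h = blend_qpf(adv, mco, nwp, lead_time_h=1, w_adv=0.7)
qpf_6h = blend_qpf(adv, mco, nwp, lead_time_h=6, w_adv=0.7)
```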
Computing emotion awareness through galvanic skin response and facial electromyography
To improve human-computer interaction (HCI), computers need to recognize and respond properly to their user's emotional state. This is a fundamental application of affective computing, which relates to, arises from, or deliberately influences emotion. As a first step towards a system that recognizes the emotions of individual users, this research focuses on how emotional experiences are expressed in six parameters (i.e., mean, absolute deviation, standard deviation, variance, skewness, and kurtosis) of physiological measurements, not corrected for baseline, of the galvanic skin response (GSR) and of three electromyography signals: frontalis (EMG1), corrugator supercilii (EMG2), and zygomaticus major (EMG3). The 24 participants were asked to watch film scenes of 120 seconds, which they rated afterward. These ratings enabled us to distinguish four categories of emotion: negative, positive, mixed, and neutral. The skewness and kurtosis of the GSR, the skewness of EMG2, and four parameters of EMG3 discriminate between the four emotion categories, despite the coarse time windows that were used. Moreover, rapid processing of the signals proved possible, enabling tailored HCI facilitated by systems' emotional awareness.
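The six window statistics named in the abstract are straightforward to compute; a short sketch follows, with the sampling rate and window length assumed for the example.

```python
# The six signal statistics over one coarse time window of a raw (not
# baseline-corrected) GSR or EMG trace.
import numpy as np
from scipy import stats

def window_features(signal):
    """mean, absolute deviation, standard deviation, variance, skewness, kurtosis."""
    signal = np.asarray(signal, dtype=float)
    return {
        "mean": signal.mean(),
        "abs_dev": np.abs(signal - signal.mean()).mean(),
        "std": signal.std(ddof=1),
        "var": signal.var(ddof=1),
        "skewness": stats.skew(signal),
        "kurtosis": stats.kurtosis(signal),
    }

# e.g. a 120 s film scene sampled at 32 Hz (hypothetical sampling rate)
gsr_window = np.random.default_rng(0).normal(size=120 * 32)
print(window_features(gsr_window))
```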
ERiSA: building emotionally realistic social game-agents companions
We propose an integrated framework for social and emotional game-agents to enhance their believability and quality of interaction, in particular by allowing an agent to forge social relations and make appropriate use of social signals. The framework is modular, including sensing, interpretation, behaviour generation, and game components. We propose a generic formulation of action selection rules based on observed social and emotional signals, the agent's personality, and the social relation between agent and player. The rules are formulated such that their variables can easily be obtained from real data. We illustrate and evaluate our framework using a simple social game called The Smile Game.
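A hypothetical illustration of what such an action-selection rule could look like once its variables are filled in from data; the variable names (smile intensity, friendship level, extraversion), thresholds and actions are invented for the example, not taken from the framework.

```python
# Invented example of a rule mapping social signals, personality and the
# agent-player relation to an action; values and thresholds are placeholders.
from dataclasses import dataclass

@dataclass
class AgentState:
    extraversion: float   # personality trait in [0, 1]
    friendship: float     # social relation with the player in [0, 1]

def select_action(smile_intensity: float, agent: AgentState) -> str:
    """Map an observed social signal plus agent traits to a game action."""
    if smile_intensity > 0.6 and agent.friendship > 0.5:
        return "smile_back"
    if smile_intensity > 0.6 and agent.extraversion > 0.7:
        return "initiate_greeting"
    return "idle"

print(select_action(0.8, AgentState(extraversion=0.4, friendship=0.7)))
```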
Affective Man-Machine Interface: Unveiling human emotions through biosignals
As has been known for centuries, humans exhibit an electrical profile. This profile is altered through various psychological and physiological processes, which can be measured through biosignals, e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such an MMI requires the correct classification of biosignals into emotion classes. This chapter starts with an introduction to biosignals for emotion detection. Next, a state-of-the-art review of automatic emotion classification is presented. Moreover, guidelines are presented for affective MMI. Subsequently, a study is presented that explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 21 people. A range of techniques is tested, resulting in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need for personal profiles. Among various other directives for future research, the results emphasize the need for parallel processing of multiple biosignals.
Tune in to your emotions: a robust personalized affective music player
The emotional power of music is exploited in a personalized affective music player (AMP) that selects music for mood enhancement. A biosignal approach is used to measure listeners' personal emotional reactions to their own music as input for affective user models. Regression and kernel density estimation are applied to model the physiological changes the music elicits. Using these models, personalized music selections based on an affective goal state can be made. The AMP was validated in real-world trials over the course of several weeks. Results show that our models can cope with noisy situations and handle large inter-individual differences in the music domain. The AMP augments music listening by enabling automated affect guidance. Our approach provides valuable insights for affective computing and user modeling, for which the AMP is a suitable carrier application.
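A rough sketch of the selection idea, using kernel density estimates of per-song physiological responses and an affective goal state; the data layout and the "relaxation index" variable are assumptions, not the AMP's actual models.

```python
# Illustrative selection step: fit a KDE per song over the physiological
# changes it elicited, then pick the song whose most likely effect is
# closest to the affective goal state.
import numpy as np
from scipy.stats import gaussian_kde

def song_effect_models(responses_per_song):
    """Fit one KDE per song over the response changes it elicited."""
    return {song: gaussian_kde(np.asarray(r)) for song, r in responses_per_song.items()}

def pick_song(models, goal_state, grid=np.linspace(-3, 3, 601)):
    """Choose the song whose most likely elicited change is closest to the goal."""
    best, best_dist = None, np.inf
    for song, kde in models.items():
        mode = grid[np.argmax(kde(grid))]
        if abs(mode - goal_state) < best_dist:
            best, best_dist = song, abs(mode - goal_state)
    return best

# Hypothetical listening history: observed changes in a relaxation index per song.
history = {"song_a": [0.2, 0.4, 0.3, 0.5], "song_b": [-0.1, 0.0, -0.2, 0.1]}
print(pick_song(song_effect_models(history), goal_state=0.4))  # expected: "song_a"
```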