
    Computing emotion awareness through galvanic skin response and facial electromyography

    To improve human-computer interaction (HCI), computers need to recognize and respond properly to their user's emotional state. This is a fundamental application of affective computing, which relates to, arises from, or deliberately influences emotion. As a first step toward a system that recognizes the emotions of individual users, this research focuses on how emotional experiences are expressed in six parameters (i.e., mean, absolute deviation, standard deviation, variance, skewness, and kurtosis) of non-baseline-corrected physiological measurements of the galvanic skin response (GSR) and of three electromyography signals: frontalis (EMG1), corrugator supercilii (EMG2), and zygomaticus major (EMG3). The 24 participants were asked to watch film scenes of 120 seconds, which they rated afterward. These ratings enabled us to distinguish four categories of emotions: negative, positive, mixed, and neutral. The skewness and kurtosis of the GSR, the skewness of EMG2, and four parameters of EMG3 discriminated between the four emotion categories, despite the coarse time windows used. Moreover, rapid processing of the signals proved possible. This enables tailored HCI facilitated by systems' emotional awareness.
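The six statistical parameters named in the abstract are standard descriptive statistics over a signal window. A minimal illustrative sketch (not the paper's code; the sampling rate is an assumption) of extracting them from one 120-second window:

```python
# Sketch of the six per-window features from the abstract: mean,
# absolute deviation, standard deviation, variance, skewness, kurtosis,
# computed on a raw (non-baseline-corrected) physiological signal.
import numpy as np
from scipy import stats

def window_features(signal):
    """Return the six statistical parameters of one signal window."""
    signal = np.asarray(signal, dtype=float)
    return {
        "mean": signal.mean(),
        "abs_deviation": np.mean(np.abs(signal - signal.mean())),
        "std": signal.std(),
        "variance": signal.var(),
        "skewness": stats.skew(signal),
        "kurtosis": stats.kurtosis(signal),
    }

# One simulated 120-second GSR window; a 32 Hz sampling rate is assumed
# here purely for illustration (the paper does not state it).
gsr_window = np.random.default_rng(0).normal(size=120 * 32)
features = window_features(gsr_window)
```

The same function would be applied to the GSR and each of the three EMG channels, yielding 24 features per window.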

    Endowing Spoken Language Dialogue System with Emotional Intelligence


    Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions, each with the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and they motivate future work that further explores the capability of fNIRS for measuring user experience during human-computer interactions.

    Social Attitude Towards A Conversational Character


    Towards Advanced Learner Modeling: Discussions on Quasi Real-time Adaptation with Physiological Data


    Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema

    In this paper, a psychologically-inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions become distinguishable from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki's model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a k-nearest neighbor classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with a linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is first carried out with respect to the classifiers' error rates and then to evaluate the information expressed by the classifiers' confusion matrices. © Springer Science+Business Media, LLC 2011
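The cascade idea is that each stage makes one binary decision, so confusable emotion pairs get their own dedicated classifier. A toy sketch of such a cascade with linear-kernel SVMs (the four placeholder classes, random features, and the particular split are assumptions, not the paper's schema):

```python
# Toy binary cascade: stage 1 splits the emotion set in two,
# stages 2a/2b then separate the remaining confusable pairs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))        # placeholder acoustic feature vectors
y = rng.integers(0, 4, size=200)     # four placeholder emotion labels 0..3

# Stage 1: {0, 1} vs {2, 3}; Stage 2a: 0 vs 1; Stage 2b: 2 vs 3.
low_mask = np.isin(y, [0, 1])
stage1 = SVC(kernel="linear").fit(X, low_mask)
stage2a = SVC(kernel="linear").fit(X[low_mask], y[low_mask])
stage2b = SVC(kernel="linear").fit(X[~low_mask], y[~low_mask])

def predict(x):
    """Route one sample through the cascade of binary decisions."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0]:
        return stage2a.predict(x)[0]
    return stage2b.predict(x)[0]

pred = predict(X[0])
```

With real data, each stage would use the feature subset selected for that particular binary decision, which is where the per-pair discriminability comes from.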

    MARLUI: Multi-Agent Reinforcement Learning for Adaptive UIs

    Adaptive user interfaces (UIs) automatically change an interface to better support users' tasks. Recently, machine learning techniques have enabled the transition to more powerful and complex adaptive UIs. However, a core challenge for adaptive user interfaces is the reliance on high-quality user data that has to be collected offline for each task. We formulate UI adaptation as a multi-agent reinforcement learning problem to overcome this challenge. In our formulation, a user agent mimics a real user and learns to interact with a UI. Simultaneously, an interface agent learns UI adaptations to maximize the user agent's performance. The interface agent learns the task structure from the user agent's behavior and, based on that, can support the user agent in completing its task. Our method produces adaptation policies that are learned in simulation only and, therefore, does not need real user data. Our experiments show that the learned policies generalize to real users and achieve performance on par with data-driven supervised learning baselines.

    Effect of Pedagogical Actions on the Learner's Emotional State in an Intelligent Tutoring System

    Thesis digitized by the Document Management and Archives Division of the Université de Montréal

    Machine Learning Methods for functional Near Infrared Spectroscopy

    Identification of user state is of interest in a wide range of disciplines that fall under the umbrella of human-machine interaction. Functional near-infrared spectroscopy (fNIRS) is a relatively new technique that enables inference of brain activity by non-invasively pulsing infrared light into the brain. fNIRS is particularly useful because it has better spatial resolution than electroencephalography (EEG), the modality most commonly used in human-computer interaction studies under ecologically valid settings; yet this key advantage is underutilized in the current fNIRS literature. We propose machine learning methods that capture this spatial nature of human brain activity using a novel preprocessing method based on 'Region of Interest' feature extraction. Experiments show that this method outperforms the F1 score previously achieved in classifying 'low' vs. 'high' valence states of a user. We further our analysis by applying a convolutional neural network (CNN) to the fNIRS data, thus preserving the spatial structure of the data and treating it as a series of images to be classified. Going further, we use a combination of CNN and long short-term memory (LSTM) networks to capture the spatial and temporal behavior of the fNIRS data, treating it like a video classification problem. We show that this method improves upon the accuracy previously obtained by valence classification methods using EEG or fNIRS devices. Finally, we apply the above model to classifying combined task load and performance in an across-subject, across-task scenario of a human-machine teaming environment, in order to achieve optimal productivity of the system.
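The 'Region of Interest' preprocessing can be sketched as averaging the fNIRS channels that sit over the same cortical region, so the spatial layout survives into the features. A minimal illustration (the channel count, grouping, and region names are invented, not the thesis's montage):

```python
# Sketch of Region-of-Interest feature extraction: collapse the
# per-channel fNIRS time series into one averaged series per region,
# preserving coarse spatial structure for a downstream classifier.
import numpy as np

# 16 channels x 1000 time steps of simulated hemoglobin readings.
rng = np.random.default_rng(2)
channels = rng.normal(size=(16, 1000))

# Hypothetical grouping of channels into four regions of interest.
rois = {
    "left_lateral": [0, 1, 2, 3],
    "left_medial": [4, 5, 6, 7],
    "right_medial": [8, 9, 10, 11],
    "right_lateral": [12, 13, 14, 15],
}

# One averaged time series per region: shape (4, 1000).
roi_signals = np.stack([channels[idx].mean(axis=0) for idx in rois.values()])
```

For the CNN variant described above, the channels would instead be arranged on a 2-D grid matching the sensor layout, and for the CNN+LSTM variant each time step's grid becomes one "frame" of the video-like input.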