    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychology are reflected in their behaviour and physiology, so recognising such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties faced by traditional trained classifiers. In addition, game-specific challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.
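
    As an illustration of the fusion idea in this abstract, the sketch below shows one simple way a multimodal affect classifier can fall back gracefully when a modality is partial or unavailable. It is a minimal Python example, not the paper's system; the function name, the two modalities (facial and physiological) and the fixed weighting are assumptions.

        # Illustrative sketch (not the paper's system): late fusion of two
        # per-modality affect classifiers, with a graceful fallback when one
        # modality is unavailable.
        import numpy as np

        def fuse_predictions(p_face, p_physio, w_face=0.5):
            # Weighted average of per-class probabilities; either input may be None.
            if p_face is None and p_physio is None:
                raise ValueError("at least one modality is required")
            if p_face is None:
                return np.asarray(p_physio)
            if p_physio is None:
                return np.asarray(p_face)
            return w_face * np.asarray(p_face) + (1.0 - w_face) * np.asarray(p_physio)

        # Example: the facial model is uncertain, the physiological model is not.
        print(fuse_predictions([0.5, 0.5], [0.9, 0.1]).argmax())  # -> 0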

    Automatic Discrimination of Laughter Using Distributed sEMG

    Laughter is a very interesting non-verbal human vocalization. It is classified as a semi-voluntary behavior despite being a direct form of social interaction, and can be elicited by a variety of very different stimuli, both cognitive and physical. Automatic laughter detection, analysis and classification will boost progress in affective computing, leading to the development of more natural human-machine communication interfaces. Surface electromyography (sEMG) on abdominal muscles or invasive EMG on the larynx shows potential in this direction, but such EMG-based sensing systems cannot be used in ecological settings due to their size, lack of reusability and uncomfortable setup. For this reason, they cannot easily be used for natural detection and measurement of a volatile social behavior like laughter in a variety of situations. We propose the use of miniaturized, wireless, dry-electrode sEMG sensors on the neck for the detection and analysis of laughter. Although this solution cannot precisely measure the activation of specific larynx muscles, it can detect different EMG patterns related to larynx function. In addition, integrating sEMG analysis into a compact multisensory system positioned on the neck would improve the robustness of the whole sensing system, enabling synchronized measurement of different characteristics of laughter, such as vocal production, head movement or facial expression, while being less intrusive, as the neck is normally more accessible than the abdominal muscles. In this paper, we report the laughter discrimination rates obtained with our system under different conditions.
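
    To make the sensing idea concrete, the following Python sketch shows a common generic way to turn a raw sEMG channel into a smoothed activity envelope and flag candidate bursts. It is an assumed pipeline (band-pass, rectify, RMS-smooth, threshold), not the authors' implementation, and the cut-off frequencies and threshold factor are illustrative.

        # Assumed generic sEMG pipeline, not the authors' implementation:
        # band-pass filter, RMS-smooth, then threshold candidate bursts.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def semg_envelope(x, fs, band=(20.0, 450.0), win_s=0.1):
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, x)            # suppress drift and noise
            win = int(win_s * fs)
            kernel = np.ones(win) / win
            # Moving RMS (squaring makes rectification implicit) gives the envelope.
            return np.sqrt(np.convolve(filtered ** 2, kernel, mode="same"))

        def candidate_bursts(envelope, k=3.0):
            # Flag samples well above the baseline as laughter candidates.
            threshold = np.median(envelope) + k * envelope.std()
            return envelope > threshold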

    Towards an Effective Arousal Detection System for Virtual Reality

    Immersive technologies offer the potential to drive engagement and create exciting experiences. A better understanding of the emotional state of the user within immersive experiences can assist healthcare interventions and the evaluation of entertainment technologies. This work describes a feasibility study exploring the effect of affective video content on heart-rate recordings for Virtual Reality applications. A low-cost reflected-mode photoplethysmographic sensor and an electrocardiographic chest-belt sensor were attached to a novel non-invasive wearable interface specially designed for this study. Eleven participants' responses were analysed, and heart-rate metrics were used for arousal classification. The reported results demonstrate that the fusion of physiological signals yields a significant performance improvement, and hence show the feasibility of our new approach.
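
    As a minimal sketch of the kind of analysis described (assumed, not the study's exact pipeline), the Python below derives two standard heart-rate metrics from inter-beat (RR) intervals and fuses the two sensors at the feature level; the variable names and sample values are hypothetical.

        # Minimal sketch (assumed analysis): heart-rate metrics from RR
        # intervals, fused across the two sensors at the feature level.
        import numpy as np

        def hr_metrics(rr_ms):
            # Mean heart rate (bpm) and RMSSD (ms) from RR intervals in milliseconds.
            rr = np.asarray(rr_ms, dtype=float)
            mean_hr = 60000.0 / rr.mean()
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
            return np.array([mean_hr, rmssd])

        ppg_rr = [820, 810, 835, 790]  # hypothetical PPG-derived RR intervals
        ecg_rr = [815, 808, 830, 795]  # hypothetical ECG-derived RR intervals

        # Feature-level fusion: concatenate per-sensor metrics for the classifier.
        features = np.concatenate([hr_metrics(ppg_rr), hr_metrics(ecg_rr)])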

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    Impact of annotation modality on label quality and model performance in the automatic assessment of laughter in-the-wild

    Laughter is considered one of the most overt signals of joy. It is well recognized as a multimodal phenomenon but is most commonly detected by sensing the sound of laughter. It is unclear how perception and annotation of laughter differ when it is annotated from other modalities, such as video, via the body movements of laughter. In this paper we take a first step in this direction by asking if, and how well, laughter can be annotated when only audio, only video (containing full body movement information), or audiovisual modalities are available to annotators. We ask whether annotations of laughter are congruent across modalities, and compare the effect that labeling modality has on machine learning model performance. We compare annotations and models for laughter detection, intensity estimation, and segmentation, three tasks common in previous studies of laughter. Our analysis of more than 4000 annotations acquired from 48 annotators revealed evidence of incongruity between modalities in the perception of laughter and its intensity. Further analysis of annotations against consolidated audiovisual reference annotations revealed that recall was lower on average for video than for the audio condition, but tended to increase with the intensity of the laughter samples. Our machine learning experiments compared the performance of state-of-the-art unimodal (audio-based, video-based and acceleration-based) and multimodal models for different combinations of input modalities, training label modality, and testing label modality. Models with video and acceleration inputs performed similarly regardless of training label modality, suggesting that it may be entirely appropriate to train models for laughter detection from body movements using video-acquired labels, despite their lower inter-rater agreement.
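
    The train/test design implied by this abstract can be pictured as a small grid over label modalities. The sketch below is a hypothetical rendering of that grid in Python; the loader and model interfaces (scikit-learn-style fit/score) are assumptions, not the paper's code.

        # Hypothetical sketch of the implied experiment grid: every pairing of
        # training-label modality and testing-label modality, for a fixed input.
        from itertools import product

        LABEL_MODALITIES = ["audio", "video", "audiovisual"]

        def evaluate_grid(model_factory, load_split):
            scores = {}
            for train_mod, test_mod in product(LABEL_MODALITIES, repeat=2):
                X_tr, y_tr = load_split("train", label_modality=train_mod)
                X_te, y_te = load_split("test", label_modality=test_mod)
                model = model_factory().fit(X_tr, y_tr)     # sklearn-style API
                scores[(train_mod, test_mod)] = model.score(X_te, y_te)
            return scores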

    Laughter as a controller in a stress buster game


    Toward Emotion Recognition From Physiological Signals in the Wild: Approaching the Methodological Issues in Real-Life Data Collection

    Emotion, mood, and stress recognition (EMSR) has been studied in laboratory settings for decades. In particular, physiological signals are widely used to detect and classify affective states in lab conditions. However, physiological reactions to emotional stimuli have been found to differ between laboratory and natural settings. Thanks to recent technological progress (e.g., in wearables), the creation of EMSR systems for a large number of consumers during their everyday activities is increasingly possible. Therefore, datasets created in the wild are needed to ensure the validity and exploitability of EMSR models for real-life applications. In this paper, we first present common techniques used in laboratory settings to induce emotions for the purpose of physiological dataset creation. Next, the advantages and challenges of data collection in the wild are discussed. To assess the applicability of existing datasets to real-life applications, we propose a set of categories to guide and compare, at a glance, the different methodologies used by researchers to collect such data. For this purpose, we also introduce a visual tool called the Graphical Assessment of Real-life Application-Focused Emotional Dataset (GARAFED). In the last part of the paper, we apply the proposed tool to compare existing physiological datasets for EMSR in the wild and to show possible improvements and future directions of research. We intend this paper and GARAFED to serve as guidelines for researchers and developers who aim to collect affect-related data for real-life EMSR-based applications.

    Opportunistic and Context-aware Affect Sensing on Smartphones: The Concept, Challenges and Opportunities

    Opportunistic affect sensing offers unprecedented potential for capturing spontaneous affect ubiquitously, obviating the biases inherent in laboratory settings. Facial expression and voice are two major affective displays; however, most affect sensing systems on smartphones avoid them because of their extensive power requirements. Encouragingly, with the recent advent of low-power DSP (Digital Signal Processing) co-processors and GPU (Graphics Processing Unit) technology, audio and video sensing are becoming more feasible. To properly evaluate opportunistically captured facial expression and voice, contextual information about the dynamic audio-visual stimuli needs to be inferred. This paper discusses recent advances in affect sensing on smartphones and identifies the key barriers to, and potential solutions for, implementing opportunistic and context-aware affect sensing on smartphone platforms.
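
    The power-saving strategy sketched in this abstract can be summarised as a gated sensing loop: a cheap context check runs continuously, and the expensive audio/video pipeline wakes only when it is worthwhile. The Python below is purely conceptual; every object and attribute name is an assumption, not a real smartphone API.

        # Conceptual sketch only; the objects and attributes are assumptions,
        # not a real smartphone API. A cheap context check gates the
        # power-hungry camera/microphone pipeline.
        import time

        def sensing_loop(context, camera, mic, classify, period_s=5.0):
            while True:
                ctx = context.read()            # low-power: motion, screen state
                if ctx.face_likely_visible:     # wake expensive modalities only now
                    frame = camera.capture()
                    audio = mic.record(seconds=2)
                    yield ctx.timestamp, classify(frame, audio, ctx)
                time.sleep(period_s)            # duty cycling limits power draw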