
    A Survey on Emotion Recognition for Human Robot Interaction

    With recent developments in technology and advances in artificial intelligence and machine learning techniques, it has become possible for robots to acquire and display emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, enabling it to interact more naturally with its human counterpart in different environments. This article presents a survey on emotion recognition for HRI systems. The survey has two objectives. First, it discusses the main challenges researchers face when building emotional HRI systems. Second, it identifies the sensing channels that can be used to detect emotions and reviews recent research published within each channel, along with the methodologies used and the results achieved. Finally, some existing emotion recognition issues and recommendations for future work are outlined.

    Automatic Emotion Recognition From Multimodal Information Fusion Using Deep Learning Approaches

    In recent years, advances in computational and information systems have contributed to the growth of research areas including affective computing, which aims to identify the emotional states of humans in order to develop different interaction and computational systems. To do so, emotions have been characterized through specific kinds of data, including audio, facial expressions, and physiological signals, among others. However, the natural response of these data to a single emotional event suggests a correlation between modalities when expression reaches its peak. This observation suggests that processing multiple data modalities (multimodal information fusion) could provide more learning patterns with which to perform emotion recognition. Meanwhile, Deep Learning strategies have gained interest in the research community since 2012, as they are adaptive models that have shown promising results in the analysis of many kinds of data, such as images, signals, and other temporal data. This work aims to determine whether information fusion using Deep Neural Network architectures improves the recognition of emotions compared with unimodal models. Thus, a new information fusion model based on Deep Neural Network architectures is proposed to recognize emotional states from audio-visual information. The proposal takes advantage of the adaptiveness of Deep Learning models to extract deep features according to the input data type. The proposed approach was developed in three stages. In the first stage, characterization and preprocessing algorithms (the INTERSPEECH 2010 Paralinguistic Challenge features for audio and Viola-Jones face detection for video) were used for dimensionality reduction and extraction of the main information from raw data. Then, two models based on unimodal analysis were developed to process audio and video separately.
These models were then used to develop two information fusion strategies: a decision-level fusion model and a feature-level fusion model, respectively. All models were evaluated on the eNTERFACE database, a well-known public audio-visual emotional dataset, which allows results to be compared with state-of-the-art methods. Experimental results showed that Deep Learning approaches that fuse the audio and visual information outperform the unimodal strategies.
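The two fusion strategies the abstract contrasts can be sketched in a few lines. This is a minimal illustration, not the thesis's actual architecture: the feature dimensions, the random stand-in "deep features", and the single-layer softmax classifiers are all assumptions made for the sketch; in the real system these would be learned unimodal DNN embeddings and trained classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax, numerically stabilised by subtracting the row max."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for unimodal deep features (dimensions are illustrative).
audio_feat = rng.normal(size=(4, 128))   # 4 samples, 128-dim audio embedding
video_feat = rng.normal(size=(4, 256))   # 4 samples, 256-dim video embedding
n_classes = 6                            # six basic emotions, as in eNTERFACE

# Hypothetical classifier weights (these would be learned in practice).
W_audio = rng.normal(size=(128, n_classes))
W_video = rng.normal(size=(256, n_classes))
W_joint = rng.normal(size=(128 + 256, n_classes))

def decision_fusion(a, v):
    """Decision-level fusion: classify each modality separately,
    then average the per-modality class probabilities."""
    p_a = softmax(a @ W_audio)
    p_v = softmax(v @ W_video)
    return (p_a + p_v) / 2

def feature_fusion(a, v):
    """Feature-level fusion: concatenate the unimodal embeddings
    into one joint vector, then classify once."""
    joint = np.concatenate([a, v], axis=-1)
    return softmax(joint @ W_joint)

p_dec = decision_fusion(audio_feat, video_feat)
p_feat = feature_fusion(audio_feat, video_feat)
```

The key design difference: decision fusion keeps the modalities independent until the very end (robust when one modality is missing or noisy), while feature fusion lets the joint classifier learn cross-modal correlations, which is what the abstract's peak-expression argument motivates.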

    Experience-driven MAR games: Personalising Mobile Augmented Reality games using Player Models

    PhD thesis. We are witnessing an unprecedented growth of Mobile Augmented Reality (MAR) technologies, with MAR games being one of the main research areas. While this field is still in its early days, researchers have shown the physical health benefits of playing these types of games. Computational models have been used in traditional (non-AR) digital games to predict player experience (PX). These models give designers insights into PX and can also be used within games for real-time adaptation or personalised content generation. Following these findings, this thesis investigates the potential of creating models that use movement data and game metrics to predict PX. An initial pilot study is conducted to evaluate the use of movement data and game metrics to predict players' emotional preferences between different levels of an exploration-based MAR game. Results indicate that emotional preferences regarding frustration (≈ 93%) and challenge (≈ 93%) can be predicted to a reliable degree of accuracy. To determine whether these techniques can be applied to serious games for health, an AR exergame is developed for experiments two, three and four of this thesis. The second and third experiments aim to predict two key experiential constructs, player competence and immersion, that are important to PX. These experiments further validate the use of movement data and game metrics to model different aspects of PX in MAR games. Results suggest that players' competence (≈ 73%) and sense of mastery (≈ 81%) can be predicted to a reasonable degree of accuracy. For the final experiment, this mastery model is used to create a dynamic difficulty adaptation (DDA) system. The adaptive exergame is then evaluated against a non-adaptive variant of the same game. Results indicate that the adaptive game gives players a greater sense of confidence during gameplay and that the adaptation mechanics are more effective for players who do not engage in regular physical activity.
Across the four studies presented, this thesis is the first known research to investigate using movement data and game metrics to model PX for DDA in MAR games, and it makes the following novel contributions: i) movement data and game metrics can predict players' sense of mastery or competence more reliably than the other aspects of PX tested; ii) mastery-based game adaptation makes players feel greater confidence during gameplay; and iii) mastery-based game adaptation is more effective for players who do not engage in regular physical activity. This work also presents a new methodology for PX prediction in MAR games and a novel adaptation engine driven by player mastery. In summary, this thesis proposes that PX modelling can be successfully applied to MAR games, especially for DDA, which results in a highly personalised PX and shows potential as a tool for increasing physical activity.
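The mastery-driven DDA loop described above can be sketched as a simple control rule: predict mastery from in-game metrics, then raise, lower, or hold the difficulty level. Everything here is a hypothetical stand-in for illustration, not the thesis's actual model: the metric names (`hit_rate`, `avg_reaction_s`), the weights inside `predict_mastery`, and the thresholds are all assumptions.

```python
import math

def predict_mastery(metrics):
    """Hypothetical stand-in for a learned mastery model: a weighted
    combination of game metrics squashed into [0, 1] with a sigmoid.
    In the thesis this role is played by a trained predictor over
    movement data and game metrics."""
    score = (0.8 * metrics["hit_rate"]
             + 0.2 * (1.0 - metrics["avg_reaction_s"] / 2.0))
    return 1 / (1 + math.exp(-6 * (score - 0.5)))

def adapt_difficulty(level, metrics, low=0.4, high=0.7):
    """Raise the difficulty level when predicted mastery is high,
    lower it when mastery is low, and hold it steady in between."""
    mastery = predict_mastery(metrics)
    if mastery > high:
        return level + 1
    if mastery < low:
        return max(1, level - 1)   # never drop below the easiest level
    return level

# A skilled player (high hit rate, fast reactions) moves up a level;
# a struggling player moves down.
skilled = {"hit_rate": 0.9, "avg_reaction_s": 0.5}
struggling = {"hit_rate": 0.2, "avg_reaction_s": 1.8}
```

The dead band between `low` and `high` is a common DDA design choice: it prevents the difficulty from oscillating every frame when the player's predicted mastery hovers near a single threshold.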