4,724 research outputs found

    I Probe, Therefore I Am: Designing a Virtual Journalist with Human Emotions

    Get PDF
    By utilizing different communication channels, such as verbal language, gestures, or facial expressions, virtually embodied interactive humans hold a unique potential to bridge the gap between human-computer interaction and actual interhuman communication. The use of virtual humans is consequently becoming increasingly popular in a wide range of areas where such natural communication might be beneficial, including entertainment, education, mental health research, and beyond. Behind this development lies a series of technological advances in a multitude of disciplines, most notably natural language processing, computer vision, and speech synthesis. In this paper we discuss the Virtual Human Journalist, a project employing a number of novel solutions from these disciplines with the goal of demonstrating their viability by producing a humanoid conversational agent capable of naturally eliciting and reacting to information from a human user. A set of qualitative and quantitative evaluation sessions demonstrated the technical feasibility of the system whilst uncovering a number of deficits in its capacity to engage users in a way that would be perceived as natural and emotionally engaging. We argue that naturalness should not always be seen as a desirable goal and suggest that deliberately suppressing the naturalness of virtual human interactions, for instance by altering the agent's personality cues, might in some cases yield more desirable results. Comment: eNTERFACE'16 proceedings
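
    To make the elicit-and-react cycle concrete, below is a minimal, hypothetical sketch of the loop such an agent runs; the question list, keyword sentiment heuristic, and all names are illustrative assumptions, not the project's actual pipeline.

```python
import random

class VirtualJournalist:
    """Toy interview agent: probe with a question, react to the answer."""

    QUESTIONS = [
        "What brings you here today?",
        "How did that make you feel?",
        "Can you tell me more about that?",
    ]

    def elicit(self) -> str:
        # A real system would rank questions against the dialogue state
        # instead of sampling uniformly.
        return random.choice(self.QUESTIONS)

    def react(self, user_utterance: str) -> str:
        # Crude keyword matching standing in for a full NLP sentiment module.
        negative = {"sad", "angry", "worried", "bad"}
        if any(word in user_utterance.lower() for word in negative):
            return "<lean forward, concerned expression>"
        return "<nod, slight smile>"

if __name__ == "__main__":
    agent = VirtualJournalist()
    print(agent.elicit())
    print(agent.react("I felt quite sad about it"))
```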

    Affect and believability in game characters: a review of the use of affective computing in games

    Get PDF
    Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module in NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.
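
    As a rough illustration of the modular layering the review points toward, here is a minimal sketch of an affect module sitting beside an NPC's action selection so that emotional state can bias behaviour; the module split, appraisal rules, and all names are illustrative, not the MARPO specification itself.

```python
from dataclasses import dataclass, field

@dataclass
class AffectModule:
    valence: float = 0.0   # -1 (negative) .. +1 (positive)
    arousal: float = 0.0   #  0 (calm)     ..  1 (excited)

    def appraise(self, event: str) -> None:
        # Toy appraisal: shift affect according to the event type.
        if event == "player_helped":
            self.valence = min(1.0, self.valence + 0.3)
        elif event == "player_attacked":
            self.valence = max(-1.0, self.valence - 0.5)
            self.arousal = min(1.0, self.arousal + 0.4)

@dataclass
class NPC:
    affect: AffectModule = field(default_factory=AffectModule)

    def choose_action(self) -> str:
        # Competence-level behaviour biased by the affective state.
        if self.affect.valence < -0.3 and self.affect.arousal > 0.3:
            return "flee_or_fight"
        if self.affect.valence > 0.3:
            return "offer_quest"
        return "idle_patrol"

npc = NPC()
npc.affect.appraise("player_attacked")
print(npc.choose_action())  # -> "flee_or_fight"
```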

    Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

    Get PDF
    Social robots are now part of human society, destined for schools, hospitals, and homes to perform a variety of tasks. To engage their human users, social robots must be equipped with the essential social skill of facial expression communication. Yet even state-of-the-art social robots are limited in this ability because they often rely on a restricted set of facial expressions derived from theory, with well-known limitations such as a lack of naturalistic dynamics. With no agreed methodology to objectively engineer a broader variance of more psychologically impactful facial expressions into social robots' repertoires, human-robot interactions remain restricted. Here, we address this challenge with new methodologies that can reverse-engineer dynamic facial expressions into a social robot head. Our data-driven, user-centered approach, which combines human perception with psychophysical methods, produced highly recognizable and human-like dynamic facial expressions of the six classic emotions that generally outperformed state-of-the-art social robot facial expressions. Our data demonstrate the feasibility of applying our method to social robotics and highlight the benefits of a data-driven approach that puts human users at the center of deriving facial expressions for social robots. We also discuss future work to reverse-engineer a wider range of socially relevant facial expressions, including conversational messages (e.g., interest, confusion) and personality traits (e.g., trustworthiness, attractiveness). Together, our results highlight the key role that psychology must continue to play in the design of social robots.
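
    To ground the idea of dynamic (rather than static) expressions, below is a small sketch of playing back a facial expression on a robot head: each action unit (AU) has an activation curve over time, sampled and mapped to actuator targets. The AU set, curve shapes, and servo mapping are illustrative assumptions, not the paper's actual data or method.

```python
import math

def au_curve(t: float, onset: float, peak: float) -> float:
    """Smooth rise from 0 to full activation between onset and peak (seconds)."""
    if t <= onset:
        return 0.0
    if t >= peak:
        return 1.0
    x = (t - onset) / (peak - onset)
    return 0.5 - 0.5 * math.cos(math.pi * x)  # ease-in-out

# A "happiness-like" expression: AU6 (cheek raiser) + AU12 (lip corner puller),
# each with its own (onset, peak) timing to give the expression dynamics.
EXPRESSION = {"AU6": (0.1, 0.6), "AU12": (0.0, 0.5)}

def servo_targets(t: float) -> dict:
    # Map AU activations (0..1) to hypothetical servo angles (degrees).
    gains = {"AU6": 25.0, "AU12": 40.0}
    return {au: round(gains[au] * au_curve(t, *timing), 1)
            for au, timing in EXPRESSION.items()}

for step in range(4):
    t = step * 0.2
    print(f"t={t:.1f}s -> {servo_targets(t)}")
```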

    Agents for educational games and simulations

    Get PDF
    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    ABC-EBDI: A cognitive-affective framework to support the modeling of believable intelligent agents.

    Get PDF
    The Advanced Interfaces Research Group (AffectiveLab) is a group recognized by the Government of Aragón (T60-20R) whose activity falls within the area of Human-Computer Interaction (HCI). In recent years its research has focused on four main topics: natural interaction, affective computing, accessibility, and interfaces based on intelligent agents, the last of which frames this doctoral thesis. More specifically, this thesis was carried out within the national research projects JUGUEMOS (TIN2015-67149-C3-1R) and PERGAMEX (RTI2018-096986-B-C31). One of the group's research lines centers on the development of cognitive-affective architectures to support the affective modeling of intelligent agents. The AffectiveLab has solid experience in the use of embodied interface agents that exhibit bodily and facial affective expressions (Baldassarri et al., 2008), and in recent years it has focused on modeling the behaviour of intelligent agents (Pérez et al., 2017). The definition of an intelligent agent is a controversial topic, but it can be described as an autonomous entity that receives dynamic information from the environment through sensors and acts on the environment through actuators, exhibiting goal-directed behaviour (Russell et al., 2003). The modeling of cognitive processes in intelligent agents is based on different theories (Moore, 1980; Newell, 1994; Bratman, 1987) that explain, from different points of view, the workings of the human mind. Intelligent agents implemented on the basis of a cognitive theory are known as cognitive agents. The most developed are those based on cognitive architectures, such as Soar (Laird et al., 1987), ACT-R (Anderson, 1993), and BDI (Rao and Georgeff, 1995). Compared with Soar and other complex architectures, BDI stands out for its simplicity and versatility. BDI offers several features that make it popular, such as its ability to explain the agent's behaviour at every moment, making dynamic interaction with the environment possible. Owing to its growing popularity, the BDI framework has been used to support the modeling of intelligent agents (Larsen, 2019; Cranefield and Dignum, 2019). In recent years, BDI proposals that integrate affective aspects have also appeared. Intelligent agents built on the BDI architecture that also incorporate affective capabilities are known as EBDI (Emotional BDI) agents and are the focus of this thesis. The main objective of this thesis has been to propose a BDI-based cognitive-affective framework that supports the cognitive-affective modeling of intelligent agents, with the aim of reproducing believable human behaviour in complex situations where human behaviour is varied and fairly unpredictable. The proposed objective has been successfully achieved in the following terms:
    • A comprehensive state of the art has been compiled on the affective models most widely used to model affective aspects in intelligent agents.
    • BDI architectures and previous EBDI proposals have been studied. The study, which led to a publication (Sánchez-López and Cerezo, 2019), made it possible to identify the open questions in the area and the need to consider all aspects of affectivity (emotions, mood, personality) and their influence on all cognitive stages. The framework resulting from this doctoral work also includes the modeling of conduct and communicative behaviour, which had not previously been considered in the modeling of intelligent agents. These aspects place the resulting framework among the most advanced EBDI proposals in the literature.
    • A BDI-based framework, named ABC-EBDI, has been designed and implemented to support the cognitive, affective, and behavioural modeling of intelligent agents (Sanchez et al., 2020; Sánchez et al., 2019). It is the first application of a well-known psychological model, Ellis's ABC model, to the simulation of realistic human-like intelligent agents. This application involves:
    o Extending the concept of beliefs. The framework considers three types of beliefs: basic beliefs, context beliefs, and operant behaviours. Basic beliefs represent the general information the agent has about itself and the environment. Operant behaviours make it possible to model the agent's reactive conduct through learned behaviours. Context beliefs, represented as cold and hot cognitions, are processed and classified into irrational and rational beliefs following Ellis's ideas. It is this consideration of irrational/rational beliefs that opens the door to the simulation of realistic human reactions.
    o Managing the consequences of events in a unified way, in terms of both affective and behavioural consequences. Rational context beliefs lead to functional emotions and adaptive conduct, whereas irrational context beliefs lead to dysfunctional emotions and maladaptive conduct. This functional/dysfunctional character of emotions had never been used before in the context of BDI. In addition, behavioural modeling has been extended with the modeling of communicative styles, based on the Satir model, which had not previously been applied to the modeling of intelligent agents either. The Satir model considers body gestures, facial expressions, voice, intonation, and linguistic structures.
    • A use case, "I wish I had better news", was chosen for the application of the proposed framework, and two types of evaluations were carried out, by experts and by users. The evaluation confirmed the great potential of the proposed framework for reproducing realistic and believable human behaviour in complex situations.
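
    As a compact illustration of the ABC chain the thesis builds on, the sketch below maps an Activating event, interpreted through a Belief classified as rational or irrational, to a Consequence pairing an emotion (functional vs. dysfunctional) with conduct (adaptive vs. maladaptive). The classification rule and labels are toy stand-ins for the framework's actual processing.

```python
from dataclasses import dataclass

@dataclass
class ContextBelief:
    text: str
    rational: bool  # irrational beliefs follow Ellis's demandingness patterns

@dataclass
class Consequence:
    emotion: str
    conduct: str

def appraise(event: str, belief: ContextBelief) -> Consequence:
    # Rational interpretation -> functional emotion + adaptive conduct;
    # irrational interpretation -> dysfunctional emotion + maladaptive conduct.
    if belief.rational:
        return Consequence(emotion="concern (functional)",
                           conduct="adaptive: seek information")
    return Consequence(emotion="anxiety (dysfunctional)",
                       conduct="maladaptive: avoidance")

bad_news = "doctor delivers a difficult diagnosis"
print(appraise(bad_news, ContextBelief("This is hard, but I can cope.", True)))
print(appraise(bad_news, ContextBelief("This absolutely must not happen.", False)))
```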

    iGrace – Emotional Computational Model for EmI Companion Robot.

    Get PDF
    Chapter 4. In this chapter we discuss research in the field of emotional interaction, aimed at maintaining a non-verbal interaction with children from 4 to 8 years old. This work is part of the EmotiRob project, whose goal is to comfort vulnerable and/or hospitalized children with an emotional companion robot. Since the use of robots in hospitals is still limited, we decided to put forward a simple robot architecture and, therefore, to focus on emotional expression; in this context, an overly complex and bulky robot must be avoided. After a study of current research on perception and emotional synthesis, it was important to determine the most appropriate way to express emotions in order to achieve a recognition rate acceptable to our target audience. Following an experiment on this subject, we were able to determine the degrees of freedom the robot needs to express the six primary emotions. The second step was the definition and description of our emotional model. In order to obtain a wide range of expressions while respecting the number of degrees of freedom, we use the concept of emotional experiences, which provides almost two hundred different behaviors for the model; however, as a first step, we decided to limit ourselves to fifty behaviors. This diversification is possible thanks to a mixing of emotions linked to the dynamics of emotions. With this theoretical model established, we began various experiments with a variety of audiences in order to validate both its relevance and the emotion recognition rate. The first experiment was performed using a simulator for speech capture and for the emotional and behavioral synthesis of the robot; it validates the model assumptions that will be integrated into EmI - Emotional Model of Interaction. Future phases of the project will evaluate the robot, both in its expressiveness and in the comfort it provides to children. We describe the protocols used and present the results for EmI. These experiments will allow us to adjust and adapt the model. We finish this chapter with a brief description of the robot's architecture and the improvements to be made for the second version of EmI.
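
    To illustrate the kind of emotion mixing described above, here is a minimal sketch that blends two primary emotions into one expressive posture over a small set of degrees of freedom. The DOF names, target poses, and linear blend are illustrative assumptions, not the iGrace model itself.

```python
# Target DOF positions (normalized -1..1) for two primary emotions.
POSES = {
    "joy":     {"eyebrows": 0.6, "mouth": 0.9, "head_tilt": 0.2},
    "sadness": {"eyebrows": -0.7, "mouth": -0.8, "head_tilt": -0.4},
}

def mix(emotion_weights: dict) -> dict:
    """Weighted blend of emotion poses into one DOF command vector."""
    total = sum(emotion_weights.values()) or 1.0
    dofs = {dof: 0.0 for dof in next(iter(POSES.values()))}
    for emotion, weight in emotion_weights.items():
        for dof, value in POSES[emotion].items():
            dofs[dof] += (weight / total) * value
    return dofs

# A bittersweet blend: mostly joy with a trace of sadness.
print(mix({"joy": 0.7, "sadness": 0.3}))
```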

    From Affect Theoretical Foundations to Computational Models of Intelligent Affective Agents

    Full text link
    [EN] The links between emotions and rationality have been extensively studied and discussed. Several computational approaches have also been proposed to model these links. However, is it possible to build generic computational approaches and languages so that they can be "adapted" when a specific affective phenomenon is being modeled? Would these approaches be sufficiently and properly grounded? In this work, we want to provide the means for the development of these generic approaches and languages by making a horizontal analysis, inspired by philosophical and psychological theories, of the main affective phenomena that are traditionally studied. Unfortunately, not all affective theories can be adapted for use in computational models; therefore, it is necessary to perform an analysis of the most suitable theories. In this analysis, we identify and classify the main processes and concepts which can be used in a generic affective computational model, and we propose a theoretical framework that includes all the processes and concepts that a model of an affective agent with practical reasoning could use. Our generic theoretical framework supports incremental research whereby future proposals can improve previous ones. This framework also supports the evaluation of the coverage of current computational approaches according to the processes that are modeled and according to the integration of practical reasoning and affect-related issues. This framework is being used in the development of the GenIA(3) architecture.
    This work is partially supported by the Spanish Government projects PID2020-113416RB-I00, GVA-CEICE project PROMETEO/2018/002, and TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215.
    Alfonso, B.; Taverner-Aparicio, JJ.; Vivancos, E.; Botti, V. (2021). From Affect Theoretical Foundations to Computational Models of Intelligent Affective Agents. Applied Sciences, 11(22):1-29. https://doi.org/10.3390/app112210874
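
    One way to picture such a "generic" framework is an abstract appraisal interface that concrete affective theories implement, so the practical-reasoning cycle stays theory-agnostic. The interface and cycle below are illustrative, not the GenIA(3) architecture's actual API.

```python
from abc import ABC, abstractmethod

class AppraisalTheory(ABC):
    @abstractmethod
    def appraise(self, event: str, beliefs: set) -> str:
        """Return an affective label for an event given current beliefs."""

class SimpleValenceTheory(AppraisalTheory):
    # A deliberately trivial theory: expected events feel positive,
    # unexpected ones are surprising.
    def appraise(self, event: str, beliefs: set) -> str:
        return "positive" if event in beliefs else "surprise"

def deliberation_cycle(theory: AppraisalTheory, event: str, beliefs: set):
    affect = theory.appraise(event, beliefs)   # affect generation step
    if affect == "surprise":
        beliefs.add(event)                     # belief revision step
    intention = "investigate" if affect == "surprise" else "continue_plan"
    return affect, intention

print(deliberation_cycle(SimpleValenceTheory(), "alarm", {"routine_check"}))
```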

    Real time multimodal interaction with animated virtual human

    Get PDF
    This paper describes the design and implementation of a real-time animation framework in which an animated virtual human is capable of performing multimodal interactions with a human user. The animation system consists of several functional components, namely perception, behaviour generation, and motion generation. The virtual human agent in the system has a complex underlying geometric structure with multiple degrees of freedom (DOFs). It relies on a virtual perception system to capture information from its environment and responds to the human user's commands with a combination of non-verbal behaviours including co-verbal gestures, posture, body motions, and simple utterances. A language processing module is incorporated to interpret the user's commands. In particular, an efficient motion generation method has been developed that combines both motion-captured data and parameterized actions generated in real time to produce variations in the agent's behaviours depending on its momentary emotional state.
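
    The sketch below illustrates the blending idea in such a motion generator: combining a motion-captured pose with a procedurally generated one, with the mix driven by the agent's momentary emotional state. Poses are reduced to joint-angle lists; the values and the linear blend are illustrative, not the paper's actual method.

```python
def blend_pose(mocap_pose, procedural_pose, arousal: float):
    """Higher arousal favours the livelier parameterized variation."""
    w = max(0.0, min(1.0, arousal))  # clamp the blend weight to [0, 1]
    return [(1 - w) * m + w * p for m, p in zip(mocap_pose, procedural_pose)]

# Three joint angles (radians) for a single frame of a gesture.
mocap = [0.10, -0.25, 0.40]       # neutral captured gesture
procedural = [0.30, -0.05, 0.80]  # exaggerated parameterized variant

print(blend_pose(mocap, procedural, arousal=0.2))  # close to the capture
print(blend_pose(mocap, procedural, arousal=0.9))  # close to the variant
```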