318 research outputs found

    ABC-EBDI: A cognitive-affective framework to support the modeling of believable intelligent agents.

    The Advanced Interfaces Research Group (AffectiveLab) is a group recognised by the Government of Aragón (T60-20R) whose activity falls within the area of Human-Computer Interaction (HCI). In recent years its research has focused on four main topics: natural interaction, affective computing, accessibility, and interfaces based on intelligent agents, the last of which frames this doctoral thesis. More specifically, this thesis was carried out within the national research projects JUGUEMOS (TIN2015-67149-C3-1R) and PERGAMEX (RTI2018-096986-B-C31). One of the group's research lines focuses on the development of cognitive-affective architectures to support the affective modeling of intelligent agents. The AffectiveLab has solid experience in the use of embodied interface agents that exhibit bodily and facial affective expressions (Baldassarri et al., 2008) and has, in recent years, concentrated on modeling the behaviour of intelligent agents (Pérez et al., 2017).
    The definition of an intelligent agent is a contested topic, but it can be described as an autonomous entity that receives dynamic information from the environment through sensors and acts on the environment through actuators, exhibiting goal-directed behaviour (Russell et al., 2003). The modeling of cognitive processes in intelligent agents builds on different theories (Moore, 1980; Newell, 1994; Bratman, 1987) that explain, from different points of view, how the human mind works. Intelligent agents implemented on the basis of a cognitive theory are known as cognitive agents; the most developed are those based on cognitive architectures such as Soar (Laird et al., 1987), ACT-R (Anderson, 1993) and BDI (Rao and Georgeff, 1995). Compared with Soar and other complex architectures, BDI stands out for its simplicity and versatility: it can explain the agent's behaviour at every moment, making dynamic interaction with the environment possible. Owing to this growing popularity, the BDI framework has been widely used to support the modeling of intelligent agents (Larsen, 2019; Cranefield and Dignum, 2019). In recent years, BDI proposals that integrate affective aspects have also appeared. Intelligent agents built on the BDI architecture that also incorporate affective capabilities are known as EBDI (Emotional BDI) agents and are the focus of this thesis.
    The main objective of this thesis has been to propose a BDI-based cognitive-affective framework that supports the cognitive-affective modeling of intelligent agents, with the aim of reproducing believable human behaviour in complex situations where human behaviour is varied and rather unpredictable. This objective has been achieved in the terms described below:
    • An exhaustive state of the art has been compiled on the affective models most widely used to model affective aspects in intelligent agents.
    • BDI architectures and previous EBDI proposals have been studied. The study, which led to a publication (Sánchez-López and Cerezo, 2019), made it possible to identify the open issues in the area and the need to consider all aspects of affect (emotions, mood, personality) and their influence on all cognitive stages. The framework resulting from this doctoral work also includes the modeling of conduct and communicative behaviour, which had not previously been considered in the modeling of intelligent agents. These aspects place the resulting framework among the most advanced EBDI proposals in the literature.
    • A BDI-based framework, called ABC-EBDI (Sanchez et al., 2020; Sánchez et al., 2019), has been designed and implemented to support the cognitive, affective and behavioural modeling of intelligent agents. It is the first application of a well-known psychological model, Ellis's ABC model, to the simulation of realistic human-like intelligent agents. This application involves:
    o The extension of the concept of beliefs. The framework considers three types of beliefs: core beliefs, context beliefs and operant behaviours. Core beliefs represent the general information the agent has about itself and the environment. Operant behaviours allow the agent's reactive conduct to be modelled through learned behaviours. Context beliefs, represented as cold and hot cognitions, are processed and classified into irrational and rational beliefs following Ellis's ideas. It is this consideration of irrational/rational beliefs that opens the door to the simulation of realistic human reactions (sketched below).
    o The unified handling of the consequences of events in terms of affective consequences and behaviour (conduct). Rational context beliefs lead to functional emotions and adaptive conduct, whereas irrational context beliefs lead to dysfunctional emotions and maladaptive conduct. This functional/dysfunctional character of emotions had never been used before in the context of BDI. In addition, behavioural modeling has been extended with the modeling of communicative styles, based on the Satir model, which had likewise not previously been applied to the modeling of intelligent agents. The Satir model considers body gestures, facial expressions, voice, intonation and linguistic structures.
    • A use case, "I wish I had better news", has been chosen for the application of the proposed framework, and two types of evaluation, by experts and by users, have been carried out. The evaluation confirmed the great potential of the proposed framework for reproducing realistic and believable human behaviour in complex situations.
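    A minimal Python sketch of the rational/irrational appraisal step described above, assuming a simplified belief representation; the class and field names are hypothetical and do not come from the ABC-EBDI implementation:

```python
# Illustrative-only sketch of Ellis's ABC idea as described in the abstract:
# a context belief (B) about an activating event (A) is classified as rational
# or irrational, and that classification determines the consequences (C).
from dataclasses import dataclass

@dataclass
class ContextBelief:
    content: str          # the "hot/cold cognition" about an activating event
    irrational: bool      # result of classifying the belief following Ellis's ideas

def appraise(belief: ContextBelief) -> dict:
    """Map a context belief to its consequences: emotion quality + conduct quality."""
    if belief.irrational:
        # Irrational beliefs lead to dysfunctional emotions and maladaptive conduct.
        return {"emotion": "dysfunctional", "conduct": "maladaptive"}
    # Rational beliefs lead to functional emotions and adaptive conduct.
    return {"emotion": "functional", "conduct": "adaptive"}

# Example: an activating event interpreted through an irrational belief.
belief = ContextBelief("Everyone must approve of what I say", irrational=True)
print(appraise(belief))   # {'emotion': 'dysfunctional', 'conduct': 'maladaptive'}
```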

    Human emotion simulation in a dynamic environment

    The aim of this work is to contribute to the believability of simulated emotions for virtual entities, allowing them to display human-like features. Endowing virtual entities with such features requires an appropriate architecture and model. To that end, a study of emotional models from different perspectives is undertaken, covering psychology, organic components, attention research and computing. Two contributions are made to reach this aim. The first is a computational emotional model based on Scherer's theory (K. Scherer, 2001). It generates a series of modifications of the affective state from a single event, in contrast to existing solutions, where one emotion is mapped to one single event. Several theories are used to make the model concrete. The second contribution makes use of attention theories to build a paradigm for executing tasks in parallel: an algorithm is proposed to assess the available resources and allocate them to tasks for execution, based on Wickens's multiple resources theory (Wickens, 2008). The two contributions are combined into one architecture to produce a dynamic emotional system whose components work in parallel. The first contribution was evaluated using a questionnaire. The results showed that mapping one event into a series of modifications of the affective state can enhance the believability of the simulation; they also showed that people who develop more variations in the affective state are perceived as more feminine.
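    The following is a hedged sketch of the first contribution's central idea, that a single event produces a series of affective-state modifications rather than one emotion. The check names follow Scherer's sequential appraisal checks, while the one-dimensional state and the decay weights are illustrative assumptions, not the author's model:

```python
# Hedged sketch: one event drives a *series* of affective-state modifications,
# loosely following Scherer's sequential appraisal checks. The weights and the
# one-dimensional state are made-up illustration values.

APPRAISAL_CHECKS = ["relevance", "implication", "coping_potential", "normative_significance"]

def simulate_event(event_intensity: float, state: float = 0.0) -> list[float]:
    """Return the trajectory of a (simplified, scalar) affective state over the checks."""
    trajectory = [state]
    for i, _check in enumerate(APPRAISAL_CHECKS):
        # Each sequential check contributes its own modification of the state,
        # so a single event produces several successive changes, not one jump.
        state += event_intensity * (0.5 ** i)
        trajectory.append(round(state, 3))
    return trajectory

print(simulate_event(0.8))  # [0.0, 0.8, 1.2, 1.4, 1.5]
```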

    The Essence of Ethical Reasoning in Robot-Emotion Processing

    © 2017, Springer Science+Business Media B.V., part of Springer Nature. As social robots become more and more intelligent and autonomous in operation, it is extremely important to ensure that such robots act in a socially acceptable manner. More specifically, if an autonomous robot is capable of generating and expressing emotions of its own, it should also have the ability to reason about whether it is ethical to exhibit a particular emotional state in response to a surrounding event. Most existing computational models of emotion for social robots have focused on achieving a certain level of believability of the emotions expressed. We argue that believability of a robot's emotions, although crucially necessary, is not sufficient to elicit socially acceptable emotions. Thus, we stress the need for a higher level of cognition in the emotion processing mechanism, one that empowers social robots to decide whether it is socially appropriate to express a particular emotion in a given context or better to inhibit such an experience. In this paper, we present a detailed mathematical explanation of the ethical reasoning mechanism in our computational model, EEGS, which helps a social robot reach the most socially acceptable emotional state when more than one emotion is elicited by an event. Experimental results show that ethical reasoning in EEGS helps in the generation of believable as well as socially acceptable emotions.
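    As a rough illustration of the selection step described above (not the actual EEGS mathematics), the sketch below scores each elicited emotion by combining its intensity with a context-dependent social-acceptability value and picks the highest-scoring one; the scoring rule and example values are assumptions:

```python
# Simplified sketch: when an event elicits several candidate emotions, pick the
# one that is both believable (intense enough) and socially acceptable in the
# current context. This scoring function is an assumption for illustration only.

def select_emotion(candidates: dict[str, float], acceptability: dict[str, float]) -> str:
    """candidates: emotion -> elicited intensity; acceptability: emotion -> social score in [0, 1]."""
    return max(candidates, key=lambda e: candidates[e] * acceptability.get(e, 0.0))

elicited = {"anger": 0.9, "disappointment": 0.6}
context_acceptability = {"anger": 0.2, "disappointment": 0.8}   # e.g. a caregiving context
print(select_emotion(elicited, context_acceptability))          # disappointment
```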

    Affect simulation with primary and secondary emotions

    Becker-Asano C, Wachsmuth I. Affect simulation with primary and secondary emotions. In: Prendinger H, Lester J, Ishizuka M, eds. Intelligent Virtual Agents. LNCS 5208. Berlin: Springer; 2008: 15-28.
    In this paper the WASABI Affect Simulation Architecture is introduced, in which a virtual human's cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of "Core Affect" in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is only subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions, which directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of their connotative meaning in PAD space, we not only ensure their mood-congruent elicitation, but also combine them with facial expressions that are concurrently driven by the primary emotions. An empirical study showed that human players in the Skip-Bo scenario judge our virtual human MAX to be significantly older when secondary emotions are simulated in addition to primary ones.
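    A small sketch of the PAD-space categorization idea, assuming illustrative anchor coordinates rather than WASABI's actual emotion placements:

```python
# Hedged sketch of the PAD-space idea: the continuously evolving bodily feeling is a
# point in (Pleasure, Arousal, Dominance) space that is only afterwards categorized
# into a discrete primary emotion. Anchor coordinates are illustrative placeholders,
# not the values used in the WASABI architecture.
import math

PRIMARY_ANCHORS = {
    "happy":   ( 0.8,  0.5,  0.4),
    "angry":   (-0.6,  0.7,  0.6),
    "sad":     (-0.6, -0.4, -0.5),
    "relaxed": ( 0.6, -0.4,  0.3),
}

def categorize(pad: tuple[float, float, float]) -> str:
    """Map a continuous PAD point to the nearest discrete primary emotion."""
    return min(PRIMARY_ANCHORS, key=lambda name: math.dist(pad, PRIMARY_ANCHORS[name]))

print(categorize((0.7, 0.3, 0.2)))   # -> "happy"
```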

    Participant responses to virtual agents in immersive virtual environments.

    This thesis is concerned with interaction between people and virtual humans in the context of highly immersive virtual environments (VEs). Empirical studies have shown that virtual humans (agents) with even minimal behavioural capabilities can have a significant emotional impact on participants of immersive virtual environments (IVEs), to the extent that they have been used in studies of mental health issues such as social phobia and paranoia. This thesis focuses on understanding the impact of the behaviour of virtual humans, rather than their visual appearance, on people's responses. Three main research questions are addressed. First, the thesis considers which nonverbal behavioural cues are key to portraying a specific psychological state. Second, it determines the extent to which the underlying state of a virtual human is recognisable through the display of a key set of cues inferred from the behaviour of real humans. Finally, it considers the degree to which a perceived psychological state in a virtual human invokes responses from participants in immersive virtual environments that are similar to those observed in the physical world. These research questions were investigated through four experiments. The first experiment focused on the impact of visual fidelity and behavioural complexity on participant responses by implementing a model of gaze behaviour in virtual humans; it concluded that participants expected more life-like behaviours from more visually realistic virtual humans. The second experiment investigated the detrimental effects on participant responses of interacting with virtual humans with low behavioural complexity. The third experiment investigated the differences in participants' responses to virtual humans perceived to be in varying emotional states, portrayed using postural and facial cues. Results indicated that posture does play an important role in the portrayal of affect; however, the behavioural model used in the study did not fully cover the qualities of body movement associated with the emotions studied. The final experiment focused on the portrayal of affect through the quality of body movement, such as the speed of gestures. The effectiveness of the virtual humans was gauged by exploring a variety of participant responses, including subjective responses and objective physiological and behavioural measures. The results show that participants are affected by and respond to virtual humans in a significant manner, provided that an appropriate behavioural model is used.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.