
    Theme Preface: Mind Minding Agents

    A Graphical Model of Hurricane Evacuation Behaviors

    Natural disasters such as hurricanes are increasing and causing widespread devastation. People's decisions and actions regarding whether or not to evacuate are critical and have a large impact on emergency planning and response. Our interest lies in computationally modeling the complex relationships among the various factors that influence evacuation decisions. We conducted a study of evacuation during Hurricane Irma of the 2017 Atlantic hurricane season. The study was guided by protection motivation theory (PMT), a widely used framework for understanding people's responses to potential threats. Graphical models were constructed to represent the complex relationships among the factors involved and the evacuation decision. We evaluated different graphical structures based on conditional independence tests using the Irma data. The final model largely aligns with PMT: it shows that both risk perception (threat appraisal) and difficulties in evacuation (coping appraisal) influence evacuation decisions directly and independently. Certain information received from the media was found to influence risk perception and, through it, to influence evacuation behaviors indirectly. In addition, several variables were found to influence both risk perception and evacuation behaviors directly, including family and friends' suggestions, neighbors' evacuation behaviors, and evacuation notices from officials.
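
    To make the structure-evaluation step concrete, the sketch below runs the kind of stratified chi-square conditional-independence test such comparisons rely on; the variable names (media, risk, evac) and the synthetic data are illustrative assumptions, not the Irma survey data.

```python
# Conditional-independence check: is media info independent of evacuation
# once risk perception is held fixed? (Synthetic, illustrative data.)
import numpy as np
import pandas as pd
from scipy.stats import chi2, chi2_contingency

rng = np.random.default_rng(0)
n = 2000
media = rng.integers(0, 2, n)                            # saw threat info in the media
risk = (media + rng.integers(0, 2, n) > 1).astype(int)   # risk perception
evac = (risk + rng.integers(0, 2, n) > 1).astype(int)    # evacuation decision
df = pd.DataFrame({"media": media, "risk": risk, "evac": evac})

def ci_test(data, x, y, z):
    """Chi-square test of X independent of Y given Z (all variables discrete)."""
    stat, dof = 0.0, 0
    for _, stratum in data.groupby(z):
        table = pd.crosstab(stratum[x], stratum[y])
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue
        s, _, d, _ = chi2_contingency(table)
        stat += s
        dof += d
    return chi2.sf(stat, dof) if dof else 1.0

# If media influences evacuation only through risk perception, the
# media-evacuation association should vanish once risk is conditioned on.
print("p-value for media independent of evac given risk:",
      ci_test(df, "media", "evac", "risk"))
```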

    What's Next in Affective Modeling? Large Language Models

    Large Language Models (LLMs) have recently been shown to perform well at a variety of tasks, from language understanding, reasoning, storytelling, and information search to theory of mind. Extending this work, we explore the ability of GPT-4 to solve tasks related to emotion prediction. GPT-4 performs well across multiple emotion tasks; it can distinguish between emotion theories and come up with emotional stories. We show that, by prompting GPT-4 to identify the key factors of an emotional experience, it is able to manipulate the emotional intensity of its own stories. Furthermore, we explore GPT-4's ability to perform reverse appraisals by asking it to predict a person's goal, belief, or emotion given the other two. In general, GPT-4 makes the correct inferences. We suggest that LLMs could play an important role in affective modeling; however, they will not fully replace work that attempts to model the mechanisms underlying emotion-related processes.
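
    As a concrete illustration of the reverse-appraisal probe, the hypothetical sketch below asks a chat model to infer the missing element of {goal, belief, emotion} from the other two. It assumes the OpenAI Python client (openai>=1.0) with an OPENAI_API_KEY set in the environment; the prompt wording is an illustration, not the paper's exact protocol.

```python
# Reverse appraisal: given two of {goal, belief, emotion}, ask the model for the third.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def infer_missing(goal=None, belief=None, emotion=None):
    known = {k: v for k, v in
             {"goal": goal, "belief": belief, "emotion": emotion}.items() if v}
    (missing,) = {"goal", "belief", "emotion"} - known.keys()
    facts = "\n".join(f"- {k}: {v}" for k, v in known.items())
    prompt = (
        "A person is described below.\n"
        f"{facts}\n"
        f"Infer the person's most likely {missing} and answer in one short phrase."
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()

# Example: goal + emotion -> inferred belief
print(infer_missing(goal="win the chess match", emotion="relieved"))
```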

    Social decisions and fairness change when people’s interests are represented by autonomous agents

    There has been growing interest in agents that represent people's interests or act on their behalf, such as automated negotiators, self-driving cars, or drones. Even though people will often interact with others via these agent representatives, little is known about whether people's behavior changes when they act through such agents rather than interacting with others directly. Here we show that people's decisions change in important ways because of these agents; specifically, interacting via agents is likely to lead people to behave more fairly than they would in direct interaction with others. We argue this occurs because programming an agent leads people to adopt a broader perspective, consider the other side's position, and rely on social norms—such as fairness—to guide their decision making. To support this argument, we present four experiments: in Experiment 1, we show that people made fairer offers in the ultimatum and impunity games when interacting via agent representatives than in direct interaction; in Experiment 2, participants were less likely to accept unfair offers in these games when agent representatives were involved; in Experiment 3, we show that the act of thinking about the decisions ahead of time—i.e., under the so-called “strategy method”—can also lead to increased fairness, even when no agents are involved; and, finally, in Experiment 4, we show that participants were less likely to reach an agreement with unfair counterparts in a negotiation setting. We discuss theoretical implications for our understanding of people's social behavior with agent representatives, as well as practical implications for the design of agents that have the potential to increase fairness in society.

    Automating the production of communicative gestures in embodied characters

    In this paper we highlight the different challenges in modeling communicative gestures for Embodied Conversational Agents (ECAs). We describe models whose aim is to capture and understand the specific characteristics of communicative gestures, in order to envision how an automatic communicative gesture production mechanism could be built. The work is inspired by research on how the characteristics of human gestures (e.g., the shape of the hand, its movement and orientation, and its timing with respect to speech) convey meaning. We present approaches to computing where to place a gesture, which shape the gesture takes, and how gesture shapes evolve over time. We focus on a particular model based on theoretical frameworks of metaphor and embodied cognition, which argue that people can represent, reason about, and convey abstract concepts using physical representations and processes, and that these can be expressed through physical gestures.
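
    The sketch below illustrates one simple way such a production pipeline could place gestures: each concept's stroke phase is aligned with an upcoming stressed word, and the hand shape is drawn from a small metaphor-inspired lexicon. The lexicon, timing offsets, and data structures are hypothetical illustrations, not the models described above.

```python
# Toy gesture placement: stroke on a stressed word, shape from a metaphor lexicon.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float      # seconds into the utterance
    end: float
    stressed: bool

# Abstract concepts rendered as physical hand shapes (metaphoric gestures).
SHAPE_LEXICON = {
    "more": "open palm moving upward",
    "idea": "cupped hand, as if holding an object",
    "reject": "flat hand sweeping outward",
}

def place_gestures(words, concepts):
    """Attach each concept's gesture stroke to the next stressed word."""
    gestures = []
    stressed = [w for w in words if w.stressed]
    for concept, word in zip(concepts, stressed):
        gestures.append({
            "concept": concept,
            "shape": SHAPE_LEXICON.get(concept, "neutral beat"),
            "preparation_start": max(0.0, word.start - 0.3),  # lead-in before the stroke
            "stroke_start": word.start,                       # stroke lands on the stressed word
            "retraction_end": word.end + 0.2,
        })
    return gestures

utterance = [Word("we", 0.0, 0.2, False), Word("need", 0.2, 0.5, True),
             Word("more", 0.5, 0.9, True), Word("time", 0.9, 1.3, False)]
for gesture in place_gestures(utterance, ["idea", "more"]):
    print(gesture)
```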

    Human cooperation when acting through autonomous machines

    Recent times have seen the emergence of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., the environment). Here we show that acting through machines changes the way people solve these social dilemmas, and we present experimental evidence showing that participants program their autonomous vehicles to act more cooperatively than they would if they were driving themselves. We show that this happens because programming causes selfish short-term rewards to become less salient, leading to consideration of broader societal goals. We also show that the programmed behavior is influenced by past experience. Finally, we report evidence that the effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.

    Sharing emotions and space - empathy as a basis for cooperative spatial interaction

    Boukricha H, Nguyen N, Wachsmuth I. Sharing emotions and space - empathy as a basis for cooperative spatial interaction. In: Kopp S, Marsella S, Thorisson K, Vilhjalmsson HH, eds. Proceedings of the 11th International Conference on Intelligent Virtual Agents (IVA 2011). LNAI, Vol. 6895. Berlin, Heidelberg: Springer; 2011: 350-362.

    Empathy is believed to play a major role as a basis for humans' cooperative behavior. Recent research shows that humans empathize with each other to different degrees depending on several modulation factors, including their social relationships, their mood, and the situational context. In human spatial interaction, partners share and sustain a space that is equally and exclusively reachable to them, the so-called interaction space. For a cooperative interaction scenario of relocating objects within the interaction space, we introduce an approach for triggering and modulating a virtual human's cooperative spatial behavior based on its degree of empathy with its interaction partner. That is, spatial distances, such as object distances as well as the distances of arm and body movements while relocating objects in the interaction space, are modulated by the virtual human's degree of empathy. In this scenario, the virtual human's empathic emotion is generated as a hypothesis about the partner's emotional state, as related to the physical effort needed to perform a goal-directed spatial behavior.
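
    A minimal sketch of the modulation idea follows: the higher the virtual human's degree of empathy, the farther it reaches into the shared interaction space to spare the partner effort. The linear interpolation and the 0-1 empathy scale are illustrative assumptions, not the model's actual equations.

```python
# Empathy-modulated object placement along the line between the two partners.

def placement_distance(own_pos: float, partner_pos: float, empathy: float) -> float:
    """Where to place the relocated object, as a point between the partners.

    empathy = 0.0 -> drop it right in front of oneself (minimal own effort)
    empathy = 1.0 -> carry it all the way to the partner (maximal own effort)
    """
    empathy = min(max(empathy, 0.0), 1.0)
    return own_pos + empathy * (partner_pos - own_pos)

# Partners face each other 1.2 m apart; empathy rises as the partner struggles.
for empathy in (0.0, 0.5, 0.9):
    spot = placement_distance(own_pos=0.0, partner_pos=1.2, empathy=empathy)
    print(f"empathy={empathy:.1f} -> place object {spot:.2f} m from the virtual human")
```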

    A computational model of coping and decision making in high-stress, uncertain situations: an application to hurricane evacuation decisions

    People often encounter highly stressful, emotion-evoking situations. Modeling and predicting how people behave and cope in such situations is a critical research topic. To that end, we propose a computational model of coping that casts Lazarus' theory of coping into a Partially Observable Markov Decision Process (POMDP) framework. The model includes an appraisal process, which captures the factors leading to stress by assessing a person's relation to the environment, and a coping process, which captures how people seek to reduce stress by directly altering the environment or by changing their own beliefs and goals. We evaluated the model's assumptions in the context of a high-stress situation: hurricanes. We collected questionnaire data from major U.S. hurricanes in 2018 to evaluate the model's features for appraisal calculation, and we conducted a series of controlled experiments simulating a hurricane experience to investigate how people change their beliefs and goals to cope with the situation. The results support the model's assumptions, showing that the proposed features are significantly associated with evacuation decisions and that people do change their beliefs and goals to cope with the situation.
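
    The sketch below gives a stripped-down, hypothetical version of the appraisal/coping loop in POMDP style: a belief over storm severity is updated from a noisy forecast, appraisal turns that belief into a stress level, and coping compares changing the environment (evacuating) with changing one's beliefs or goals. The probabilities and the stress formula are illustrative assumptions, not the paper's model.

```python
# Toy appraisal/coping loop over a single hidden variable: storm severity.

def update_belief(p_severe: float, observation: str) -> float:
    """Bayes update of P(storm is severe) given a noisy forecast observation."""
    likelihood = {"severe_forecast": (0.8, 0.3),   # P(obs | severe), P(obs | mild)
                  "mild_forecast":   (0.2, 0.7)}
    l_sev, l_mild = likelihood[observation]
    num = l_sev * p_severe
    return num / (num + l_mild * (1.0 - p_severe))

def appraise(p_severe: float, goal_importance: float, at_home: bool) -> float:
    """Stress = perceived threat to an important goal that is still exposed."""
    return p_severe * goal_importance * (1.0 if at_home else 0.1)

def cope(p_severe: float, goal_importance: float):
    """Pick the coping action that leaves the lowest residual stress."""
    options = {
        "evacuate":         appraise(p_severe, goal_importance, at_home=False),
        "wishful_thinking": appraise(p_severe * 0.5, goal_importance, at_home=True),
        "downplay_goal":    appraise(p_severe, goal_importance * 0.5, at_home=True),
    }
    return min(options, key=options.get), options

belief = update_belief(p_severe=0.3, observation="severe_forecast")
choice, residual = cope(belief, goal_importance=1.0)
print(f"P(severe)={belief:.2f}, coping choice: {choice}, "
      f"residual stress: {residual[choice]:.2f}")
```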

    Encoding Theory of Mind in Character Design for Pedagogical Interactive Narrative

    Computer-aided interactive narrative allows people to participate actively in a dynamically unfolding story, by playing a character or by exerting directorial control. Because of its potential for providing interesting stories as well as allowing user interaction, interactive narrative has been recognized as a promising tool for providing both education and entertainment. This paper discusses the challenges in creating interactive narratives for pedagogical applications and how these challenges can be addressed by using agent-based technologies. We argue that a rich model of characters, and in particular a Theory of Mind capacity, is needed. The character architecture in the Thespian framework for interactive narrative is presented as an example of how decision-theoretic agents can be used to encode Theory of Mind and to create pedagogical interactive narratives.
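
    To illustrate what encoding a Theory of Mind in a decision-theoretic agent can look like, the sketch below has one character choose its action by first simulating, under a model of the other character's rewards, how that character would respond. The two-action "story beat" and the payoff numbers are hypothetical, not Thespian's actual action space or reward functions.

```python
# One-level Theory of Mind: choose an action by simulating the other's best response.

# REWARDS[mentor_action][learner_action] = (mentor_reward, learner_reward)
REWARDS = {
    "give_hint":   {"try_again": (3, 3), "give_up": (0, 1)},
    "stay_silent": {"try_again": (2, 2), "give_up": (-1, 0)},
}

def predicted_learner_reply(mentor_action: str) -> str:
    """The mentor's model of the learner: pick the learner's highest-reward reply."""
    replies = REWARDS[mentor_action]
    return max(replies, key=lambda a: replies[a][1])

def choose_mentor_action() -> str:
    """Decision-theoretic choice using the nested model of the learner."""
    def value(action: str) -> int:
        reply = predicted_learner_reply(action)   # Theory of Mind: simulate the other
        return REWARDS[action][reply][0]          # then evaluate one's own reward
    return max(REWARDS, key=value)

action = choose_mentor_action()
print(f"mentor chooses: {action}, expecting the learner to {predicted_learner_reply(action)}")
```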