8 research outputs found

    Emergent social NPC interactions in the Social NPCs Skyrim mod and beyond

    This work presents an implementation of a social architecture model for authoring Non-Player Characters (NPCs) in open-world games, inspired by academic research on agent-based modeling. Authoring believable NPCs is burdensome in terms of rich dialogue and responsive behaviors. We briefly present the characteristics and advantages of using a social agent architecture for this task and describe an implementation of the CiF-CK social agent architecture, released as the Social NPCs mod for The Elder Scrolls V: Skyrim. (Originally a chapter for Game AI Pro; 14 pages, 3 figures.)
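    The CiF-CK architecture referenced above belongs to the Comme il Faut family of social-AI systems, in which candidate social exchanges are scored by weighted influence rules over characters' relationships and traits. The sketch below illustrates only that scoring idea; the predicates, rule texts, and weights are invented and are not the mod's actual rule set.

        # Illustrative CiF-style "social exchange" scorer (hypothetical
        # predicates and weights; not the actual Social NPCs / CiF-CK rules).
        from dataclasses import dataclass, field

        @dataclass
        class SocialState:
            # relationship facts such as ("friend", "Lydia", "Balgruuf") -> strength
            relations: dict = field(default_factory=dict)
            traits: dict = field(default_factory=dict)   # e.g. {"Lydia": {"brave"}}

        @dataclass
        class InfluenceRule:
            description: str
            weight: float
            predicate: object  # callable: (state, initiator, responder) -> bool

        def score_exchange(rules, state, initiator, responder):
            """Sum the weights of all influence rules that hold for this pairing."""
            return sum(r.weight for r in rules if r.predicate(state, initiator, responder))

        # Example rules for a "compliment" exchange.
        compliment_rules = [
            InfluenceRule("friends like to compliment each other", 3.0,
                          lambda s, a, b: s.relations.get(("friend", a, b), 0) > 0),
            InfluenceRule("rivals rarely compliment", -4.0,
                          lambda s, a, b: s.relations.get(("rival", a, b), 0) > 0),
        ]

        state = SocialState(relations={("friend", "Lydia", "Balgruuf"): 1})
        print(score_exchange(compliment_rules, state, "Lydia", "Balgruuf"))  # 3.0

    In a full architecture of this kind, the highest-scoring exchange would then drive the NPC's dialogue and behavior selection.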

    Cognitive Architectures for Serious Games

    This dissertation summarises a research path aimed at fostering the use of Cognitive Architectures in the Serious Games research field. Cognitive Architectures are an embodiment of scientific hypotheses and theories aimed at capturing the mechanisms of cognition that are considered consistent over time and independent of specific tasks or domains. The theoretical approaches provided by research in computational cognitive modelling have been used to formalise a methodological framework to guide researchers and experts in the game-based education sector in designing, implementing, and evaluating Serious Games. The investigation of the cognitive processes involved during the game experience represents the fundamental step of the proposed approach. Two case studies are described to discuss possible uses of the suggested framework. In the first case study, the aim was to design a modified version of the Tetris game to make it more effective in training the visual-spatial skill of mental rotation. In the second, the framework was used as a basis for creating an innovative persuasive game; this case study provides an example of adopting cognitive architectures to implement a non-player character with human-like behaviour developed from targeted cognitive theories.

    Operacionalização de Técnicas de Mudança Comportamental em Agentes Conversacionais (Operationalization of Behavior Change Techniques in Conversational Agents)

    Master's project report (Trabalho de projeto do mestrado), Informática, Universidade de Lisboa, Faculdade de Ciências, 2022. Behavior change interventions delivered through eHealth and mHealth applications are revolutionizing the ways in which individuals can monitor and improve their behaviors and health care, improving outcomes and the overall patient experience while reducing costs. Building on the work carried out within the VASelfCare project, which focused on the use of well-established behavior change techniques in an mHealth intervention based on a conversational agent, this work proposes a new architecture for the design of conversational agents within a behavior change intervention. The new approach seeks to overcome some of the limitations of the previous agent by combining an advanced natural language platform (Dialogflow) with the explicit representation, in an ontology, of how behavior change techniques can be operationalized. The design and integration of these two components into the system is described, as well as the most challenging aspect: using the advanced features of the platform in a way that allows the agent to conduct the dialogue flow and consult the external knowledge module when necessary. A successful proof of concept was built, intended to serve as a basis for the development of advanced conversational agents that combine natural language tools with ontology-based knowledge representation.
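    As a hedged sketch of how the two components described above might be wired together (only the Dialogflow ES webhook request/response shape is standard; the ontology file, namespace, properties, and parameter name are assumptions):

        # Minimal sketch: a Dialogflow ES fulfillment webhook that consults an
        # ontology of behavior change techniques (BCTs). The file "bct.ttl",
        # the bct: properties, and the "health_goal" parameter are hypothetical.
        from flask import Flask, request, jsonify
        from rdflib import Graph, Literal

        app = Flask(__name__)
        ontology = Graph()
        ontology.parse("bct.ttl", format="turtle")   # assumed local ontology file

        QUERY = """
        PREFIX bct: <http://example.org/bct#>
        SELECT ?prompt WHERE {
            ?technique bct:appliesToGoal ?goal ;
                       bct:promptText   ?prompt .
        }
        """

        @app.route("/webhook", methods=["POST"])
        def webhook():
            body = request.get_json(force=True)
            # Dialogflow ES sends the matched intent and parameters in queryResult.
            params = body.get("queryResult", {}).get("parameters", {})
            goal = params.get("health_goal", "physical_activity")

            rows = ontology.query(QUERY, initBindings={"goal": Literal(goal)})
            prompt = next((str(r.prompt) for r in rows),
                          "Let's review your progress together.")

            # Dialogflow reads the agent's reply from fulfillmentText.
            return jsonify({"fulfillmentText": prompt})

        if __name__ == "__main__":
            app.run(port=8080)

    The split keeps the dialogue flow in the natural language platform while the choice of behavior change technique is looked up in the ontology at fulfillment time, as described in the abstract.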

    Social Interactions in Immersive Virtual Environments: People, Agents, and Avatars

    Immersive virtual environments (IVEs) have received increased popularity, with applications in many fields. IVEs aim to approximate real environments and to make users react similarly to how they would in everyday life. An important use case is the interaction between users and virtual characters (VCs). We interact with other people every day, hence we expect others to act and behave appropriately, verbally and non-verbally (e.g., pitch, proximity, gaze, turn-taking). These expectations also apply to interactions with VCs in IVEs, and this thesis tackles some of these aspects. We present three projects that inform the area of social interactions with VCs in IVEs, focusing on non-verbal behaviours. In our first study, on interactions between people, we collaborated with the Social Neuroscience group at UCL's Institute of Cognitive Neuroscience on a dyadic multi-modal interaction, aiming to understand conversation dynamics with a focus on gaze and turn-taking. The results show that people change gaze (from averted to direct and vice versa) more frequently when they are being looked at than when they are not. When they are not being looked at, they also direct their gaze to their partners more than when they are being looked at. Another contribution of this work is an automated method of annotating speech and gaze data. Next, we consider agents' higher-level non-verbal behaviours, covering social attitudes. In collaboration with two game studios, Dream Reality Interaction and Maze Theory, we present a pipeline to collect data and train a machine learning (ML) model that detects social attitudes in a user-VC interaction. We present a case study of the ML pipeline on social engagement recognition for the Peaky Blinders narrative VR game from the Maze Theory studio, using a reinforcement learning algorithm with imitation learning rewards and a temporal memory element. The results show that the model trained with raw data does not generalise and performs worse (60% accuracy) than the one trained with socially meaningful data (83% accuracy). In IVEs, people embody avatars, and their appearance can impact social interactions. In collaboration with Microsoft Research, we report a longitudinal mixed-reality study on avatar appearance in real-world meetings between co-workers, comparing personalised full-body realistic and cartoon avatars. The results imply that when participants use realistic avatars first, they may have higher expectations and perceive their colleagues' emotional states less accurately. Participants may also become more accustomed to cartoon avatars over time, and the overall use of avatars may lead to less accurate perception of negative emotions. The work presented here contributes towards the field of detecting and generating non-verbal cues for VCs in IVEs, which are also important building blocks for creating autonomous agents for IVEs. Additionally, this work contributes to the games and work-industry fields through an immersive ML pipeline for detecting social attitudes and through insights into using different avatar styles over time in real-world meetings.
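    The engagement-recognition model above is described as a reinforcement learning algorithm with imitation-learning rewards and a temporal memory element. The sketch below illustrates only the temporal-memory idea, as a plain supervised recurrent classifier over per-frame social-signal features; the feature set, dimensions, and labels are invented, and this is not the thesis's pipeline.

        # Simplified sketch (not the thesis pipeline): a GRU maps a sequence of
        # per-frame social-signal features (e.g. gaze, head pose, proximity)
        # to an engaged / not-engaged label. All dimensions are made up.
        import torch
        import torch.nn as nn

        class EngagementClassifier(nn.Module):
            def __init__(self, n_features=12, hidden=64, n_classes=2):
                super().__init__()
                self.gru = nn.GRU(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                  # x: (batch, time, n_features)
                _, h = self.gru(x)                 # h: (1, batch, hidden), last state
                return self.head(h.squeeze(0))     # logits: (batch, n_classes)

        model = EngagementClassifier()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # One dummy training step on random data, just to show the shapes involved.
        features = torch.randn(8, 120, 12)         # 8 clips, 120 frames, 12 signals
        labels = torch.randint(0, 2, (8,))
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()
        print(float(loss))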

    Affective reactions towards socially interactive agents and their computational modeling

    Over the past 30 years, researchers have studied human reactions towards machines applying the Computers Are Social Actors paradigm, which contrasts reactions towards computers with reactions towards humans. The last 30 years have also seen improvements in technology that have led to tremendous changes in computer interfaces and to the development of Socially Interactive Agents. This raises the question of how humans react to Socially Interactive Agents. Answering it requires knowledge from several disciplines, which is why this interdisciplinary dissertation is positioned within psychology and computer science. It aims to investigate affective reactions to Socially Interactive Agents and how these can be modeled computationally. After a general introduction and background, this thesis first provides an overview of the Socially Interactive Agent system used in this work. Second, it presents a study comparing a human and a virtual job interviewer, which shows that both interviewers induce shame in participants to the same extent. Third, it reports on a study investigating obedience towards Socially Interactive Agents. The results indicate that participants obey human and virtual instructors in similar ways, and that both types of instructors evoke feelings of stress and shame to the same extent. Fourth, a stress management training using biofeedback with a Socially Interactive Agent is presented. The study shows that a virtual trainer can teach coping techniques for emotionally challenging social situations. Fifth, it introduces MARSSI, a computational model of user affect. The evaluation of the model shows that it is possible to relate sequences of social signals to affective reactions while taking emotion regulation processes into account. Finally, the Deep method is proposed as a starting point for deeper computational modeling of internal emotions. The method combines social signals, verbalized introspection information, context information, and theory-driven knowledge. An exemplary application to the emotion shame and a schematic dynamic Bayesian network for its modeling are illustrated. Overall, this thesis provides evidence that human reactions towards Socially Interactive Agents are very similar to those towards humans, and that it is possible to model these reactions computationally.
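    MARSSI and the Deep method are only summarized above; as a generic, hedged illustration of the kind of dynamic Bayesian filtering such models imply, the toy forward filter below tracks a two-state hidden affect variable from discrete social signals. The states, signals, and probabilities are invented, and this is not the MARSSI model itself.

        # Toy forward filter for a hidden affect variable ("neutral" vs "shame")
        # updated from discrete social-signal observations. All numbers and
        # signal names are invented for illustration.
        import numpy as np

        states = ["neutral", "shame"]
        signals = ["gaze_avert", "smile", "self_touch"]

        transition = np.array([[0.9, 0.1],      # P(next state | current state)
                               [0.3, 0.7]])
        emission = np.array([[0.2, 0.6, 0.2],   # P(signal | neutral)
                             [0.5, 0.1, 0.4]])  # P(signal | shame)

        def forward_filter(observations, prior=(0.8, 0.2)):
            """Return P(state_t | signals_1..t) after each observation."""
            belief = np.array(prior, dtype=float)
            history = []
            for obs in observations:
                idx = signals.index(obs)
                belief = transition.T @ belief        # predict the next state
                belief = belief * emission[:, idx]    # weight by the observation
                belief /= belief.sum()                # renormalize
                history.append(dict(zip(states, belief.round(3))))
            return history

        print(forward_filter(["smile", "gaze_avert", "self_touch", "gaze_avert"]))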