
    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Confirmation Report: Modelling Interlocutor Confusion in Situated Human Robot Interaction

    Human-Robot Interaction (HRI) is an important but challenging field focused on making the interaction between humans and robots more intelligent and effective. However, building natural conversational HRI is an interdisciplinary challenge for scholars, engineers, and designers. It is generally assumed that the pinnacle of human-robot interaction will be fluid, naturalistic conversational interaction that in important ways mimics how humans interact with each other. This is challenging at a number of levels, and in particular there are considerable difficulties when it comes to naturally monitoring and responding to the user's mental state. One aspect of mental state that has received little attention to date is monitoring the user for possible confusion. Confusion is a non-trivial mental state which can be seen as having at least two substates, associated with either positive or negative emotions. In the former case, when people are productively confused, they have a passion to solve their current difficulties. By contrast, people in unproductive confusion may lose their engagement and motivation to overcome those difficulties, which may even lead them to abandon the current conversation. While there has been some research on confusion monitoring and detection, it has been limited, with most work focused on evaluating confusion states in online learning tasks. The central hypothesis of this research is that the monitoring and detection of confusion states in users is essential to fluid task-centric HRI, and that it should be possible to detect such confusion and adjust policies to mitigate it. In this report, I expand on this hypothesis and set out several research questions.
    I also provide a comprehensive literature review, outline the work done to date towards the research hypothesis, and set out plans for future experimental work.

    A conceptual affective design framework for the use of emotions in computer game design

    The purpose of this inquiry is to understand how emotions influence gameplay and to review contemporary techniques for designing for them, with the aim of devising a model that brings currently disparate parts of the game design process together. Emotions sit at the heart of a game player's level of engagement. They are evoked across many of the components that facilitate gameplay, including the interface, the player's avatar, non-player characters and narrative. Understanding the role of emotion in creating truly immersive and believable environments is critical for game designers. After discussing a taxonomy of emotion, this paper presents a systematic literature review of designing for emotion in computer games. Following this, a conceptual framework for affective design is offered as a guide for the future of computer game design.

    Virtual environments promoting interaction

    Virtual reality (VR) has been widely researched in academia and is now breaking into industry. Most companies do not have access to this technology as a collaboration tool, because existing solutions usually require specific devices that are not at hand for the common office user. Other collaboration platforms are based on video, speech and text, but VR allows users to share the same 3D space. In this 3D space there can be added functionality or information that would not be possible in a real-world environment, something intrinsic to VR. This dissertation has produced a 3D framework that promotes nonverbal communication, which plays a fundamental role in human interaction and is mostly based on emotion. In education, confusion is known to influence learning gains if it is properly managed. We designed a study to evaluate how lexical, syntactic and n-gram features influence perceived confusion, and found results (not statistically significant) suggesting that it is possible to build a machine learning model that predicts the level of confusion based on these features. This model was used to manipulate the script of a given presentation, and user feedback shows a trend that manipulating these features to theoretically lower the level of confusion in the text not only reduces reported confusion but also increases the reported sense of presence. Another contribution of this dissertation comes from the intrinsic features of a 3D environment, where one can carry out actions that are not possible in the real world. We designed an automatic adaptive lighting system that reacts to the user's perceived engagement. This hypothesis was partially rejected, as the results go against what we hypothesized but do not have statistical significance.
    Three lines of research may stem from this dissertation. First, more complex features, such as syntax trees, could be used to train the machine learning model. Second, in an Intelligent Tutoring System this model could adjust the avatar's speech in real time if fed by a real-time confusion detector; in a social scenario, the set of basic emotions is well adjusted and can enrich it, and facial emotion recognition could extend this effect to the avatar's body to fuel social synchronization and increase the sense of presence. Finally, this dissertation was based on the premise of using ubiquitous devices, but with the rapid evolution of technology we should expect new devices to be present in offices, which opens possibilities for other modalities.
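    The dissertation abstract above describes scoring perceived confusion from lexical, syntactic and n-gram features of a presentation script. As a minimal illustrative sketch (the features, weights and scoring function here are invented for demonstration and are not the dissertation's actual trained model), such a feature-based scorer might be shaped like this:

    ```python
    # Illustrative sketch: extract simple lexical and bigram features from a
    # script and score "predicted confusion" with a toy linear model.
    # The feature set and WEIGHTS below are hypothetical stand-ins for a
    # model that would, in practice, be learned from annotated data.
    import re
    from collections import Counter

    def lexical_features(text):
        words = re.findall(r"[a-zA-Z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        avg_word_len = sum(map(len, words)) / len(words)
        avg_sent_len = len(words) / len(sentences)
        # Repeated bigrams can signal redundancy; long words, lexical density.
        bigrams = Counter(zip(words, words[1:]))
        repeated = sum(c for c in bigrams.values() if c > 1)
        return {"avg_word_len": avg_word_len,
                "avg_sent_len": avg_sent_len,
                "repeated_bigrams": repeated}

    # Hypothetical weights standing in for a trained regression model.
    WEIGHTS = {"avg_word_len": 0.3, "avg_sent_len": 0.05, "repeated_bigrams": -0.1}

    def confusion_score(text):
        feats = lexical_features(text)
        return sum(WEIGHTS[k] * v for k, v in feats.items())

    simple = "We show the room. The room has light. The light can change."
    dense = ("The aforementioned adaptive luminance subsystem instantiates "
             "engagement-contingent modulation heuristics.")
    print(confusion_score(dense) > confusion_score(simple))  # prints True
    ```

    The point of the sketch is only the pipeline shape (text → feature vector → scalar confusion score); manipulating a script to lower the score is then a matter of rewriting it and re-scoring.
    
    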

    Cohousing IoT: Technology Design for Life in Community

    This paper presents a research-through-design project to develop and interpret speculative smart home technologies for cohousing communities—Cohousing IoT. Fieldwork at multiple sites, coupled with a constructive design research process, led to three prototypes designed for cohousing communities: Cohousing Radio, Physical RSVP, and Participation Scales. These were brought back to the communities that inspired them as a form of evaluation, but also to generate new understandings of designing for cohousing. In discussing how the communities understand these prototypes, this paper offers an account of how research through design generates knowledge that is specific to the conditions and issues that matter to communities. This contributes to design research more broadly in two ways. First, it demonstrates how contemporary ideas of smart home technology are, or could be made, relevant to broader ways of living in the future. Second, it provides an example of how a design research process can serve to uncover community values, issues, and goals.

    E-Drama: Facilitating Online Role-play using an AI Actor and Emotionally Expressive Characters.

    This paper describes a multi-user role-playing environment, e-drama, which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research, e-drama, is a 2D graphical environment in which users are represented by static cartoon figures. An application has been developed to integrate the existing e-drama tool with several new components that support avatars with emotionally expressive behaviours, rendered in a 3D environment. The functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor, EMMA (emotion, metaphor and affect), that operates a bit-part character in the improvisation; and (b) drive the animations of avatars in the user interface using the Demeanour framework, so that they react bodily in ways consistent with the affect they are expressing. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users' sense of presence. Moreover, the system has the potential to enhance normal classroom education for young people with or without learning disabilities by providing efficient, personalised social-skill, language and career training around the clock via role-play, with automatic monitoring.
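    The abstract above describes a pipeline from text-based affect extraction to expressive avatar animation. As a toy sketch (the keyword lists and animation-cue names here are hypothetical illustrations, not the actual EMMA or Demeanour implementation), the affect-to-animation mapping could be structured as:

    ```python
    # Toy sketch of the described pipeline: detect a coarse affect label in an
    # improvisational utterance, then map it to an avatar animation cue.
    # AFFECT_KEYWORDS and ANIMATION_CUES are illustrative placeholders.
    AFFECT_KEYWORDS = {
        "anger": {"furious", "hate", "angry"},
        "joy": {"great", "happy", "wonderful"},
        "sadness": {"miserable", "sad", "alone"},
    }

    ANIMATION_CUES = {  # hypothetical posture/animation names
        "anger": "clenched_fists",
        "joy": "open_arms",
        "sadness": "slumped_shoulders",
        "neutral": "idle",
    }

    def detect_affect(utterance):
        """Return the first affect label whose keywords appear in the text."""
        words = set(utterance.lower().split())
        for affect, cues in AFFECT_KEYWORDS.items():
            if words & cues:
                return affect
        return "neutral"

    def animation_for(utterance):
        """Map an utterance to the animation cue driving the avatar's body."""
        return ANIMATION_CUES[detect_affect(utterance)]

    print(animation_for("I am so happy today"))  # prints open_arms
    ```

    A real affect extractor for open-ended improvisational text would need far more than keyword matching (metaphor handling is one of EMMA's stated concerns), but the label-to-animation indirection shown here is the part that lets the same analysis drive both the AI actor and the avatar bodies.
    
    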

    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behaviour by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely "sensing", "analysis", and "application". Sensing investigates the different sensing modalities used in existing real-time affective applications; Analysis explores different approaches to emotion recognition and visualization based on different types of collected data; and Application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.