Game-inspired Pedagogical Conversational Agents: A Systematic Literature Review
Pedagogical conversational agents (PCAs) are an innovative way to help learners improve their academic performance via intelligent dialog systems. However, PCAs have not yet reached their full potential. They often fail because users perceive conversations with them as not engaging. Enriching them with game-based approaches could contribute to mitigating this issue. One could enrich a PCA with game-based approaches by gamifying it to foster positive effects, such as fun and motivation, or by integrating it into a game-based learning (GBL) environment to promote effects such as social presence and enable individual learning support. We summarize PCAs that are combined with game-based approaches under the novel term “game-inspired PCAs”. We conducted a systematic literature review on this topic, as previous literature reviews on PCAs either have not combined the topics of PCAs and GBL or have done so to a limited extent only. We analyzed the literature regarding the existing design knowledge base, the game elements used, the thematic areas and target groups, the PCA roles and types, the extent of artificial intelligence (AI) usage, and opportunities for adaptation. We reduced the initial 3,034 records to 50 fully coded papers, from which we derived a morphological box and revealed current research streams and future research recommendations. Overall, our results show that the topic offers promising application potential but that scholars and practitioners have not yet considered it holistically. For instance, we found that researchers have rarely provided prescriptive design knowledge, have not sufficiently combined game elements, and have seldom used AI algorithms as well as intelligent possibilities of user adaptation in PCA development. Furthermore, researchers have scarcely considered certain target groups, thematic areas, and PCA roles.
Consequently, our paper contributes to research and practice by addressing research gaps and structuring the existing knowledge base.
Voice interfaces in everyday life
Voice User Interfaces (VUIs) are becoming ubiquitously available, being embedded both into everyday mobility via smartphones and into the life of the home via “assistant” devices. Yet exactly how users of such devices practically thread that use into their everyday social interactions remains underexplored. By collecting and studying audio data from month-long deployments of the Amazon Echo in participants’ homes, informed by ethnomethodology and conversation analysis, our study documents the methodical practices of VUI users and how that use is accomplished in the complex social life of the home. The data we present show how the device is made accountable to and embedded into conversational settings such as family dinners, where various simultaneous activities are being achieved. We discuss how the VUI is finely coordinated with the sequential organisation of talk. Finally, we locate implications for the accountability of VUI interaction and for request and response design, and raise conceptual challenges to the notion of designing “conversational” interfaces.
Effective Tutoring with Empathic Embodied Conversational Agents
This thesis examines the prospect of using empathy in an Embodied Tutoring System (ETS) that guides students through an online quiz (by providing feedback on student answers and responding to self-reported student emotion). The ETS seeks to imitate human behaviours successfully used in one-to-one human tutorial interactions. The main hypothesis is that the interaction with an empathic ETS results in greater learning gains than a neutral ETS, primarily by encouraging positive and reducing negative student emotions using empathic feedback.
In a preparatory study we investigated different strategies for expressing emotion by the ETS. We established that a multimodal strategy achieves the best results regarding how accurately human participants can recognise the emotions. This approach was used in developing the feedback strategy for our empathic ETS.
The preparatory study was followed by two studies in which we compared a neutral with an empathic ETS; the ETS in the second study was developed using results from the first. In both studies, we found no statistically significant difference in learning gains between the neutral and the empathic ETS. However, we did find a number of interactions between the ETS condition and learning gains, in particular with 1) student scores on an empathic tendency test and 2) student ability. We also analysed the subjective responses and the relation between self-reported emotions during the quiz and student learning gains.
Based on our studies in a formal classroom setting, we assess the prospects of using empathic agents in the classroom and describe a number of requirements for their effective use.
A Proposal to Create Learning Environments in Virtual Worlds Integrating Advanced Educative Resources
Social networking has been a global consumer phenomenon during the last few years. Online communities are changing the way people behave, share, and interact within their daily lives. Most such communities are mainly focused on sharing content and communicating using a traditional web interface. However, social virtual worlds are computer-simulated environments that users can "inhabit" and in which they can interact and create objects. Education is one of the most interesting applications of virtual worlds, as their flexibility can be exploited to create heterogeneous groups from all over the world who can collaborate synchronously in different virtual spaces. In this paper, we highlight the potential of virtual worlds as an educative tool and propose a model to create learning environments within Second Life or OpenSimulator combining the Moodle learning management system, embodied conversational metabots, and programmable 3D objects. We have implemented the proposal in a learning system for several subjects of the Computer Science degree in our university and show that it fostered engagement and collaboration and helped the students to better understand complex concepts. Research funded by projects CICYT TIN 2011-28620-C02-01, CICYT TEC 2011-28626 C02-02, CAM CONTEXTS (S2009/TIC-1485), and DPS 2008-07029-C02-02.
Trust me on this one: conforming to conversational assistants
Conversational artificial agents and artificially intelligent (AI) voice assistants are becoming increasingly popular. Digital virtual assistants such as Siri, and conversational devices such as the Amazon Echo or Google Home, are permeating everyday life and are designed to be more and more humanlike in their speech. This study investigates the effect this can have on one’s conformity with an AI assistant. In the 1950s, Solomon Asch’s experiments already demonstrated the power and danger of conformity amongst people. In these classic experiments, test persons were asked to answer relatively simple questions, whilst others pretending to be participants tried to convince the test person to give wrong answers. These studies were later replicated with embodied robots, but such physical robots are still rare. In light of our increasing reliance on AI assistants, this study investigates to what extent an individual will conform to a disembodied virtual assistant. We also investigate whether there is a difference between a group that interacts with an assistant that communicates through text, one with a robotic voice, and one with a humanlike voice. The assistant attempts to subtly influence participants’ final responses in a general knowledge quiz, and we measure how often participants change their answer after having been given advice. Results show that participants conformed significantly more often to the assistant with a human voice than to the one that communicated through text.
Affective Expressions in Conversational Agents for Learning Environments: Effects of curiosity, humour, and expressive auditory gestures
Conversational agents -- systems that imitate natural language discourse -- are becoming an increasingly prevalent human-computer interface, being employed in various domains including healthcare, customer service, and education. In education, conversational agents, also known as pedagogical agents, can be used to encourage interaction, which is considered crucial for the learning process. Though pedagogical agents have been designed for learners of diverse age groups and subject matter, they retain the overarching goal of eliciting learning outcomes, which can be broken down into cognitive, skill-based, and affective outcomes. Motivation is a particularly important affective outcome, as it can influence what, when, and how we learn. Understanding, supporting, and designing for motivation is therefore of great importance for the advancement of learning technologies.
This thesis investigates how pedagogical agents can promote motivation in learners. Prior research has explored various features of the design of pedagogical agents and what effects they have on learning outcomes, and suggests that agents using social cues can adapt the learning environment to enhance both affective and cognitive outcomes. One social cue that is suggested to be of importance for enhancing learner motivation is the expression or simulation of affect in the agent. Informed by research and theory across multiple domains, three affective expressions are investigated: curiosity, humour, and expressive auditory gestures -- each aimed at enhancing motivation by adapting the learning environment in different ways, i.e., eliciting contagion effects, creating a positive learning experience, and strengthening the learner-agent relationship, respectively.
Three studies are presented in which each expression was implemented in a separate type of agent: physically embodied, text-based, and voice-based, with all agents taking on the role of a companion or less knowledgeable peer to the learner. The overall focus is on how each expression can be displayed, what its effects are on perception of the agent, and how it influences behaviour and learning outcomes. The studies result in theoretical contributions that add to our understanding of conversational agent design for learning environments. The findings provide support for the simulation of curiosity, the use of certain humour styles, and the addition of expressive auditory gestures in enhancing motivation in learners interacting with conversational agents, as well as indicating a need for further exploration of these strategies in future work.
Artificial Intelligence: Robots, Avatars, and the Demise of the Human Mediator
Published in cooperation with the American Bar Association Section of Dispute Resolution.
Artificial Intelligence: Robots, Avatars and the Demise of the Human Mediator
As technology has advanced, many have wondered whether (or simply when) artificially intelligent devices will replace the humans who perform complex, interactive, interpersonal tasks such as dispute resolution. Has science now progressed to the point that artificial intelligence devices can replace human mediators, arbitrators, dispute resolvers, and problem solvers? Can humanoid robots, attractive avatars, and other relational agents create the requisite level of trust and elicit the truthful, perhaps intimate or painful, disclosures often necessary to resolve a dispute or solve a problem? This article explores these questions. Regardless of whether the reader is convinced that the demise of the human mediator or arbitrator is imminent, one cannot deny that artificial intelligence now has the capability to assume many of the responsibilities currently being performed by alternative dispute resolution (ADR) practitioners. It is fascinating (and perhaps unsettling) to realize the complexity and seriousness of tasks currently delegated to avatars and robots. This article reviews some of those delegations and suggests how the artificial intelligence developed to complete those assignments may be relevant to dispute resolution and problem solving. “Relational agents,” which can have a physical presence such as a robot, be embodied in an avatar, or have no detectable form whatsoever and exist only as software, are able to create long-term socio-economic relationships with users built on trust, rapport, and therapeutic goals. Relational agents are interacting with humans in circumstances that have significant consequences in the physical world. These interactions provide insights as to how robots and avatars can participate productively in dispute resolution processes. Can human mediators and arbitrators be replaced by robots and avatars that not only physically resemble humans, but also act, think, and reason like humans?
And, to raise a particularly interesting question: can robots, avatars, and other relational agents look, move, act, think, and reason even “better” than humans?
- …