
    Affective Expressions in Conversational Agents for Learning Environments: Effects of curiosity, humour, and expressive auditory gestures

    Conversational agents -- systems that imitate natural-language discourse -- are becoming an increasingly prevalent human-computer interface, employed in domains including healthcare, customer service, and education. In education, conversational agents, also known as pedagogical agents, can be used to encourage interaction, which is considered crucial for the learning process. Though pedagogical agents have been designed for learners of diverse age groups and subject matter, they retain the overarching goal of eliciting learning outcomes, which can be broken down into cognitive, skill-based, and affective outcomes. Motivation is a particularly important affective outcome, as it can influence what, when, and how we learn. Understanding, supporting, and designing for motivation is therefore of great importance for the advancement of learning technologies. This thesis investigates how pedagogical agents can promote motivation in learners. Prior research has explored various features of pedagogical agent design and their effects on learning outcomes, and suggests that agents using social cues can adapt the learning environment to enhance both affective and cognitive outcomes. One social cue suggested to be important for enhancing learner motivation is the expression or simulation of affect in the agent. Informed by research and theory across multiple domains, three affective expressions are investigated: curiosity, humour, and expressive auditory gestures, each aimed at enhancing motivation by adapting the learning environment in a different way: eliciting contagion effects, creating a positive learning experience, and strengthening the learner-agent relationship, respectively. Three studies are presented in which each expression was implemented in a separate type of agent (physically embodied, text-based, and voice-based), with all agents taking on the role of a companion or less knowledgeable peer to the learner. The overall focus is on how each expression can be displayed, what its effects are on perception of the agent, and how it influences behaviour and learning outcomes. The studies yield theoretical contributions that add to our understanding of conversational agent design for learning environments. The findings support the simulation of curiosity, the use of certain humour styles, and the addition of expressive auditory gestures as means of enhancing motivation in learners interacting with conversational agents, and they indicate a need for further exploration of these strategies in future work.

    "Teach AI How to Code": Using Large Language Models as Teachable Agents for Programming Education

    This work investigates large language models (LLMs) as teachable agents for learning by teaching (LBT). LBT with teachable agents helps learners identify their knowledge gaps and discover new knowledge. However, teachable agents require expensive programming of subject-specific knowledge. While LLMs as teachable agents can reduce the cost, LLMs' over-competence as tutees discourages learners from teaching. We propose a prompting pipeline that restrains LLMs' competence and makes them initiate "why" and "how" questions for effective knowledge-building. We combined these techniques into TeachYou, an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutee chatbot that can simulate misconceptions and unawareness prescribed in its knowledge state. Our technical evaluation confirmed that our prompting pipeline can effectively configure AlgoBo's problem-solving performance. Through a between-subjects study with 40 algorithm novices, we also observed that AlgoBo's questions led to knowledge-dense conversations (effect size=0.73). Lastly, we discuss design implications, cost-efficiency, and personalization of LLM-based teachable agents.
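    The abstract does not reproduce the prompting pipeline itself, but the general technique it names -- restraining an LLM tutee's competence to a prescribed knowledge state and having it initiate "why"/"how" questions -- might be sketched as follows. All names here (KnowledgeState, tutee_reply, llm_complete) are illustrative stand-ins, not AlgoBo's actual interface.

```python
# Minimal sketch of a competence-restraining prompting pipeline for an
# LLM-based tutee. Hypothetical names; not AlgoBo's actual implementation.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class KnowledgeState:
    """Prescribed tutee knowledge: what it knows, gets wrong, or lacks."""
    known: List[str] = field(default_factory=list)
    misconceptions: List[str] = field(default_factory=list)
    unaware_of: List[str] = field(default_factory=list)


def tutee_system_prompt(state: KnowledgeState) -> str:
    """Build a system prompt that restrains the LLM's competence and makes
    it probe the learner with 'why'/'how' follow-up questions."""
    return (
        "You are a novice programming student being taught by the user.\n"
        f"You already understand: {', '.join(state.known) or 'nothing yet'}.\n"
        "You hold these misconceptions and must keep expressing them until "
        f"the user corrects you: {', '.join(state.misconceptions) or 'none'}.\n"
        f"You are unaware of: {', '.join(state.unaware_of) or 'nothing'}. "
        "Never use a concept you are unaware of until the user explains it.\n"
        "Do not solve anything beyond this knowledge state. After each "
        "explanation from the user, ask exactly one 'why' or 'how' question "
        "about the part you understand least."
    )


def tutee_reply(history: List[Dict[str, str]],
                state: KnowledgeState,
                llm_complete: Callable[[List[Dict[str, str]]], str]) -> str:
    """One pipeline step: prepend the restraining prompt, then query any
    chat-completion backend passed in as `llm_complete`."""
    messages = [{"role": "system", "content": tutee_system_prompt(state)}]
    return llm_complete(messages + history)
```

    Passing the backend in as a callable keeps the sketch independent of any particular LLM API; the restraint lives entirely in the composed system prompt and the prescribed knowledge state.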

    Measuring What Matters Most

    An argument that choice-based, process-oriented educational assessments are more effective than static assessments of fact retrieval. If a fundamental goal of education is to prepare students to act independently in the world -- in other words, to make good choices -- an ideal educational assessment would measure how well we are preparing students to do so. Current assessments, however, focus almost exclusively on how much knowledge students have accrued and can retrieve. In Measuring What Matters Most, Daniel Schwartz and Dylan Arena argue that choice should be the interpretive framework within which learning assessments are organized. Digital technologies, they suggest, make this possible; interactive assessments can evaluate students in a context of choosing whether, what, how, and when to learn. Schwartz and Arena view choice not as an instructional ingredient to improve learning but as the outcome of learning. Because assessments shape public perception about what is useful and valued in education, choice-based assessments would provide a powerful lever for this reorientation in how people think about learning. Schwartz and Arena consider both theoretical and practical matters. They provide an anchoring example of a computerized, choice-based assessment, argue that knowledge-based assessments are a mismatch for our educational aims, offer concrete examples of choice-based assessments that reveal what knowledge-based assessments cannot, and analyze the practice of designing assessments. Because high variability leads to innovation, they suggest democratizing assessment design to generate as many instances as possible. Finally, they consider the most difficult aspect of assessment: fairness. Choice-based assessments, they argue, shed helpful light on fairness considerations.
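    The book's anchoring example is not reproduced in this abstract, but the underlying shift -- scoring what learners choose to do (whether, what, how, and when to learn) rather than what they retrieve -- can be sketched as a simple event log and summary. The schema below is purely illustrative; it is not Schwartz and Arena's instrument.

```python
# Illustrative sketch of a choice-based assessment log (hypothetical
# schema; not Schwartz and Arena's actual instrument).

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ChoiceEvent:
    """One learner decision recorded during an interactive assessment."""
    timestamp: float   # seconds into the session
    category: str      # e.g. "seek_resource", "retry", "skip", "check_work"
    productive: bool   # whether the choice advances learning, per some rubric


def choice_profile(events: List[ChoiceEvent]) -> Dict[str, object]:
    """Summarize what the learner chose to do; the unit of measurement is
    the decision, not the retrieved fact."""
    total = len(events)
    productive = sum(e.productive for e in events)
    by_category: Dict[str, int] = {}
    for e in events:
        by_category[e.category] = by_category.get(e.category, 0) + 1
    return {
        "decisions": total,
        "productive_ratio": productive / total if total else 0.0,
        "by_category": by_category,
    }


if __name__ == "__main__":
    log = [
        ChoiceEvent(12.0, "seek_resource", True),
        ChoiceEvent(48.5, "skip", False),
        ChoiceEvent(90.2, "retry", True),
    ]
    print(choice_profile(log))
```

    A real instrument would need a validated rubric for labeling choices productive; the point of the sketch is only that the logged unit is the decision, not the recalled fact.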

    The Role of Simulation in Supporting Longer-Term Learning and Mentoring with Technology

    Mentoring is an important part of professional development and longer-term learning. The nature of longer-term mentoring contexts means that designing, developing, and testing adaptive learning systems for use in this kind of context would be very costly, as it would require substantial amounts of financial, human, and time resources. Simulation is a cheaper and quicker approach for evaluating the impact of various design and development decisions. Within the Artificial Intelligence in Education (AIED) research community, however, surprisingly little attention has been paid to how to design, develop, and use simulations in longer-term learning contexts. The central challenge is that adaptive learning system designers and educational practitioners have limited guidance on what steps to consider when designing simulations to support longer-term mentoring system design and development decisions. My research takes as a starting point VanLehn et al.'s [1] introduction to applications of simulated students and Erickson et al.'s [2] suggested approach to creating simulated learning environments. My dissertation presents four research directions, using a real-world longer-term mentoring context, a doctoral program, for illustrative purposes. The first direction outlines a framework for guiding system designers as to what factors to consider when building pedagogical simulations, fundamentally to answer the question: how can a system designer capture a representation of a target learning context in a pedagogical simulation model? To illustrate the feasibility of this framework, this dissertation describes how to build SimDoc, a pedagogical model of a longer-term mentoring learning environment, namely a doctoral program. The second direction builds on the first and considers the issue of model fidelity, essentially to answer the question: how can a system designer determine a simulation model's fidelity to the desired granularity level? This dissertation shows how data from a target learning environment, the research literature, and common sense are combined to achieve SimDoc's medium-fidelity model. The third research direction explores calibration and validation issues to answer the question: how many simulation runs does it take for a practitioner to have confidence in the simulation model's output? This dissertation describes the steps taken to calibrate and validate the SimDoc model so that its output statistically matches data from the target doctoral program, the one at the University of Saskatchewan. The fourth direction demonstrates the applicability of the resulting pedagogical model. This dissertation presents two experiments using SimDoc to illustrate how to explore pedagogical questions concerning personalization strategies and to determine the effectiveness of different mentoring strategies in a target learning context. Overall, this dissertation shows that simulation is an important tool in the AIED system designers' toolkit as AIED moves towards designing, building, and evaluating systems meant to support learners in longer-term learning and mentoring contexts. Simulation allows a system designer to experiment with various design and implementation decisions in a cost-effective and timely manner before committing to these decisions in the real world.
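    The third direction's question -- how many runs before a practitioner can trust the model's output -- is, in its general form, a Monte Carlo stopping problem. A minimal sketch, assuming a normal-approximation confidence interval and a generic simulate_once() stand-in for one run of a pedagogical model such as SimDoc (not the dissertation's actual calibration procedure):

```python
# Sketch: choose the number of simulation runs by repeating until the 95%
# confidence-interval half-width of the output metric falls below a
# tolerance. `simulate_once` stands in for one run of a pedagogical model.

import math
import random
from typing import Callable, List, Tuple


def runs_until_confident(simulate_once: Callable[[], float],
                         tolerance: float,
                         z: float = 1.96,        # 95% normal CI
                         min_runs: int = 30,
                         max_runs: int = 100_000) -> Tuple[int, float, float]:
    """Run until the CI half-width z*s/sqrt(n) <= tolerance.
    Returns (runs, mean, half_width)."""
    values: List[float] = []
    while len(values) < max_runs:
        values.append(simulate_once())
        n = len(values)
        if n < min_runs:
            continue  # too few samples for the normal approximation
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        half_width = z * math.sqrt(var / n)
        if half_width <= tolerance:
            return n, mean, half_width
    # Budget exhausted: report whatever precision max_runs achieved.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return n, mean, z * math.sqrt(var / n)


if __name__ == "__main__":
    # Toy stand-in: simulated "years to degree completion" for one student.
    def toy() -> float:
        return random.gauss(5.5, 1.2)

    print(runs_until_confident(toy, tolerance=0.05))
```

    In practice a stopping rule like this would be applied to each output statistic the model is validated against, with the tolerance chosen to match the precision of the target program's data.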