
    Five Lenses on Team Tutor Challenges: A Multidisciplinary Approach

    This chapter describes five disciplinary domains of research, or lenses, that contribute to the design of a team tutor. We focus on four significant challenges in developing Intelligent Team Tutoring Systems (ITTSs) and explore how the five lenses can offer guidance for these challenges. The four challenges arise in the design of team member interactions, performance metrics and skill development, feedback, and tutor authoring. The five lenses or research domains that we apply to these four challenges are Tutor Engineering, Learning Sciences, Science of Teams, Data Analyst, and Human–Computer Interaction. This matrix of applications from each perspective offers a framework to guide designers in creating ITTSs.

    Understanding when students are active‐in‐thinking through modeling‐in‐context

    Learning-in-action depends on interactions with learning content, peers and real-world problems. However, effective learning-in-action also depends on the extent to which students are active-in-thinking, making meaning of their learning experience. A critical component of any technology to support active thinking is the ability to ascertain whether (or to what extent) students have succeeded in internalizing the disciplinary strategies, norms of thinking, discourse practices and habits of mind that characterize deep understanding in a domain. This presents what we call a dilemma of modeling-in-context: teachers routinely analyze this kind of thinking for small numbers of students in activities they create or customize for the needs of their students; however, doing so at scale and in real time requires some automated processes for modeling student work. Current techniques for developing models that reflect specific pedagogical activities and learning objectives that a teacher might create require either more expertise or more time than teachers have. In this paper, we examine a theoretical approach to addressing the problem of modeling active thinking in its pedagogical context that uses teacher-created rubrics to generate models of student work. The results of this examination show how appropriately constructed learning technologies can enable teachers to develop custom automated rubrics for modeling active thinking and meaning-making from the records of students' dialogic work.

    Practitioner Notes

    What is already known about this topic
    - Many immersive educational technologies, such as digital games and simulations, enable students to take consequential action in a realistic context and to interact with peers, mentors and pedagogical agents. Such technologies help students to be active-in-thinking: engaging deeply with, reflecting on and otherwise making meaning of their learning experience.
    - There are now many immersive educational technologies with integrated authoring tools that enable teachers to customize the learning experience with relative ease, reducing barriers to adoption and improving student learning.
    - Educational technologies that support learning-in-action typically contain student models that operate in real time to control the behavior of pedagogical agents, deliver just-in-time interventions, select appropriate content or otherwise measure and promote active thinking, but these student models may not work appropriately if teachers customize the learning experience.
    - Much as there are authoring tools that allow teachers to customize the curriculum of a given learning technology, there is a need for authoring tools that allow teachers to customize the associated student models as well.

    What this paper adds
    - This paper presents a novel, rubric-based approach to developing automated student models for new activities that teachers create in digital learning environments that promote active thinking.
    - Our approach combines machine learning techniques with teacher expertise, allowing teachers to participate in the design of automated student models of active thinking that, with further development, could be scaled by leveraging their skills in rubric development.
    - Our results show that a rubric-based approach can outperform a machine learning approach in this context. More importantly, in some cases, the rubric-based approach can produce reliable automated models based on information that a teacher can easily provide.

    Implications for practice and/or policy
    - If integrated into authoring tools, the rubric-based approach could allow teachers to participate in the design of automated models for educational technologies customized to their instructional needs.
    - Through this design process, teachers could develop a better understanding of how the automated modeling system works, which in turn could increase the adoption of educational technologies that promote active thinking.
    - Because the rubric-based approach enables teachers to identify key connections among concepts relevant to the pedagogical context, rather than general concepts or linguistic features, it is more likely to facilitate targeted feedback that helps promote the development of active thinking.
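    In its simplest form, a rubric-based student model of the kind this abstract describes might map teacher-specified concept criteria onto records of student dialogue. The sketch below is purely illustrative: the criterion names, concept terms, and thresholds are invented, and the paper's actual modeling method is not specified here.

```python
# Hypothetical sketch: checking student dialogue against a teacher-created
# rubric. All criterion names, concept lists, and thresholds are invented
# for illustration only.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    # Concept terms the teacher associates with this criterion.
    concepts: list = field(default_factory=list)
    # Minimum number of distinct concept terms that must appear.
    threshold: int = 1

def score_dialogue(utterances, rubric):
    """Return {criterion name: met?} for a list of student utterances."""
    text = " ".join(u.lower() for u in utterances)
    results = {}
    for c in rubric:
        hits = sum(1 for term in c.concepts if term.lower() in text)
        results[c.name] = hits >= c.threshold
    return results

# A teacher-authored rubric (hypothetical criteria).
rubric = [
    Criterion("uses-evidence", ["data", "observed", "measurement"], 2),
    Criterion("links-cause-effect", ["because", "therefore", "leads to"], 1),
]

dialogue = [
    "The water level rose because we observed more rainfall in the data.",
]
print(score_dialogue(dialogue, rubric))
# {'uses-evidence': True, 'links-cause-effect': True}
```

    A real system would of course go beyond surface term matching, but the sketch conveys the core idea: the teacher supplies pedagogically meaningful criteria, and the system turns them into an automated check over students' dialogic work.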

    Challenges And Propositions For Developing Effective Team Training With Adaptive Tutors

    A key challenge for cost-effective Intelligent Tutoring Systems (ITSs) is the ability to create generalizable domain, learner, and pedagogical models so they can be reused many times over. Investment in this technology will be needed to succeed in developing ITSs for team training. The purpose of this chapter is to propose an instructional framework for guiding team ITS researchers in their development of these models for reuse. We establish a foundation for the framework with three propositions. First, we propose that understanding how teams develop is needed to establish a science-based foundation for modeling. Toward this end, we conduct a detailed exploration of the Kozlowski, Watola, Jensen, Kim, and Botero (2009) theory of team development and leadership, and describe a use case example to demonstrate how team training was developed for a specific stage in their model. Next, we propose that understanding measures of learning and performance will inform learner modeling requirements for each stage of team development. We describe measures developed for the use case and how they were used to understand teamwork skill development. We then discuss effective team training strategies and explain how they were implemented in the use case to understand their implications for pedagogical modeling. From this exploration, we describe a generic instructional framework recommending effective training strategies for each stage of team development. To inform the development of reusable models, we recommend selecting different team task domains and varying team size to begin researching commonalities and differences in the instructional framework.