
    A Comparative Analysis of Cognitive Tutoring and Constraint-Based Modeling

    Numerous approaches to student modeling have been proposed since the inception of the field more than three decades ago. What the field completely lacks is comparative analyses of different student modeling approaches. Such analyses are sorely needed, as they can identify the most promising approaches and provide guidelines for future research. In this paper we compare Cognitive Tutoring to Constraint-Based Modeling (CBM). We present our experiences in implementing a database design tutor using both methodologies and highlight their strengths and weaknesses. We compare their characteristics and argue that the differences are often more apparent than real. For specific domains, one approach may be favoured over the other, making them viable complementary methods for supporting learning.
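
    For readers less familiar with CBM, its core mechanism is a pair of conditions per constraint: a relevance condition (when the constraint applies) and a satisfaction condition (what must then hold). A constraint is violated only when it is relevant to a solution but unsatisfied. The Python sketch below illustrates the idea with an invented foreign-key constraint over a toy schema representation; it is an assumption-laden illustration, not the paper's database design tutor.

```python
# A minimal sketch of Constraint-Based Modeling: each constraint pairs a
# relevance condition with a satisfaction condition, and is violated only
# when relevant but unsatisfied. The schema representation and the example
# constraint are illustrative assumptions, not the paper's actual tutor.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    relevance: Callable[[dict], bool]     # when does the constraint apply?
    satisfaction: Callable[[dict], bool]  # what must hold when it applies?
    feedback: str

def violated(solution: dict, constraints: list[Constraint]) -> list[str]:
    """Feedback for every constraint that is relevant but unsatisfied."""
    return [c.feedback for c in constraints
            if c.relevance(solution) and not c.satisfaction(solution)]

# Invented example: every foreign key must reference an existing table.
fk_target_exists = Constraint(
    relevance=lambda s: bool(s["foreign_keys"]),
    satisfaction=lambda s: all(fk["references"] in s["tables"]
                               for fk in s["foreign_keys"]),
    feedback="A foreign key must reference a table in your schema.",
)

student = {"tables": {"orders"},
           "foreign_keys": [{"column": "customer_id", "references": "customers"}]}
print(violated(student, [fk_target_exists]))
# -> ['A foreign key must reference a table in your schema.']
```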

    Is AI the better programming partner? Human-Human Pair Programming vs. Human-AI pAIr Programming

    The emergence of large language models (LLMs) that excel at code generation, and of commercial products such as GitHub's Copilot, has sparked interest in human-AI pair programming (referred to as "pAIr programming"), where an AI system collaborates with a human programmer. While traditional pair programming between humans has been extensively studied, it remains uncertain whether its findings can be applied to human-AI pair programming. We compare human-human and human-AI pair programming, exploring their similarities and differences in interaction, measures, benefits, and challenges. We find that the effectiveness of both approaches is mixed in the literature (though the measures used for pAIr programming are not as comprehensive). We summarize moderating factors on the success of human-human pair programming, which provide opportunities for pAIr programming research. For example, mismatched expertise makes pair programming less productive; therefore, well-designed AI programming assistants may adapt to differences in expertise levels.

    Evaluating and improving adaptive educational systems with learning curves

    Personalised environments such as adaptive educational systems can be evaluated and compared using performance curves. Such summative studies are useful for determining whether new modifications enhance or degrade performance. Performance curves also have the potential to be utilised in formative studies that can shape adaptive model design at a much finer level of granularity. We describe the use of learning curves for evaluating personalised educational systems and outline some of the potential pitfalls and how they may be overcome. We then describe three studies in which we demonstrate how learning curves can be used to drive changes in the user model. First, we show how using learning curves for subsets of the domain model can yield insight into the appropriateness of the model’s structure. In the second study we use this method to experiment with model granularity. Finally, we use learning curves to analyse a large volume of user data to explore the feasibility of using them as a reliable method for fine-tuning a system’s model. The results of these experiments demonstrate the successful use of performance curves in formative studies of adaptive educational systems.
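
    As context for the method, learning curves in this literature typically plot a knowledge component's error rate against its practice opportunities and are fitted with a power law, E(n) = a·n^(-b); a flat slope or a poor fit can flag a mis-specified component, which is the kind of formative signal these studies exploit. The sketch below fits such a curve to invented error rates, not data from the paper.

```python
# A hedged sketch of fitting a power-law learning curve E(n) = a * n**(-b)
# to per-opportunity error rates. The error rates below are invented for
# illustration; they are not data from the paper's three studies.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b):
    return a * n ** (-b)

opportunities = np.arange(1, 9)  # 1st, 2nd, ... 8th practice opportunity
error_rates = np.array([0.62, 0.45, 0.38, 0.30, 0.27, 0.24, 0.23, 0.21])

(a, b), _ = curve_fit(power_law, opportunities, error_rates, p0=(0.6, 0.5))
residuals = error_rates - power_law(opportunities, a, b)
r2 = 1 - np.sum(residuals**2) / np.sum((error_rates - error_rates.mean())**2)
print(f"E(n) = {a:.2f} * n^(-{b:.2f}),  R^2 = {r2:.3f}")

# A near-flat curve (b close to 0) or a low R^2 for one knowledge component
# suggests the model may lump together skills that are learned at different
# rates, i.e. the component should perhaps be split or restructured.
```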

    Evaluating the Effectiveness of Tutorial Dialogue Instruction in an Exploratory Learning Context

    [Proceedings of] ITS 2006, 8th International Conference on Intelligent Tutoring Systems, 26-30 June 2006, Jhongli, Taoyuan County, Taiwan.
    In this paper we evaluate the instructional effectiveness of tutorial dialogue agents in an exploratory learning setting. We hypothesize that the creative nature of an exploratory learning environment creates an opportunity for the benefits of tutorial dialogue to be more clearly evidenced than in previously published studies. In a previous study we showed an advantage for tutorial dialogue support in an exploratory learning environment where that support was administered by human tutors [9]. Here, using a similar experimental setup and materials, we evaluate the effectiveness of tutorial dialogue agents modeled after the human tutors from that study. The results from this study provide evidence of a significant learning benefit of the dialogue agents.
    This project is supported by the ONR Cognitive and Neural Sciences Division, Grant number N000140410107.

    Simulated Students and Classroom Use of Model-Based Intelligent Tutoring

    Two educational uses of models and simulations: 1) students create models and use simulations; and 2) researchers create models of learners to guide the development of reliably effective materials. Cognitive tutors simulate and support tutoring; data is crucial to creating an effective model. The Pittsburgh Science of Learning Center provides resources for modeling, authoring, and experimentation, including a repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model; a help-seeking model tutors metacognition; and Scooter uses machine-learned detectors of student engagement (a sketch of such a detector follows).
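
    Engagement detectors of this kind are typically classifiers trained on features extracted from tutor interaction logs. The sketch below is a generic stand-in with invented features and labels; it is not the actual Scooter detector.

```python
# A generic sketch of a machine-learned engagement detector over tutor-log
# features, in the spirit of the detectors the abstract mentions. Features,
# labels, and the example action are invented; this is not Scooter's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-action features: [seconds on step, hints requested, consecutive errors]
X = np.array([[2.1, 3, 4], [1.5, 4, 5], [18.0, 0, 1],
              [25.3, 1, 0], [3.0, 2, 3], [30.1, 0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = disengaged/"gaming" (hand-labelled)

detector = LogisticRegression().fit(X, y)
p = detector.predict_proba([[2.0, 5, 4]])[0, 1]
print(f"P(disengaged) = {p:.2f}")  # a fast, hint-heavy action looks like gaming
```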

    Expert interpretation of bar and line graphs: The role of graphicacy in reducing the effect of graph format.

    The distinction between informational and computational equivalence of representations, first articulated by Larkin and Simon (1987), has been a fundamental principle in the analysis of diagrammatic reasoning and has been supported empirically on numerous occasions. We present an experiment that investigates this principle in relation to the performance of expert graph users of 2 × 2 'interaction' bar and line graphs. The study sought to determine whether expert interpretation is affected by graph format in the same way that novice interpretations are. The findings revealed that, unlike novices—and contrary to the assumptions of several graph comprehension models—experts' performance was the same for both graph formats, with their interpretation of bar graphs being no worse than that for line graphs. We discuss the implications of the study for guidelines for presenting such data and for models of expert graph comprehension.

    A contingency analysis of LeActiveMath's learner model

    We analyse how a learner modelling engine that uses belief functions for evidence and belief representation, called xLM, reacts to different input information about the learner, in terms of changes in the state of its beliefs and the decisions that it derives from them. The paper covers xLM's induction of evidence with different strengths from the qualitative and quantitative properties of the input, the amount of indirect evidence derived from direct evidence, and the differences in beliefs and decisions that result from interpreting different sequences of events simulating learners evolving in different directions. The results presented here substantiate our view of xLM as a proof of existence for a generic and potentially comprehensive learner modelling subsystem that explicitly represents uncertainty, conflict and ignorance in beliefs. These are key properties of learner modelling engines in the bizarre world of open Web-based learning environments that rely on the content+metadata paradigm.
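
    To make the belief-function machinery concrete: in Dempster-Shafer theory, mass assigned to the whole frame of discernment represents ignorance, and the normalisation constant in Dempster's rule of combination measures conflict between evidence sources, which maps directly onto the uncertainty, conflict and ignorance the paper highlights. The sketch below combines two invented pieces of evidence about mastery; it is an illustration of the general technique, not xLM's implementation.

```python
# A minimal sketch of belief-function combination (Dempster's rule) over a
# two-hypothesis frame {mastered, not_mastered}. Mass on the whole frame
# encodes ignorance; the normalisation constant encodes conflict. The
# example masses are invented; this is not xLM's actual code.
from itertools import product

FRAME = frozenset({"mastered", "not_mastered"})

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: intersect focal elements, renormalise by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass committed to contradictory hypotheses
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Evidence from a correct answer, with some ignorance kept on the frame:
e1 = {frozenset({"mastered"}): 0.6, FRAME: 0.4}
# Evidence from a slow response, weakly suggesting non-mastery:
e2 = {frozenset({"not_mastered"}): 0.3, FRAME: 0.7}

for focal, mass in combine(e1, e2).items():
    print(set(focal), round(mass, 3))
# -> {'mastered'} 0.512, {'not_mastered'} 0.146, whole frame 0.341
```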