36 research outputs found

    Carelessness and Affect in an Intelligent Tutoring System for Mathematics

    We investigate the relationship between students’ affect and their frequency of careless errors while using an Intelligent Tutoring System for middle school mathematics. A student is said to have committed a careless error when the student’s answer is wrong despite knowing the skill required to provide the correct answer. We operationalize the probability that an error is careless through an automated detector, developed using educational data mining, which infers the probability that an error involves carelessness rather than not knowing the relevant skill. This detector is then applied to log data produced by high-school students in the Philippines using a Cognitive Tutor for scatterplots. We study the relationship between carelessness and affect, triangulating between the detector of carelessness and field observations of affect. Surprisingly, we find that carelessness is common among students who frequently experience engaged concentration. This finding implies that a highly engaged student may paradoxically become overconfident or impulsive, leading to more careless errors. In contrast, students displaying confusion or boredom make fewer careless errors. Further analysis over time suggests that confused and bored students have lower learning overall. Thus, their mistakes appear to stem from a genuine lack of knowledge rather than carelessness.
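
    As a rough illustration of the kind of estimate such a detector produces, the sketch below applies Bayes' rule to Bayesian Knowledge Tracing-style parameters to ask how likely an incorrect answer is to be a careless slip rather than a missing skill. The function name and parameter values are illustrative assumptions, not the paper's trained detector.

        # Illustrative sketch (not the paper's detector): probability that an
        # incorrect answer was a careless slip rather than a missing skill,
        # computed from Bayesian Knowledge Tracing-style parameters via Bayes' rule.
        def p_careless_given_wrong(p_known, p_slip, p_guess):
            """P(student knew the skill | answer was wrong).

            p_known: prior probability the student has mastered the skill
            p_slip:  probability of answering wrong despite knowing the skill
            p_guess: probability of answering right without knowing the skill
            """
            wrong_and_known = p_known * p_slip
            wrong_and_unknown = (1.0 - p_known) * (1.0 - p_guess)
            return wrong_and_known / (wrong_and_known + wrong_and_unknown)

        # A student estimated at 95% mastery who still answers incorrectly is
        # more likely to have slipped than to lack the skill (~0.70 here).
        print(p_careless_given_wrong(p_known=0.95, p_slip=0.10, p_guess=0.20))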

    EDM 2011: 4th International Conference on Educational Data Mining, Eindhoven, July 6-8, 2011: Proceedings


    Discovery with Models: A Case Study on Carelessness in Computer-based Science Inquiry

    In recent years, an increasing number of analyses in Learning Analytics and Educational Data Mining (EDM) have adopted a "Discovery with Models" approach, where an existing model is used as a key component in a new EDM/analytics analysis. This article presents a theoretical discussion on the emergence of discovery with models, its potential to enhance research on learning and learners, and key lessons learned in how discovery with models can be conducted validly and effectively. We illustrate these issues through discussion of a case study where discovery with models was used to investigate a form of disengaged behavior, carelessness, in the context of middle school computer-based science inquiry. This behavior has been acknowledged as a problem in education as early as the 1920s, and with the increasing use of high-stakes testing, the cost of student carelessness can be even higher. For instance, within computer-based learning environments, careless errors can result in reduced educational effectiveness, with students continuing to receive material they have already mastered. Despite the importance of this problem, it has received minimal research attention, in part due to difficulties in operationalizing carelessness as a construct. Building from theory on carelessness and a Bayesian framework for knowledge modeling, we use machine-learned detectors to predict carelessness within authentic use of a computer-based learning environment. We then use a discovery with models approach to link these validated carelessness measures to survey data, to study the correlations between the prevalence of carelessness and student goal orientation. Carelessness here refers to incorrect answers given by a student on material that the student should be able to answer correctly (Rodriguez-Fornells & Maydeu-Olivares, 2000).

    The application of discovery with models involves two main phases. First, a model of a construct is developed using machine learning or knowledge engineering techniques, and is then validated, as discussed below. Second, this validated model is applied to data and used as a component in another analysis: for example, for identifying outliers through model predictions; examining which variables best predict the modeled construct; finding relationships between the construct and other variables using correlations, predictions, association rules, causal relationships, or other methods; or studying the contexts where the construct occurs, including its prevalence across domains, systems, or populations.

    One essential question to pose prior to a discovery with models analysis is whether the model adopted is valid, both overall and for the specific situation in which it is being used. Ideally, a model should be validated using an approach such as cross-validation, where the model is repeatedly trained on one portion of the data and tested on a different portion, with model predictions compared to appropriate external measures, for example assessments made by humans with acceptably high inter-rater reliability, such as field observations of student behavior for gaming the system. Even after validating in this fashion, validity should be re-considered if the model is used for a substantially different population or context than was used when developing the model. An alternative approach is to use a simpler knowledge-engineered definition, rationally deriving a function/rule that is then applied to the data. In this case, the model can be inferred to have face validity. However, knowledge-engineered models often produce different results than machine learning-based models, for example in the case of gaming the system: research studying whether student or content is a better predictor of gaming the system identified different results depending on which model was applied (cf. Baker, 2007a).
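
    The two-phase workflow described above lends itself to a compact sketch: a detector is first validated out of sample against human labels, then applied to new data so that its output can be correlated with an external measure such as a goal-orientation survey. The data, feature set, and model below are hypothetical placeholders, not the models used in the case study.

        # Hypothetical sketch of the two-phase "discovery with models" workflow.
        # Phase 1: cross-validate a carelessness detector against human labels.
        # Phase 2: apply the validated detector and correlate its output with
        # an external measure (e.g., a goal-orientation survey score).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import cohen_kappa_score
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        X_labeled = rng.normal(size=(200, 5))        # interaction-log features (hypothetical)
        y_labeled = rng.integers(0, 2, size=200)     # human-coded labels (hypothetical)

        # Phase 1: estimate out-of-sample agreement with the human labels.
        detector = LogisticRegression(max_iter=1000)
        cv_pred = cross_val_predict(detector, X_labeled, y_labeled, cv=10)
        print("cross-validated kappa:", cohen_kappa_score(y_labeled, cv_pred))

        # Phase 2: apply the detector to new students and relate its predictions
        # to a survey-based construct.
        detector.fit(X_labeled, y_labeled)
        X_new = rng.normal(size=(150, 5))            # new students' features (hypothetical)
        carelessness = detector.predict_proba(X_new)[:, 1]
        goal_orientation = rng.normal(size=150)      # survey scores (hypothetical)
        r, p = pearsonr(carelessness, goal_orientation)
        print(f"r = {r:.2f}, p = {p:.3f}")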

    Characterizing Productive Perseverance Using Sensor-Free Detectors of Student Knowledge, Behavior, and Affect

    Failure is a necessary step in the process of learning. For this reason, a great deal of research has been dedicated to the study of student perseverance in the presence of failure, leading to several commonly-cited theories and frameworks to characterize productive and unproductive representations of the construct of persistence. While researchers agree that it is important for students to persist when struggling to learn new material, there can be both positive and negative aspects of persistence. What is it, then, that separates productive from unproductive persistence? The purpose of this work is to address this question through the development, extension, and study of data-driven models of student affect, behavior, and knowledge. The increased adoption of computer-based learning platforms in real classrooms has led to unique opportunities to study student learning at both fine levels of granularity and longitudinally at scale. Prior work has leveraged machine learning methods, existing learning theory, and previous education research to explore various aspects of student learning, including the development of sensor-free detectors that utilize only the student interaction data collected through such learning platforms. Building on this considerable body of prior research, this work employs state-of-the-art machine learning methods in conjunction with the large-scale, granular data collected by computer-based learning platforms in pursuit of three goals. First, this work develops student models that study learning by drawing on advances in student modeling and deep learning methodologies. Second, this dissertation explores the development of tools that incorporate such models to support teachers in taking action in real classrooms to promote productive approaches to learning. Finally, this work aims to complete the loop by utilizing these detector models to better understand the underlying constructs being measured through their application and their connection to productive perseverance and commonly-observed learning outcomes.
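
    To make the idea of a sensor-free detector concrete, the sketch below derives per-student features purely from interaction-log events and fits a classifier against externally observed labels. The column names, aggregation, and model choice are assumptions for illustration, not the dissertation's actual detectors; a real detector would also be validated with student-level cross-validation against held-out observations.

        # Hypothetical sketch of a sensor-free detector: features come only from
        # interaction logs (no cameras or physiological sensors).
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        log = pd.DataFrame({
            "student_id":   [1, 1, 1, 2, 2, 3, 3, 3],
            "correct":      [1, 0, 1, 0, 0, 1, 1, 0],
            "hint_used":    [0, 1, 0, 1, 1, 0, 0, 1],
            "response_sec": [12.0, 3.1, 15.4, 2.2, 2.8, 20.5, 18.0, 4.0],
        })

        # Aggregate raw log events into per-student features.
        features = log.groupby("student_id").agg(
            pct_correct=("correct", "mean"),
            hint_rate=("hint_used", "mean"),
            mean_response=("response_sec", "mean"),
        )

        # Hypothetical labels, e.g. field observations of affect or behavior.
        labels = pd.Series([1, 0, 1], index=features.index)

        detector = RandomForestClassifier(n_estimators=100, random_state=0)
        detector.fit(features, labels)
        print(detector.predict_proba(features)[:, 1])  # detector confidence per student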

    Learning from errors in the adaptive mathematics tutoring system

    Errors are considered to play a crucial role in facilitating self-reflection and knowledge acquisition. Nevertheless, how students benefit from errors in learning is still open to debate; that is, it remains unclear whether help or practice is superior for learning from errors. The goal of this dissertation is to systematically explore how students use help and practice to learn from errors in ALEKS (Assessment and LEarning in Knowledge Spaces), an adaptive math learning system. Based on learning phase theory as the theoretical framework, this dissertation defined strategies for learning from errors of three types: the help strategy (requesting worked examples in the next two steps after an error); practice strategies (solving problems in the next two steps after an error); and mixed strategies (requesting a worked example and solving a problem in the next two steps after an error). Practice strategies were decomposed into four sub-categories: giving two wrong answers after an error; giving a wrong answer and then a correct answer; giving a correct answer and then a wrong answer; and giving two correct answers. Mixed strategies were composed of four sub-categories: requesting a worked example and then giving a wrong answer; requesting a worked example and then giving a correct answer; giving a wrong answer and then requesting a worked example; and giving a correct answer and then requesting a worked example.

    In addition, the dissertation considered the learning process as three learning phases: the low-skill phase, the medium-skill phase, and the high-skill phase. Specifically, this dissertation examined the likelihoods of strategies occurring after errors, strategy shifts, and the changes of strategies across learning phases. Additionally, it investigated the relationships of prior knowledge, error types, and topic difficulty with strategies for learning from errors. Furthermore, it examined the relationships of strategies for learning from errors with immediate and delayed learning outcomes, as well as whether these relationships differed across levels of prior knowledge and topic difficulty.

    The analysis was applied to two datasets: 6th graders (N = 165) and college students (N = 179). The results of a one-way ANOVA suggested that students were most likely to utilize mixed strategies after making errors. Comparisons of strategy likelihoods across learning phases indicated that students were inclined to use the strategy of requesting a worked example and then solving a problem after an error in the low-skill phase, the strategy of solving a problem and then requesting a worked example in the medium-skill phase, and practice strategies in the high-skill phase. Additionally, the help strategy was used frequently in the medium-skill and high-skill phases, and practice strategies were used frequently in the low-skill and medium-skill phases. For strategy shifts, the help strategy tended to transition to the help strategy, mixed strategies to mixed strategies, and practice strategies to practice strategies. In the low-skill phase, strategies were more likely to transition to the strategy of requesting a worked example and then solving a problem after the next error. In the medium-skill phase, strategies were more likely to transition to the strategy of solving a problem and then requesting a worked example after the next error. In the high-skill phase, strategies were more likely to transition to practice strategies. Practice strategies also tended to transition to mixed strategies or the help strategy in the medium-skill and high-skill phases.

    Only 6th graders with high prior knowledge were more likely than those with low prior knowledge to use the strategy of giving a wrong answer and then requesting a worked example after an error. For students with low prior knowledge, strategies were inclined to transition to practice strategies, whereas students with high prior knowledge were apt to transition to the strategy of requesting a worked example and then solving a problem after an error. Furthermore, students with high prior knowledge presented a more disordered pattern of strategy use. When making careless errors, students tended to use practice strategies and presented a more disordered pattern of strategy use. With respect to topic difficulty, 6th graders were inclined to use practice strategies or the strategy of requesting a worked example and then giving a wrong answer after an error on difficult topics, whereas they tended to use the strategy of giving a wrong answer and then requesting a worked example after an error on easy topics. On easy topics, students presented a more disordered pattern of strategy use. As for strategy shifts, students tended to transition to the strategy of requesting a worked example and then solving a problem after an error on difficult topics and preferred to transition to practice strategies on easy topics.

    The correctness following the strategy of giving two correct answers after an error was the highest among all strategies, and this finding was stable across levels of prior knowledge and topic difficulty. The help strategy, the strategy of requesting a worked example and then solving a problem after an error, the strategy of giving two wrong answers after an error, and the strategy of giving a wrong answer and then a correct answer after an error benefited college students' delayed performance. However, the strategy of giving a wrong answer and then requesting a worked example after an error, the strategy of giving a correct answer and then requesting a worked example, and the strategy of giving two correct answers after an error hindered delayed performance. This finding remained stable across levels of college students' prior knowledge as well. In the low-skill phase, college students benefited from shifts to practice strategies, while shifts to strategies that involved requesting worked examples (i.e., the help strategy and mixed strategies) did not facilitate their delayed performance. In the medium-skill and high-skill phases, shifts to strategies that involved requesting worked examples facilitated delayed performance. In addition, college students with both low and high prior knowledge benefited from strategy shifts to practice strategies.

    The study revealed students' strategies for learning from errors in an adaptive learning system and builds a foundation for finer-grained investigation of these strategies in the future. Moreover, the findings will hopefully provide insights for understanding students' learning strategies and improving the effectiveness of intelligent tutoring systems.
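
    The strategy coding defined above (help, practice, and mixed strategies based on the two steps following an error) can be expressed as a small classification rule. The action codes below are assumptions for illustration: "W" marks a request for a worked example, "C" a correct problem attempt, and "I" an incorrect attempt.

        # Illustrative sketch of the post-error strategy coding described above.
        # "W" = requested a worked example, "C" = correct attempt, "I" = incorrect attempt.
        def classify_post_error_strategy(step1, step2):
            """Classify the two steps a student takes after an error."""
            practice_steps = {"C", "I"}
            steps = (step1, step2)

            if all(s == "W" for s in steps):
                return "help strategy"                        # two worked examples
            if all(s in practice_steps for s in steps):
                return f"practice strategy ({step1}{step2})"  # sub-category by correctness order
            return f"mixed strategy ({step1}{step2})"         # one example, one attempt, either order

        print(classify_post_error_strategy("W", "C"))  # mixed: example, then correct attempt
        print(classify_post_error_strategy("I", "C"))  # practice: wrong, then correct attempt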

    Modes and Mechanisms of Game-like Interventions in Intelligent Tutoring Systems

    While games can be an innovative and highly promising approach to education, creating effective educational games is a challenge. It requires effectively integrating educational content with game attributes and aligning cognitive and affective outcomes, which can be in conflict with each other. Intelligent Tutoring Systems (ITS), on the other hand, have proven to be effective learning environments that are conducive to strong learning outcomes. Direct comparisons between tutoring systems and educational games have found digital tutors to be more effective at producing learning gains. However, tutoring systems have had difficulties in maintaining students' interest and engagement for long periods of time, which limits their ability to generate learning in the long term. Given the complementary benefits of games and digital tutors, there has been considerable effort to combine these two fields. This dissertation undertakes and analyzes three different ways of integrating Intelligent Tutoring Systems and digital games. We created three game-like systems with cognition, metacognition, and affect as their primary targets and modes of intervention. Monkey's Revenge is a game-like math tutor that offers cognitive tutoring in a game-like environment. The Learning Dashboard is a game-like metacognitive support tool for students using Mathspring, an ITS. Mosaic comprises a series of mini math games that pop up within Mathspring to enhance students' affect. The methodology consisted of multiple randomized controlled studies run to evaluate each of these three interventions, attempting to understand their effect on students' performance, affect, and perception of the intervention and the system that embeds it. Further, we used causal modeling to explore mechanisms of action: the inter-relationships between students' incoming characteristics and predispositions, their mechanisms of interaction with the tutor, and the ultimate learning outcomes and perceptions of the learning experience.