    Knowledge Tracing Challenge: Optimal Activity Sequencing for Students

    Knowledge tracing (KT) is a method used in education to assess and track the acquisition of knowledge by individual learners. It involves using a variety of techniques, such as quizzes, tests, and other forms of assessment, to determine what a learner knows and does not know about a particular subject. The goal of knowledge tracing is to identify gaps in understanding and provide targeted instruction to help learners improve their understanding and retention of material. This is particularly useful in situations where learners work at their own pace, such as in online learning environments. By providing regular feedback and adjusting instruction based on individual needs, knowledge tracing can help learners make more efficient progress and achieve better outcomes. Effectively solving the KT problem would unlock the potential of computer-aided education applications such as intelligent tutoring systems, curriculum learning, and learning-material recommendation. In this paper, we present the results of implementing two knowledge tracing algorithms on a newly released dataset as part of the AAAI 2023 Global Knowledge Tracing Challenge.
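    To make the KT task described above concrete, the sketch below shows a generic, DKT-style sequence model: given a history of (question, correctness) interactions, it predicts the probability of answering each question correctly. This is only an illustrative example of the task; it is not one of the two challenge submissions referenced in the abstract, and all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SimpleKTModel(nn.Module):
    """Illustrative DKT-style knowledge tracing model (not the paper's method)."""

    def __init__(self, num_questions, hidden_dim=64):
        super().__init__()
        self.num_questions = num_questions
        # Each interaction is a (question, correct/incorrect) pair,
        # encoded as a single index in [0, 2 * num_questions).
        self.interaction_emb = nn.Embedding(2 * num_questions, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_questions)

    def forward(self, question_ids, correctness):
        # question_ids, correctness: LongTensors of shape (batch, seq_len)
        x = self.interaction_emb(question_ids + correctness * self.num_questions)
        hidden, _ = self.rnn(x)
        # Per-step probability of answering each question correctly next.
        return torch.sigmoid(self.out(hidden))
```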

    Forgetting-aware Linear Bias for Attentive Knowledge Tracing

    Knowledge Tracing (KT) aims to track proficiency based on a question-solving history, allowing us to offer a streamlined curriculum. Recent studies actively utilize attention-based mechanisms to capture the correlation between questions and combine it with the learner's characteristics to predict responses. However, our empirical study shows that existing attention-based KT models neglect the learner's forgetting behavior, especially as the interaction history becomes longer. This problem arises from a bias that overprioritizes the correlation of questions while inadvertently ignoring the impact of forgetting behavior. This paper proposes a simple-yet-effective solution, namely Forgetting-aware Linear Bias (FoLiBi), to reflect forgetting behavior as a linear bias. Despite its simplicity, FoLiBi can readily be equipped onto existing attentive KT models by effectively decomposing question correlations from forgetting behavior. FoLiBi plugged into several KT models yields a consistent improvement of up to 2.58% in AUC over state-of-the-art KT models on four benchmark datasets.
    Comment: In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23), 5 pages, 3 figures, 2 tables
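    The general idea of adding a linear, forgetting-like bias to attention can be sketched as follows: attention scores between interactions that are far apart in the history are penalized linearly with their distance, similar in spirit to ALiBi-style position biases. The `slope` hyperparameter and the exact form of the penalty are assumptions for illustration, not FoLiBi's published formulation.

```python
import torch

def attention_with_forgetting_bias(q, k, v, slope=0.1):
    """Scaled dot-product attention with a linear, distance-based bias (sketch)."""
    seq_len, d_k = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5          # question correlation

    # Linear bias: subtract slope * (distance between positions i and j),
    # so older interactions contribute less, mimicking forgetting.
    pos = torch.arange(seq_len)
    distance = (pos.unsqueeze(1) - pos.unsqueeze(0)).clamp(min=0)
    scores = scores - slope * distance

    # Causal mask: a response only attends to earlier interactions.
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    scores = scores.masked_fill(~mask, float("-inf"))

    return torch.softmax(scores, dim=-1) @ v
```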

    Knowledge Relation Rank Enhanced Heterogeneous Learning Interaction Modeling for Neural Graph Forgetting Knowledge Tracing

    Recently, knowledge tracing models have been applied in educational data mining, such as the self-attention knowledge tracing model (SAKT), which models the relationship between exercises and Knowledge Concepts (KCs). However, relation modeling in traditional knowledge tracing models only considers the static question-knowledge and knowledge-knowledge relationships and treats these relationships with equal importance. This kind of relation modeling struggles to avoid the influence of subjective labeling and considers the exercise-KC and KC-KC relationships separately. In this work, a novel knowledge tracing model, named Knowledge Relation Rank Enhanced Heterogeneous Learning Interaction Modeling for Neural Graph Forgetting Knowledge Tracing (NGFKT), is proposed to reduce the impact of subjective labeling by calibrating the skill relation matrix and the Q-matrix, and to apply a Graph Convolutional Network (GCN) to model the heterogeneous interactions between students, exercises, and skills. Specifically, the skill relation matrix and Q-matrix are generated by the Knowledge Relation Importance Rank Calibration method (KRIRC). Then the calibrated skill relation matrix, Q-matrix, and heterogeneous interactions are treated as the input of the GCN to generate the exercise and skill embeddings. Next, the exercise embedding, skill embedding, item difficulty, and contingency table are incorporated to generate an exercise relation matrix as the input of the Position-Relation-Forgetting attention mechanism. Finally, the Position-Relation-Forgetting attention mechanism is applied to make the predictions. Experiments are conducted on two public educational datasets, and the results indicate that the NGFKT model outperforms all baseline models in terms of AUC, ACC, and Performance Stability (PS).
    Comment: 11 pages, 3 figures
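    The GCN step described in the abstract, propagating exercise and skill features over a relation graph, can be sketched as a single graph-convolution layer over a normalized adjacency matrix. The class name, shapes, and normalization choice below are illustrative assumptions, not NGFKT's exact implementation.

```python
import torch
import torch.nn as nn

class RelationGCNLayer(nn.Module):
    """One graph-convolution layer over an exercise-skill relation graph (sketch)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, features, adjacency):
        # features: (num_nodes, in_dim) embeddings of exercises and skills.
        # adjacency: (num_nodes, num_nodes) built from the calibrated
        # skill relation matrix, Q-matrix, and interaction edges.
        deg = adjacency.sum(dim=1).clamp(min=1.0)
        norm = deg.rsqrt()
        # Symmetric normalization: D^{-1/2} A D^{-1/2}
        adj_norm = norm.unsqueeze(1) * adjacency * norm.unsqueeze(0)
        return torch.relu(self.linear(adj_norm @ features))
```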

    Difficulty-Focused Contrastive Learning for Knowledge Tracing with a Large Language Model-Based Difficulty Prediction

    This paper presents novel techniques for enhancing the performance of knowledge tracing (KT) models by focusing on the crucial factor of question and concept difficulty level. Despite the acknowledged significance of difficulty, previous KT research has yet to exploit its potential for model optimization and has struggled to predict difficulty from unseen data. To address these problems, we propose a difficulty-centered contrastive learning method for KT models and a Large Language Model (LLM)-based framework for difficulty prediction. These methods seek to improve the performance of KT models and provide accurate difficulty estimates for unseen data. Our ablation study confirms the efficacy of these techniques through enhanced KT model performance. Nonetheless, the complex relationship between language and difficulty merits further investigation.
    Comment: 10 pages, 4 figures, 2 tables
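    A difficulty-centered contrastive objective of the kind the abstract mentions could take the following InfoNCE-style form, where the positive pair is assumed to share a similar difficulty level with the anchor and the negative pair a dissimilar one. The pairing strategy, `temperature`, and two-way setup are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def difficulty_contrastive_loss(anchor, positive, negative, temperature=0.1):
    """InfoNCE-style contrastive loss over interaction embeddings (sketch).

    anchor, positive, negative: (batch, dim) embeddings, where `positive`
    is drawn from interactions with similar difficulty to `anchor` and
    `negative` from interactions with dissimilar difficulty.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)

    pos_sim = (anchor * positive).sum(-1) / temperature
    neg_sim = (anchor * negative).sum(-1) / temperature
    logits = torch.stack([pos_sim, neg_sim], dim=-1)

    # The positive pair should receive the highest similarity (class 0).
    labels = torch.zeros(anchor.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```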