
    FAST: Feature-Aware Student Knowledge Tracing

    Various kinds of e-learning systems, such as Massively Open Online Courses and intelligent tutoring systems, are now producing large amounts of feature-rich data from students solving items at different levels of proficiency over time. To analyze such data, researchers often use Knowledge Tracing [4], a 20-year-old method that has become the de facto standard for inferring a student’s knowledge from performance data. Knowledge Tracing uses Hidden Markov Models (HMMs) to estimate the latent cognitive state (the student’s knowledge) from the student’s performance answering items. Since the original Knowledge Tracing formulation does not allow modeling general features, a considerable amount of research has focused on ad hoc modifications to the Knowledge Tracing algorithm that enable modeling a specific feature of interest. This has led to a plethora of Knowledge Tracing reformulations for very specific purposes. For example, Pardos et al. [5] proposed a new model to measure the effect of students’ individual characteristics, Beck et al. [2] modified Knowledge Tracing to assess the effect of help in a tutoring system, and Xu and Mostow [7] proposed a new model that measures the effect of subskills. These ad hoc models are successful for their own specific purpose, but they do not generalize to arbitrary features. Other student modeling methods that allow more flexible features have been proposed. For example, Performance Factor Analysis [6] uses logistic regression to model arbitrary features, but it does not infer whether the student has learned a skill. We present FAST (Feature-Aware Student knowledge Tracing), a novel method that integrates general features into Knowledge Tracing. FAST combines Performance Factor Analysis (logistic regression) with Knowledge Tracing by building on previous work on unsupervised learning with features [3]. FAST is therefore able to infer a student’s knowledge, as Knowledge Tracing does, while also allowing arbitrary features, as Performance Factor Analysis does. FAST brings general features into Knowledge Tracing by replacing the generative emission probabilities (often called the guess and slip probabilities) with logistic regression [3], so that these probabilities can change over time as the student’s knowledge is inferred. FAST uses arbitrary features to train the logistic regression model and the HMM jointly. Training the parameters simultaneously enables FAST to learn from the features; this differs from using regression to analyze the slip and guess probabilities after the fact [1]. To validate our approach, we use data collected from real students interacting with a tutor. We present experimental results comparing FAST with Knowledge Tracing and Performance Factor Analysis. We conduct experiments with our model using features such as item difficulty and the student’s prior successes and failures on the skill (or multiple skills) associated with the item, following the formulation of Performance Factor Analysis.
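
    To make the mechanics concrete, the sketch below is a hypothetical toy, not the authors' released code: it shows how the usual Knowledge Tracing forward update can be written with feature-dependent guess and slip probabilities produced by logistic regression, which is the core idea the abstract describes. The parameter names (p_init, p_learn, the weight vectors) and the toy features are illustrative assumptions, and the sketch only covers inference; FAST trains the logistic regression and the HMM jointly, which is not shown here.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fast_forward(observations, features, p_init, p_learn, w_known, w_unknown):
            """Illustrative forward pass of a Knowledge-Tracing-style HMM whose
            emission (guess/slip) probabilities come from logistic regression on
            per-observation features. Returns P(skill known) before each item."""
            p_known = p_init                      # prior probability the skill is known
            estimates = []
            for correct, x in zip(observations, features):
                estimates.append(p_known)
                # Emission probabilities depend on the features via logistic regression:
                p_correct_known = sigmoid(x @ w_known)      # 1 - slip(x)
                p_correct_unknown = sigmoid(x @ w_unknown)  # guess(x)
                # Bayesian update of the latent state given the observed response
                if correct:
                    num = p_known * p_correct_known
                    den = num + (1 - p_known) * p_correct_unknown
                else:
                    num = p_known * (1 - p_correct_known)
                    den = num + (1 - p_known) * (1 - p_correct_unknown)
                posterior = num / den
                # Transition: the student may learn the skill after practicing this item
                p_known = posterior + (1 - posterior) * p_learn
            return estimates

        # Toy usage: two items described by a bias term and an item-difficulty feature
        obs = [1, 0]
        feats = np.array([[1.0, -0.3], [1.0, 0.8]])
        print(fast_forward(obs, feats, p_init=0.3, p_learn=0.1,
                           w_known=np.array([1.5, -1.0]),
                           w_unknown=np.array([-1.0, -1.0])))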

    General Features in Knowledge Tracing to Model Multiple Subskills, Temporal Item Response Theory, and Expert Knowledge

    Knowledge Tracing is the de facto standard for inferring student knowledge from performance data. Unfortunately, it does not allow modeling the feature-rich data that modern digital learning environments now make it possible to collect. Because of this, many ad hoc Knowledge Tracing variants have been proposed, each to model a specific feature of interest. For example, variants have studied the effect of students’ individual characteristics, the effect of help in a tutor, and subskills. These ad hoc models are successful for their own specific purpose, but each is built to model only a single feature. We present FAST (Feature Aware Student knowledge Tracing), an efficient, novel method that integrates general features into Knowledge Tracing. We demonstrate FAST’s flexibility with three examples of feature sets that are relevant to a wide audience. We use features in FAST to model (i) multiple subskill tracing, (ii) a temporal Item Response Model implementation, and (iii) expert knowledge. We present empirical results using data collected from an Intelligent Tutoring System. We report that using features can improve classification performance by up to 25% on the task of predicting student performance. Moreover, for fitting and inference, FAST can be 300 times faster than models created in BNT-SM, a toolkit that facilitates the creation of ad hoc Knowledge Tracing variants.
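
    As a rough illustration of the three kinds of feature sets the abstract lists, the following sketch builds a feature dictionary of the style a logistic-regression emission model could consume. The encodings, column names, and the idea of combining them in one function are assumptions made for the example, not the paper's exact feature design.

        # Hypothetical construction of the three feature families mentioned above.
        def make_features(item, student, subskills, opportunity_count, expert_difficulty):
            """Return a feature dictionary for one student-item observation."""
            features = {"bias": 1.0}
            # (i) multiple-subskill tracing: one indicator per subskill tagged on the item
            for s in subskills:
                features[f"subskill:{s}"] = 1.0
            # (ii) temporal Item Response Theory: item and student indicators plus a
            # practice-opportunity count that lets the estimate evolve over time
            features[f"item:{item}"] = 1.0
            features[f"student:{student}"] = 1.0
            features["opportunity"] = float(opportunity_count)
            # (iii) expert knowledge: a hand-assigned difficulty rating from domain experts
            features["expert_difficulty"] = float(expert_difficulty)
            return features

        print(make_features("item_42", "s_07", ["fractions", "common-denominator"], 3, 0.6))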

    A Study of the Main Methods for Estimating Students' Latent Knowledge in E-Learning Systems

    Increasing students' knowledge is the primary goal of education, so measuring the knowledge students acquire is of vital importance to the educational process. Latent Knowledge Estimation is one of the predictive tasks of Educational Data Mining; it determines the extent to which a student possesses a given piece of knowledge or skill at a given moment. This work presents a study of the main methods used to estimate latent knowledge, among them Bayesian Knowledge Tracing, Performance Factor Analysis, and Item Response Theory.
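
    For reference, the three surveyed methods make their predictions in the following standard forms (notation is the commonly used one: P(L_t) is the probability the skill is known at step t, P(G), P(S), and P(T) are the guess, slip, and learning probabilities, s_j and f_j are prior success and failure counts for skill j, and theta_i and b_j are student ability and item difficulty):

        % Bayesian Knowledge Tracing: prediction and learning transition
        P(\mathrm{correct}_t) = P(L_t)\,\bigl(1 - P(S)\bigr) + \bigl(1 - P(L_t)\bigr)\,P(G)
        P(L_{t+1}) = P(L_t \mid \mathrm{obs}_t) + \bigl(1 - P(L_t \mid \mathrm{obs}_t)\bigr)\,P(T)

        % Performance Factor Analysis
        \operatorname{logit} P(\mathrm{correct}) = \sum_{j \in \mathrm{skills}} \bigl(\beta_j + \gamma_j\, s_j + \rho_j\, f_j\bigr)

        % Item Response Theory (one-parameter / Rasch model)
        P(\mathrm{correct}) = \frac{1}{1 + e^{-(\theta_i - b_j)}}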

    Evaluation of topic-based adaptation and student modeling in QuizGuide

    This paper presents an in-depth analysis of a nonconventional topic-based personalization approach for adaptive educational systems (AES) that we have explored for a number of years in the context of university programming courses. In this approach, both student modeling and adaptation are based on coarse-grained knowledge units that we call topics. Our motivation for topic-based personalization was to make AES more transparent for both teachers and students by using typical topic-based course structures as the foundation for designing every aspect of an AES, from the domain model to the end-user interface. We illustrate the details of the topic-based personalization technology with the help of the Web-based educational service QuizGuide, the first system to implement it. QuizGuide applies topic-based personalization to guide students to the right learning material in the context of an undergraduate C programming course. While it has a number of architectural and practical advantages, the suggested coarse-grained personalization approach deviates from common knowledge-modeling practice in AES. We therefore believe that several aspects of QuizGuide required a detailed evaluation, from modeling accuracy to the effectiveness of adaptation. The paper discusses how this new student modeling approach can be evaluated and presents our attempts to evaluate it from multiple perspectives. The evaluation of QuizGuide across several consecutive semesters demonstrates that, although topics do not always support precise user modeling, they can provide a basis for successful personalization in AES.
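
    The sketch below is a hypothetical illustration of coarse-grained, topic-level student modeling of the kind the abstract describes: quiz results are aggregated per topic and mapped to a simple navigation hint. The thresholds, attribute names, and annotation labels are assumptions made for the example, not QuizGuide's actual implementation.

        from collections import defaultdict
        from dataclasses import dataclass, field

        @dataclass
        class TopicModel:
            attempts: dict = field(default_factory=lambda: defaultdict(int))
            correct: dict = field(default_factory=lambda: defaultdict(int))

            def record(self, topic: str, is_correct: bool) -> None:
                """Update the coarse topic-level model with one quiz question result."""
                self.attempts[topic] += 1
                self.correct[topic] += int(is_correct)

            def knowledge_level(self, topic: str) -> float:
                """Fraction of correct answers for the topic (0.0 if unseen)."""
                n = self.attempts[topic]
                return self.correct[topic] / n if n else 0.0

            def annotation(self, topic: str, is_current_topic: bool) -> str:
                """Map topic state to a navigation hint shown next to the topic link."""
                level = self.knowledge_level(topic)
                if is_current_topic and level < 0.7:
                    return "recommended"         # relevant now and not yet mastered
                return "mastered" if level >= 0.7 else "optional"

        model = TopicModel()
        model.record("pointers", True)
        model.record("pointers", False)
        print(model.knowledge_level("pointers"), model.annotation("pointers", True))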

    EDM 2011: 4th International Conference on Educational Data Mining, Eindhoven, July 6-8, 2011: Proceedings


    Detection and Improvement of the Learner's Cognitive State

    This thesis aims to detect and improve the learner's cognitive state, defined here as the ability to acquire new knowledge and store it in memory. We focus primarily on improving learners' reasoning in three environments: a purely cognitive environment, Logique; a serious game, LewiSpace; and an intelligent serious game, Inertia. The cognitive state is detected mainly through physiological measures (in particular electroencephalograms) in order to track learners' interactions and the evolution of their mental states. Improving learners' performance and reasoning is a key to successful learning. In the first part, we present the implementation of the cognitive environment Logique and report statistics from an experimental study in which we collected data on engagement, cognitive load, and distraction; these three measures proved effective for classifying and predicting learners' performance. In the second part, we describe the LewiSpace game for learning Lewis diagrams. We conducted an experimental study collecting electroencephalogram, emotion, and eye-tracking data, and we showed that, combined with machine learning algorithms, these physiological measures make it possible to predict a learner's need for help in this environment. In the third part, we conclude the thesis by presenting help strategies integrated into the virtual game Inertia (a physics game), which adapts according to two measures extracted from electroencephalograms (engagement and frustration). We showed that this game increases the mission success rate and overall performance, and consequently improves the learner's cognitive state.
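
    As a rough illustration of the kind of EEG-driven adaptation described for Inertia, the toy sketch below triggers a help strategy when a smoothed engagement index drops or a frustration index rises past a threshold. The smoothing scheme, threshold values, and intervention names are assumptions for the example, not the thesis's actual system.

        def smooth(values, alpha=0.2):
            """Exponentially smooth a noisy physiological index stream."""
            out, current = [], values[0]
            for v in values:
                current = alpha * v + (1 - alpha) * current
                out.append(current)
            return out

        def choose_intervention(engagement, frustration,
                                low_engagement=0.4, high_frustration=0.7):
            """Pick a help strategy from smoothed engagement/frustration indices in [0, 1]."""
            if frustration > high_frustration:
                return "offer_hint"           # learner appears stuck and frustrated
            if engagement < low_engagement:
                return "increase_challenge"   # learner appears disengaged
            return "no_intervention"

        eng = smooth([0.8, 0.7, 0.5, 0.35, 0.3])
        fru = smooth([0.2, 0.3, 0.5, 0.75, 0.8])
        print([choose_intervention(e, f) for e, f in zip(eng, fru)])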