
    A Review on Prediction of Academic Performance of Students at-Risk Using Data Mining Techniques

    Educational data mining is the process of converting raw data collected from educational databases into useful information. It can help in designing and answering research questions such as predicting students' academic performance, identifying the factors that affect students' performance, and helping teachers understand the problems students face with course content and subject complexity, so that timely action can be taken to control the dropout rate. It also includes improving the teaching-learning process so that interventions can be made at the right time to improve student performance. This paper reviews the research done in the field of educational data mining on the prediction of students' performance. The factors that influence students' performance include the type of classroom they attend (traditional or online), socio-economic status, the educational background of the family, attitude toward studies, and the challenges students face during the course. These factors lead to the categorization of students into three groups: "Low-Risk", who have a high probability of succeeding; "Medium-Risk", who may succeed in their examination; and "High-Risk", who have a high probability of failing or dropping out. The paper elaborates different ways to improve the teaching-learning process by providing students with personal assistance, notes, class assignments, and special class tests. The most effective techniques used in educational data mining, such as classification, regression, clustering, and prediction, are also reviewed.
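
    As a concrete illustration of the classification techniques such reviews cover, the sketch below trains a classifier to sort students into the three risk groups. It is a minimal sketch using scikit-learn; the features, labels, and random data are hypothetical stand-ins, not features from any of the reviewed studies.

```python
# Minimal sketch: classifying students into risk categories from
# course features. All feature names and data are hypothetical;
# the reviewed papers use a variety of classifiers and features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical features: attendance rate, prior GPA, assignment score,
# online-vs-traditional classroom flag.
X = rng.random((300, 4))
# Hypothetical labels: 0 = Low-Risk, 1 = Medium-Risk, 2 = High-Risk.
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(
    y_test, clf.predict(X_test),
    target_names=["Low-Risk", "Medium-Risk", "High-Risk"]))
```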

    When Easy Becomes Boring and Difficult Becomes Frustrating: Disentangling the Effects of Item Difficulty Level and Person Proficiency on Learning and Motivation.

    Research on electronic learning environments has evolved towards creating adaptive learning environments. In this study, the focus is on adaptive curriculum sequencing, in particular the efficacy of an adaptive curriculum sequencing algorithm based on matching the item difficulty level to the learner's proficiency level. We therefore explored the effect of the relative difficulty level on learning outcome and motivation. Results indicate that, for learning environments consisting of questions focusing on just one dimension and with knowledge of the correct response, it does not matter for either learning or motivation whether we present easy, moderate, or difficult items, or a random mix of difficulty levels.
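
    For readers unfamiliar with difficulty-to-proficiency matching, the sketch below shows one simple way such an adaptive sequencer can work. It assumes a Rasch (1PL) response model and a gradient-style proficiency update; the item pool, learning rate, and update rule are illustrative assumptions, not the algorithm evaluated in the study.

```python
# Minimal sketch of difficulty-to-proficiency matching under a Rasch
# (1PL) model: pick the unseen item whose difficulty is closest to the
# current proficiency estimate, then nudge the estimate after each
# response. Item difficulties and the learning-rate constant are
# illustrative assumptions.
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch probability of a correct response for proficiency theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, items, seen):
    """Select the unseen item with difficulty nearest to theta."""
    return min((i for i in items if i not in seen),
               key=lambda i: abs(items[i] - theta))

items = {"q1": -1.5, "q2": -0.5, "q3": 0.0, "q4": 0.7, "q5": 1.4}
theta, seen, lr = 0.0, set(), 0.4

for _ in range(len(items)):
    item = next_item(theta, items, seen)
    seen.add(item)
    correct = p_correct(theta, items[item]) > 0.5  # stand-in for a real answer
    # Gradient-style update: move theta toward the observed outcome.
    theta += lr * ((1.0 if correct else 0.0) - p_correct(theta, items[item]))
    print(f"{item}: b={items[item]:+.1f}, correct={correct}, theta={theta:+.2f}")
```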

    Scaffolding vs. Hints in the Assistment System

    Abstract. Razzaq et al. (2005) reported that the Assistment system was causing students to learn at the computer, but we were not sure whether that was simply due to students getting practice or due to the "intelligent tutoring" that we created and force students to do when they get an item wrong. Our survey indicated that some students found being forced to do scaffolding frustrating at times. We were not sure whether all of the time we invested in these "fancy" scaffolding questions was worth it. We conducted a simple experiment to see whether students learned on a set of 4 items when they were given the scaffolds, compared with just being given hints that tried to TELL them the same information that the scaffolding questions tried to ASK of them. Our results show that students who were given the scaffolds performed better, although the results were not always statistically significant.
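
    The analysis such a two-condition experiment implies can be as simple as comparing pre-to-post gains between the scaffold and hint groups. The sketch below does this with an independent-samples t-test; the gain scores are fabricated placeholders, not data from the Assistment study.

```python
# Minimal sketch of a scaffold-vs-hint comparison: an independent-samples
# t-test on per-student pre-to-post gain scores. The numbers below are
# fabricated placeholders, not data from the Assistment experiment.
from scipy import stats

scaffold_gains = [0.30, 0.10, 0.25, 0.40, 0.15, 0.35, 0.20, 0.05]
hint_gains     = [0.10, 0.05, 0.20, 0.15, 0.00, 0.25, 0.10, 0.05]

t, p = stats.ttest_ind(scaffold_gains, hint_gains)
print(f"t = {t:.2f}, p = {p:.3f}")
# A p-value above 0.05 would match the paper's report of a scaffold
# advantage that is not always statistically significant.
```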

    Assistance for the Construction and Comparison of Knowledge Diagnostic Techniques

    Comparing and building knowledge diagnostics is a challenge in the field of Technology Enhanced Learning (TEL) systems. A knowledge diagnostic aims to infer the knowledge mastered or not mastered by a student in a given learning domain (like mathematics for high school) using student traces recorded by the TEL system. Knowledge diagnostics are widely used, but they strongly depend on the learning domain and are not well formalized. Thus, there exists no method or tool to build, compare, and evaluate different diagnostics applied to a given learning domain. Similarly, using a diagnostic in two different domains usually implies implementing it almost entirely from scratch in both. Yet being able to compare and reuse knowledge diagnostics can reduce the engineering cost, reinforce the evaluation, and ultimately help knowledge diagnostic designers choose a diagnostic. We propose a method, reified in a first platform, to assist knowledge diagnostic designers in building and comparing knowledge diagnostics, based on a new formalization of the diagnostic and on student traces.
    To help build diagnostics, we use a semi-automatic machine learning algorithm, guided by an ontology of the traces and the domain knowledge designed by the diagnostic designer. To help compare diagnostics, we use a set of comparison criteria (either statistical or specific to the field of TEL systems) applied to the results of each diagnostic on a given set of traces. The main contribution is that our method is generic over diagnostics, meaning that very different diagnostics can be built and compared, unlike previous work on this topic. We evaluated our work through three experiments. The first applied our method to three different domains and sets of traces (namely geometry, reading, and surgery) to build and compare five different knowledge diagnostics in cross-validation. The second designed and implemented a new comparison criterion specific to TEL systems: the impact of a knowledge diagnostic on a pedagogical decision, namely the choice of a type of help to give to a student. The last designed and added a new diagnostic to our platform, in collaboration with an expert in didactics.
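
    The comparison step described above can be pictured as running several diagnostics over the same traces and scoring them with a shared criterion in cross-validation. The sketch below does this with two toy diagnostics and accuracy as the criterion; the trace format and both diagnostics are illustrative assumptions, far simpler than those supported by the platform.

```python
# Minimal sketch of the comparison step: run two toy "diagnostics" that
# predict whether a student's next attempt on a skill is correct, and
# score them with a shared statistical criterion across cross-validation
# folds. Both diagnostics and the trace format are illustrative.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Hypothetical traces: (past success rate on the skill, attempts so far)
# and whether the next attempt was correct.
X = rng.random((200, 2))
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0.5).astype(int)

def majority_diagnostic(X_train, y_train, X_test):
    """Baseline: predict the majority outcome from the training traces."""
    return np.full(len(X_test), int(y_train.mean() > 0.5))

def threshold_diagnostic(X_train, y_train, X_test):
    """Mastery threshold: correct iff past success rate exceeds 0.5."""
    return (X_test[:, 0] > 0.5).astype(int)

for name, diag in [("majority", majority_diagnostic),
                   ("threshold", threshold_diagnostic)]:
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                     random_state=0).split(X):
        preds = diag(X[train_idx], y[train_idx], X[test_idx])
        scores.append(accuracy_score(y[test_idx], preds))
    print(f"{name}: mean accuracy = {np.mean(scores):.2f}")
```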