
    Log file analysis for disengagement detection in e-Learning environments


    ANALYZING AND MODELING STUDENTS' BEHAVIORAL DYNAMICS IN CONFIDENCE-BASED ASSESSMENT

    Confidence-based assessment is a two-dimensional assessment paradigm that considers the confidence or expectancy level a student has about an answer to ascertain his/her actual knowledge. Several researchers have discussed the usefulness of this model over the traditional one-dimensional assessment approach, which takes the number of correctly answered questions as the sole parameter to calculate a student's test scores. Additionally, some educational psychologists and theorists have found that confidence-based assessment has a positive impact on students' academic performance, knowledge retention, and the metacognitive abilities of self-regulation and engagement exhibited during a learning process. However, to the best of our knowledge, these findings have not been exploited by the educational data mining community, which aims to exploit students' (logged) data to investigate their performance and behavioral characteristics in order to enhance their performance outcomes and/or learning experiences. Engagement reflects a student's active participation in an ongoing task or process, which becomes even more important when students are interacting with a computer-based learning or assessment system. There is some evidence that students' online engagement (estimated through their behaviors while interacting with a learning/assessment environment) is also positively correlated with good performance scores. However, no data mining method to date has measured students' engagement behaviors during confidence-based assessment. This Ph.D. research work aimed to identify, analyze, model and predict students' dynamic behaviors triggered by their progression in a computer-based assessment system offering confidence-driven questions. The data were collected from two experimental studies conducted with undergraduate students who solved a number of problems during confidence-based assessment.
In this thesis, we first addressed the challenge of identifying different parameters representing students' problem-solving behaviors that are positively correlated with confidence-based assessment. Next, we developed a novel scheme to classify students' problem-solving activities into engaged or disengaged behaviors using the three previously identified parameters, namely students' response correctness, confidence level, and feedback-seeking/non-seeking behavior. Our next challenge was to exploit the students' interactions recorded at the micro-level, i.e. event by event, by the computer-based assessment tools to estimate their intended engagement behaviors during the assessment. We also observed that a traditional non-mixture, first-order Markov chain is inadequate to capture students' evolving behaviors revealed by their interactions with a computer-based learning/assessment system. We therefore investigated mixture Markov models to map students' trails of performed activities. However, the quality of the resultant Markov chains is critically dependent on the initialization of the algorithm, which is usually performed randomly. We proposed a new approach, called K-EM, for initializing the Expectation-Maximization algorithm for multivariate categorical data. Our method achieved better prediction accuracy and convergence rate than two pre-existing algorithms when applied to two real datasets. This doctoral research work contributes both to educational research (the theoretical aspect) and to the educational data mining area (the empirical aspect). The outcomes of this work pave the way to a framework for an adaptive confidence-based assessment system, contributing to one of the central components of Adaptive Learning, that is, personalized student models.
The adaptive system can exploit data generated in a confidence-based assessment system to model students' behavioral profiles and provide personalized feedback to improve students' confidence accuracy and knowledge by considering their behavioral dynamics.
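
The modeling step above can be illustrated with a minimal sketch: estimating a first-order Markov transition matrix per student from a coded trail of assessment activities. The activity codes and trails below are invented for illustration, and the final clustering hint is only one plausible data-driven seeding in the spirit of K-EM, not the thesis's actual algorithm.

```python
import numpy as np

def transition_matrix(seq, n_states, alpha=1.0):
    """Row-stochastic first-order transition matrix with Laplace smoothing."""
    counts = np.full((n_states, n_states), alpha)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical activity codes: 0 = correct + confident, 1 = correct + unsure,
# 2 = incorrect, 3 = feedback-seeking
trails = [
    [0, 0, 1, 3, 0, 0, 1],
    [2, 3, 2, 3, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0],
]
per_student = np.array([transition_matrix(t, 4) for t in trails])

# Flattened per-student matrices could seed a mixture-of-Markov-chains EM run
# with a data-driven (rather than random) initialization, e.g. by clustering.
features = per_student.reshape(len(trails), -1)
```

Each row of a per-student matrix sums to one, so the matrices are directly comparable across students regardless of trail length.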

    Automated Gaze-Based Mind Wandering Detection during Computerized Learning in Classrooms

    We investigate the use of commercial off-the-shelf (COTS) eye-trackers to automatically detect mind wandering (a phenomenon involving a shift in attention from task-related to task-unrelated thoughts) during computerized learning. Study 1 (N = 135 high-school students) tested the feasibility of COTS eye tracking while students learned biology with an intelligent tutoring system called GuruTutor in their classroom. Gaze was successfully recorded in 85% of sessions; within those, we could track both eyes in 75% and at least one eye in 95% of cases. In Study 2, we used these data to build automated, student-independent detectors of mind wandering, obtaining accuracies (mind wandering F1 = 0.59) substantially better than chance (F1 = 0.24). Study 3 investigated the context-generalizability of mind wandering detectors, finding that models trained on data collected in a controlled laboratory generalized to the classroom more successfully than the reverse. Study 4 investigated gaze- and video-based mind wandering detection, finding that gaze-based detection was superior and that multimodal detection yielded an improvement only in limited circumstances. We tested live mind wandering detection on a new sample of 39 students in Study 5 and found that detection accuracy (mind wandering F1 = 0.40) was considerably above chance (F1 = 0.24), albeit lower than the offline detection accuracy from Study 2 (F1 = 0.59), a finding attributable to the handling of missing data. We discuss our next steps towards developing gaze-based attention-aware learning technologies to increase engagement and learning by combating mind wandering in classroom contexts.
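
The chance baseline quoted above (F1 = 0.24) can be made concrete with a small sketch. One common construction, assumed here since the abstract does not spell out the study's exact baseline, is an always-positive detector: its precision equals the mind-wandering base rate and its recall is 1.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def chance_f1(base_rate):
    # Always-predict-positive detector: precision = base_rate, recall = 1.0
    return f1_score(base_rate, 1.0)

# A hypothetical mind-wandering base rate of 3/22 (~13.6%) yields F1 = 0.24,
# matching the chance level reported in the abstract.
print(round(chance_f1(3 / 22), 2))
```

Under this reading, a detector F1 of 0.59 against a 0.24 chance floor represents a substantial improvement over trivially predicting mind wandering on every window.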

    Analyzing Learners Behavior in MOOCs: An Examination of Performance and Motivation Using a Data-Driven Approach

    Massive Open Online Courses (MOOCs) have experienced increasing use and popularity at highly ranked universities in recent years. The opportunity to access high-quality courseware within such platforms, while eliminating educational, financial, and geographical obstacles, has led to rapid growth in participant numbers. The increasing number and diversity of participating learners has opened up new horizons for the research community to investigate effective learning environments. Learning Analytics has been used to investigate the impact of engagement on student performance. However, an extensive literature review indicates that there is little research on MOOCs that analyzes the link between behavioral engagement and motivation as predictors of learning outcomes. In this study, we consider a dataset originating from online courses provided by Harvard University and the Massachusetts Institute of Technology, delivered through the edX platform [1]. Two sets of empirical experiments are conducted using both statistical and machine learning techniques. Statistical methods are used to examine the association between engagement level and performance, taking learners' educational backgrounds into account. The results indicate a significant gap between the successful and failing learner groups, with successful learners reading and watching course material to a higher degree. Machine learning algorithms are used to automatically detect learners who lack motivation early in the course, thus providing instructors with insight into potential student withdrawal.
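
Early detection of low-motivation learners, as described above, can be sketched with a simple classifier on early-course activity counts. Everything below is illustrative: the features (chapters read and videos watched in the first week), the labels, and the plain logistic-regression fit are assumptions, not the study's actual pipeline.

```python
import numpy as np

# Hypothetical early-course features per learner: [chapters_read, videos_watched]
X = np.array([[12, 8], [1, 0], [9, 10], [0, 1], [7, 5], [2, 1]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)  # 1 = stayed engaged, 0 = withdrew

X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize features
Xb = np.hstack([np.ones((len(X), 1)), X])      # prepend a bias column

w = np.zeros(Xb.shape[1])
for _ in range(2000):                          # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Learners whose predicted engagement probability is below 0.5 get flagged
# for an instructor intervention.
flagged = (1.0 / (1.0 + np.exp(-Xb @ w))) < 0.5
```

In practice such a model would be trained on one course run and applied to the opening weeks of the next, so instructors can reach out before withdrawal occurs.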

    LAView: Learning Analytics Dashboard Towards Evidence-based Education

    The 9th International Learning Analytics and Knowledge (LAK) Conference: March 4-8, 2019, Tempe, Arizona, USA.
    Learning analytics dashboards (LADs) have supported prior findings that visualizing learning behavior helps students reflect on their learning. We developed LAView, a LAD that can be easily integrated with different learning environments through LTI (Learning Tools Interoperability). In this paper, we focus on the context of eBook-based learning and present an overview of the engagement indicators that LAView visualizes. Its integrated email widget enables the teacher to send personalized feedback directly to selected cohorts of students, clustered by their engagement scores. These interventions and dashboard interactions are further tracked to extract evidence of learning.
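
The cohort selection step can be sketched as follows. The engagement scores and the simple tercile-style split are invented for illustration; the abstract does not specify LAView's actual clustering method.

```python
# Hypothetical per-student engagement scores produced by a dashboard.
scores = {"s01": 0.91, "s02": 0.34, "s03": 0.58, "s04": 0.12, "s05": 0.77}

ordered = sorted(scores, key=scores.get)   # ascending by engagement
k = max(1, len(ordered) // 3)
cohorts = {
    "low": ordered[:k],                    # candidates for a feedback email
    "mid": ordered[k:-k],
    "high": ordered[-k:],
}
```

An email widget like the one described above would then iterate over, say, `cohorts["low"]` to deliver a targeted intervention, and log each send as a trackable event.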

    Machine learning methods in predicting the student academic motivation

    Academic motivation is closely related to academic performance. For educators, it is equally important to detect students with a lack of academic motivation early as it is to detect those with a high level of academic motivation. In endeavouring to develop a classification model for predicting student academic motivation based on their behaviour in learning management system (LMS) courses, this paper aims to establish links between predicted student academic motivation and behaviour in the LMS course. Students from all years at the Faculty of Education in Osijek participated in this research. Three machine learning classifiers (neural networks, decision trees, and support vector machines) were used. To establish whether a significant difference in model performance exists, a t-test of the difference in proportions was used. Although all classifiers were successful, the neural network model was the most successful at detecting student academic motivation based on behaviour in the LMS course.
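
A test on the difference of proportions, as used above to compare classifiers, can be sketched like this. The counts are invented, and the large-sample z-form shown here is the standard construction; the paper's exact t-test variant may differ in small details.

```python
import math

def two_proportion_z(correct1, n1, correct2, n2):
    """z statistic for the difference of two proportions (pooled variance)."""
    p1, p2 = correct1 / n1, correct2 / n2
    pooled = (correct1 + correct2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: a neural network classifying 87/100 test cases
# correctly versus an SVM classifying 78/100 correctly.
z = two_proportion_z(87, 100, 78, 100)
```

With |z| below roughly 1.96, such a difference would not reach significance at the 5% level, which is why a formal test matters before declaring one classifier best.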

    EXPLICIT RULE LEARNING: A COGNITIVE TUTORIAL METHOD TO TRAIN USERS OF ARTIFICIAL INTELLIGENCE/MACHINE LEARNING SYSTEMS

    Today’s intelligent software systems, such as Artificial Intelligence/Machine Learning (AI/ML) systems, are sophisticated, complicated, and sometimes complex. In order to interact effectively with these systems, novice users need a certain level of understanding. An awareness of a system’s underlying principles, rationale, logic, and goals can enhance synergistic human-machine interaction. It also benefits users to know when they can trust a system’s output, and to discern boundary conditions that might change that output. The purpose of this research is to empirically test the viability of a Cognitive Tutorial approach called Explicit Rule Learning. Several approaches have been used to train humans in intelligent software systems; one of them is exemplar-based training. Although it has had some success, depending on the structure of the system, exemplars have limitations: they are oftentimes post hoc and case-based. Explicit Rule Learning is a global, rule-based training method that incorporates exemplars but goes beyond specific cases. It provides learners with rich, robust mental models and the ability to transfer the learned skills to novel, previously unencountered situations. Learners are given verbalizable, probabilistic if...then statements, supplemented with exemplars. This is followed by a series of practice problems, to which learners respond and receive immediate feedback on their correctness. The expectation is that this method results in a refined representation of the system’s underlying principles, and a richer, more robust mental model that enables the learner to simulate future states. Preliminary research helped to evaluate and refine Explicit Rule Learning. The final study in this research applied Explicit Rule Learning to a more real-world system, autonomous driving. The mixed-method, within-subject study used a more naturalistic environment.
Participants were given training material using the Explicit Rule Learning method and were subsequently tested on their ability to predict the autonomous vehicle’s actions. The results indicate that participants trained with the Explicit Rule Learning method were more proficient at predicting the autonomous vehicle’s actions. These results, together with the results of the preceding studies, indicate that Explicit Rule Learning is an effective method to accelerate the proficiency of learners of intelligent software systems. Explicit Rule Learning is a low-cost training intervention that can be adapted to many intelligent software systems, including the many types of AI/ML systems in today’s world.
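
The training loop described above (state a verbalizable probabilistic rule, pose practice items, give immediate correctness feedback) can be sketched as follows. The rules, scenarios, and answers are invented stand-ins for the dissertation's autonomous-driving materials.

```python
# Verbalizable, probabilistic if...then rules shown to the learner up front.
rules = [
    "IF a pedestrian is in the crosswalk THEN the vehicle yields (p ~ 0.95)",
    "IF the light turns yellow within 30 m THEN the vehicle brakes (p ~ 0.80)",
]

# Practice items with the prediction a trained learner should make.
practice_items = [
    {"scenario": "pedestrian steps into crosswalk", "correct": "yield"},
    {"scenario": "yellow light at 20 m", "correct": "brake"},
]

def practice(answers):
    """Score learner answers and return immediate per-item feedback."""
    feedback = []
    for item, answer in zip(practice_items, answers):
        ok = answer == item["correct"]
        feedback.append(f"{'Correct' if ok else 'Incorrect'}: {item['scenario']}")
    return feedback

for rule in rules:
    print(rule)
print(practice(["yield", "brake"]))
```

The immediate feedback after each response is the element the method credits with refining the learner's mental model of the system.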