
    Dropout Model Evaluation in MOOCs

    The field of learning analytics needs to adopt a more rigorous approach to predictive model evaluation that matches the complex practice of model-building. In this work, we present a procedure for statistically testing hypotheses about model performance which goes beyond the state of the practice in the community by analyzing both algorithms and methods for extracting features from raw data. We apply this method to a series of algorithms and feature sets derived from a large sample of Massive Open Online Courses (MOOCs). While a complete comparison of all potential modeling approaches is beyond the scope of this paper, we show that this approach reveals a large gap in dropout prediction performance between forum-, assignment-, and clickstream-based feature extraction methods: the latter is significantly better than the former two, which are in turn indistinguishable from one another. This work has methodological implications for evaluating predictive or AI-based models of student success, and practical implications for the design and targeting of at-risk student models and interventions.
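The abstract does not reproduce the testing procedure itself, but a common way to compare two feature-extraction methods across many courses is a paired test on per-course performance scores. A minimal sketch, assuming one AUC per course for each method (the function and all numbers below are illustrative assumptions, not values from the paper):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic over per-course scores of two models.

    Returns (t, degrees_of_freedom); compare |t| against the
    t-distribution critical value for df = n - 1.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Illustrative per-course AUCs: clickstream- vs forum-based features
clickstream = [0.85, 0.82, 0.88, 0.84, 0.86]
forum       = [0.70, 0.72, 0.69, 0.74, 0.71]
t, df = paired_t_statistic(clickstream, forum)
# a large |t| relative to the df=4, alpha=0.05 critical value (2.776)
# would indicate a significant per-course performance gap
```

Pairing by course controls for course-level difficulty, so the test isolates the difference between feature sets rather than between courses.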

    Intervention Prediction in MOOCs Based on Learners’ Comments: A Temporal Multi-input Approach Using Deep Learning and Transformer Models

    High learner dropout rates in MOOC-based education contexts have encouraged researchers to explore and propose different intervention models. In discussion forums, intervention is critical, not only to identify comments that require replies but also to consider learners who may require intervention in the form of staff support. There is a lack of research on the role of intervention based on learner comments in preventing learner dropout in MOOC-based settings. To fill this research gap, we propose an intervention model that detects when staff intervention is required to prevent learner dropout, using a dataset from FutureLearn. Our proposed model is based on learners’ comment history, integrating the most recent sequence of comments written by a learner to identify whether an intervention is necessary to prevent dropout. We aimed to find both the proper classifier and the number of comments representing the appropriate most recent sequence. We developed several intervention models utilising two forms of supervised multi-input machine learning (ML) classification models (deep learning and transformer). For the transformer model specifically, we propose Siamese and dual temporal multi-input architectures, which we term multi-Siamese BERT and multiple BERT. We further experimented with clustering learners based on their respective number of comments to analyse whether grouping as a pre-processing step improved the results. The results show that, whilst multi-input deep learning can be useful, a better overall effect is achieved with the transformer model, which has better performance in detecting learners who require intervention. Contrary to our expectations, however, clustering before prediction can have negative consequences on prediction outcomes, especially for the underrepresented class.
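The model architectures are not reproduced here, but the input construction the abstract describes, feeding a learner's k most recent comments as k parallel model inputs, can be sketched as follows (the function name and padding convention are our assumptions, not the paper's):

```python
def most_recent_inputs(comments, k=3, pad=""):
    """Select the k most recent comments as parallel model inputs.

    Learners with fewer than k comments are left-padded with an
    empty string so every learner yields exactly k inputs.
    """
    recent = comments[-k:]
    return [pad] * (k - len(recent)) + list(recent)

history = ["I'm lost on week 2", "Still stuck", "Thinking of quitting"]
inputs = most_recent_inputs(history, k=3)
# each element would be tokenised and fed to one branch of a
# multi-input (e.g. multi-Siamese) model
short_history = most_recent_inputs(["Only one comment"], k=3)
```

Fixing k is what makes a multi-input architecture possible: every branch of the network always receives exactly one comment slot, with shared (Siamese) or separate (dual) weights across branches.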

    A Multidimensional Deep Learner Model of Urgent Instructor Intervention Need in MOOC Forum Posts

    In recent years, massive open online courses (MOOCs) have become one of the most exciting innovations in e-learning environments. Thousands of learners around the world enroll on these online platforms to satisfy their learning needs (mostly) free of charge. However, despite the advantages MOOCs offer learners, dropout rates are high. Struggling learners often describe their feelings of confusion and need for help via forum posts. However, the often-huge numbers of posts on forums make it unlikely that instructors can respond to all learners, and many urgent posts are overlooked or discarded. To overcome this, mining raw data from learners’ posts may provide a helpful way of classifying posts where learners require urgent intervention from instructors, to help learners and reduce the current high dropout rates. In this paper we propose a method based on correlations between different dimensions of learners’ posts to determine the need for urgent intervention. Our initial statistical analysis found some interesting significant correlations between posts expressing sentiment, confusion, opinion, questions, and answers and the need for urgent intervention. Thus, we have developed a multidimensional deep learner model combining these features with natural language processing (NLP). To illustrate our method, we used a benchmark dataset of 29,598 posts from three different academic subject areas. The findings highlight that the combined, multidimensional features model is more effective than the text-only (NLP) analysis, showing that future models need to be optimised based on all these dimensions when classifying urgent posts.
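Combining text features with a post's other dimensions can be sketched as simple vector concatenation before the vectors reach a classifier. The vocabulary, dimension names, and flags below are illustrative assumptions, not the paper's actual feature set:

```python
def text_counts(text, vocab):
    """Toy text representation: counts of vocabulary words."""
    tokens = text.lower().split()
    return [tokens.count(w) for w in vocab]

def build_feature_vector(post, vocab):
    """Concatenate text features with per-dimension signals."""
    dims = [
        post["sentiment"],        # e.g. polarity score in -1..1
        int(post["confusion"]),   # does the post express confusion?
        int(post["is_question"]), # question vs. answer/opinion
    ]
    return text_counts(post["text"], vocab) + dims

vocab = ["help", "deadline", "confused"]
post = {"text": "Please help I am confused about the deadline",
        "sentiment": -0.4, "confusion": True, "is_question": True}
vec = build_feature_vector(post, vocab)
# -> [1, 1, 1, -0.4, 1, 1]: three word counts plus three dimensions
```

In the paper's setting the text part would come from an NLP encoder rather than raw word counts, but the idea of appending the correlated dimensions to the text representation is the same.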

    A Brief Survey of Deep Learning Approaches for Learning Analytics on MOOCs

    Massive Open Online Course (MOOC) systems have become prevalent in recent years and have drawn increased attention, among other reasons due to the coronavirus pandemic’s impact. However, there is a well-known higher chance of dropout from MOOCs than from conventional off-line courses. Researchers have implemented extensive methods to explore the reasons behind learner attrition or lack of interest, in order to apply timely interventions. The recent success of neural networks has revolutionised extensive Learning Analytics (LA) tasks. More recently, the associated deep learning techniques have increasingly been deployed to address the dropout prediction problem. This survey gives a timely and succinct overview of deep learning techniques for MOOC learning analytics. We mainly analyse trends in feature processing and in model design for dropout prediction. Moreover, recent incremental improvements over existing deep learning techniques and the commonly used public data sets are presented. Finally, the paper proposes three future research directions in the field: knowledge graphs with learning analytics, comprehensive social network analysis, and composite behavioural analysis.

    Analyzing Learners Behavior in MOOCs: An Examination of Performance and Motivation Using a Data-Driven Approach

    Massive Open Online Courses (MOOCs) have been experiencing increasing use and popularity in highly ranked universities in recent years. The opportunity to access high-quality courseware content within such platforms, while eliminating educational, financial, and geographical obstacles, has led to rapid growth in participant numbers. The increasing number and diversity of participating learners have opened up new horizons for the research community in the investigation of effective learning environments. Learning Analytics has been used to investigate the impact of engagement on student performance. However, an extensive literature review indicates that there is little research on the impact of MOOCs, particularly in analyzing the link between behavioral engagement and motivation as predictors of learning outcomes. In this study, we consider a dataset originating from online courses provided by Harvard University and the Massachusetts Institute of Technology, delivered through the edX platform [1]. Two sets of empirical experiments are conducted using both statistical and machine learning techniques. Statistical methods are used to examine the association between engagement level and performance, including consideration of learners’ educational backgrounds. The results indicate a significant gap between the successful and failing learner groups, with successful learners found to read and watch course material to a higher degree. Machine learning algorithms are used to automatically detect learners who are lacking in motivation early in the course, thus providing instructors with insight regarding student withdrawal.

    Predicting Student Success in a Self-Paced Mathematics MOOC

    While predicting completion in Massive Open Online Courses (MOOCs) has been an active area of research in recent years, predicting completion in self-paced MOOCs, the fastest growing segment of open online courses, has largely been ignored. Using learning analytics and educational data mining techniques, this study examined data generated by over 4,600 individuals working in a self-paced, open-enrollment college algebra MOOC over a period of eight months. Although just 4% of these students completed the course, models were developed that could predict correctly nearly 80% of the time which students would complete the course and which would not, based on each student’s first day of work in the online course. Logistic regression was used as the primary tool to predict completion and focused on variables associated with self-regulated learning (SRL) and demographic variables available from survey information gathered as students begin edX courses (the MOOC platform employed). The strongest SRL predictor was the amount of time students spent in the course on their first day. The number of math skills obtained on the first day and the pace at which these skills were gained were also predictors, although pace was negatively correlated with completion. Prediction models using only SRL data obtained on the first day in the course correctly predicted course completion 70% of the time, whereas models based on first-day SRL and demographic data made correct predictions 79% of the time.
    Doctoral Dissertation, Educational Technology, 201
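The study's logistic regression can be illustrated with a hand-written prediction function over the first-day SRL features the abstract names. The coefficients below are made up for illustration; only their signs follow the abstract (time and skills positive, pace negative):

```python
import math

def completion_probability(minutes_day1, skills_day1, pace):
    """Logistic model of completion from first-day SRL features.

    Coefficients are hypothetical; in the study they would be
    fitted to the data from the ~4,600 enrolled students.
    """
    intercept = -3.0
    z = (intercept
         + 0.02 * minutes_day1   # more first-day time -> higher odds
         + 0.30 * skills_day1    # more skills gained -> higher odds
         - 0.50 * pace)          # faster pace correlated negatively
    return 1.0 / (1.0 + math.exp(-z))

p_engaged = completion_probability(minutes_day1=90, skills_day1=5, pace=1.0)
p_brief   = completion_probability(minutes_day1=10, skills_day1=1, pace=1.0)
# a student with a longer, more productive first day gets a
# higher predicted completion probability
```

A probability threshold (e.g. 0.5, or one tuned for the 4% completion base rate) would then turn these scores into the complete/not-complete predictions the abstract evaluates.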

    How learners’ interactions sustain engagement: a MOOC case study

    In 2015, 35 million learners participated online in 4,200 MOOCs organised by over 500 universities. Learning designers orchestrate MOOC content to engage learners at scale and retain interest by carefully mixing videos, lectures, readings, quizzes, and discussions. Universally, far fewer people actually participate in MOOCs than originally sign up, with steady attrition as courses progress. Studies have correlated social engagement with completion rates. The FutureLearn MOOC platform specifically provides opportunities to share opinions and to reflect by posting comments, replying, or following discussion threads. This paper investigates learners’ social behaviours in MOOCs and the impact of engagement on course completion. A preliminary study suggested that dropout rates would be lower when learners engage in repeated and frequent social interactions. We subsequently reviewed the literature on prediction models and applied social network analysis techniques to characterise participants’ online interactions, examining implications for participant achievements. We analysed discussions in an eight-week FutureLearn MOOC with 9,855 enrolled learners. Findings indicate that if learners start following someone, the probability of their finishing the course increases; if learners also interact with those they follow, they are highly likely to complete. Both are important factors to add to the completion prediction model.
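The two predictive factors identified, following someone and interacting with those you follow, can be extracted from a platform's social graph roughly as follows (the data structures and names are assumptions for illustration, not FutureLearn's actual schema):

```python
def social_features(learner, follows, replies):
    """Boolean completion predictors from social interactions.

    follows: dict mapping a learner to the set of learners they follow
    replies: set of (author, replied_to) comment-reply pairs
    """
    followed = follows.get(learner, set())
    follows_someone = bool(followed)
    interacts_with_followed = any(
        (learner, other) in replies for other in followed
    )
    return follows_someone, interacts_with_followed

follows = {"alice": {"bob", "carol"}, "dave": set()}
replies = {("alice", "bob")}
f_alice = social_features("alice", follows, replies)  # (True, True)
f_dave  = social_features("dave", follows, replies)   # (False, False)
```

Per the findings, a learner like "alice" (follows others and replies to them) would be scored as highly likely to complete, while "dave" carries neither positive signal.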