
    A Multidimensional Deep Learner Model of Urgent Instructor Intervention Need in MOOC Forum Posts

    In recent years, massive open online courses (MOOCs) have become one of the most exciting innovations in e-learning environments. Thousands of learners around the world enrol on these online platforms to satisfy their learning needs (mostly) free of charge. However, despite the advantages MOOCs offer learners, dropout rates are high. Struggling learners often describe their feelings of confusion and need for help via forum posts. However, the often huge number of posts on forums makes it unlikely that instructors can respond to all learners, and many of these urgent posts are overlooked or discarded. To overcome this, mining learners' raw post data may provide a helpful way of classifying posts where learners require urgent intervention from instructors, both to help learners and to reduce the current high dropout rates. In this paper, we propose a method based on correlations between different dimensions of learners' posts to determine the need for urgent intervention. Our initial statistical analysis found significant correlations between posts expressing sentiment, confusion, opinion, questions, and answers and the need for urgent intervention. Thus, we developed a multidimensional deep learner model combining these features with natural language processing (NLP). To illustrate our method, we used a benchmark dataset of 29,598 posts from three different academic subject areas. The findings highlight that the combined, multidimensional features model is more effective than the text-only (NLP) analysis, showing that future models need to be optimised based on all these dimensions when classifying urgent posts.
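
    To make the multidimensional idea concrete, below is a minimal sketch of a two-branch urgency classifier in Keras; the feature names, layer sizes, and preprocessing are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: text branch + post-dimension branch -> urgency prediction.
# Assumptions: vocabulary size, sequence length, and layer sizes are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20_000  # assumed vocabulary size
MAX_LEN = 200        # assumed maximum post length in tokens
N_DIMS = 5           # post dimensions: sentiment, confusion, opinion, question, answer

# Text branch: token ids -> embedding -> bidirectional LSTM.
text_in = keras.Input(shape=(MAX_LEN,), name="tokens")
x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = layers.Bidirectional(layers.LSTM(64))(x)

# Dimension branch: one numeric score per post dimension.
dims_in = keras.Input(shape=(N_DIMS,), name="dimensions")
d = layers.Dense(16, activation="relu")(dims_in)

# Fuse both views and predict the (binary) need for urgent intervention.
merged = layers.concatenate([x, d])
out = layers.Dense(1, activation="sigmoid", name="urgent")(merged)

model = keras.Model(inputs=[text_in, dims_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```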

    Predicting the Need for Urgent Instructor Intervention in MOOC Environments

    In recent years, massive open online courses (MOOCs) have become universal knowledge resources and arguably one of the most exciting innovations in e-learning environments. MOOC platforms comprise numerous courses covering a wide range of subjects and domains. Thousands of learners around the world enrol on these online platforms to satisfy their learning needs (mostly) free of charge. However, the retention rates of MOOC courses (i.e., the proportion of learners who successfully complete a course of study) are low (around 10% on average); dropout rates tend to be very high (around 90%). The principal channel via which MOOC learners can communicate their difficulties with the learning content and ask instructors for assistance is posting in a dedicated MOOC forum. Importantly, in the case of learners who are suffering from burnout or stress, some of these posts require urgent intervention. Urgent instructor intervention in response to learner requests for assistance via posts on MOOC forums has therefore become an important research topic. Timely intervention by MOOC instructors may mitigate dropout issues and make the difference between a learner dropping out or staying on a course. However, due to the typically extremely high learner-to-instructor ratio in MOOCs and the often huge numbers of posts on forums, and although truly urgent posts are rare, managing them can be very challenging, if not sometimes impossible. Instructors can find it challenging to monitor all existing posts and identify which require immediate intervention to help learners, encourage retention, and reduce the current high dropout rates. The main objective of this research project, therefore, was to mine and analyse learners' MOOC posts as a fundamental step towards understanding their need for instructor intervention. To achieve this, the researcher proposed and built comprehensive classification models to predict the need for instructor intervention. The ultimate goal is to help instructors by guiding them to posts, topics, and learners that require immediate intervention. Given this research aim, the researcher conducted different experiments to fill the gap in the literature, based on datasets from different platforms (the FutureLearn platform and the Stanford MOOCPosts dataset). For the former, three MOOC corpora were prepared: two of them gold-standard MOOC corpora for identifying urgent posts, annotated by selected experts in the field; the third a corpus detailing learner dropout. Based on these datasets, different architectures and classification models were proposed, based on traditional machine learning and deep learning approaches. In this thesis, the task of determining the need for instructor intervention was tackled from three perspectives: (i) identifying relevant posts, (ii) identifying relevant topics, and (iii) identifying relevant learners. Posts written by learners were classified into two categories: (i) urgent intervention and (ii) non-urgent intervention. Learners, in turn, were classified into: (i) requiring instructor intervention (at risk of dropout) and (ii) not needing instructor intervention (completers). For identifying posts, two experiments contributed to this field. The first is a novel classifier based on a deep learning model that integrates novel MOOC post dimensions, such as numerical data, in addition to textual data; this represents a novel contribution to the literature, as all available models at the time of writing were based on text only.
The results demonstrate that the combined, multidimensional features model proposed in this project is more effective than the text-only model. The second contribution relates to creating various simple and hybrid deep learning models by applying plug-and-play techniques with different types of inputs (word-based or word-character-based) and different ways of representing target input words as vector representations. According to the experimental findings, employing Bidirectional Encoder Representations from Transformers (BERT) for word embedding is more effective at the intervention task than word2vec across all models. Interestingly, adding word-character inputs with BERT does not improve performance as it does for word2vec. Additionally, on the task of identifying topics, this is the first time in the literature that specific language terms identifying the need for urgent intervention in MOOCs were obtained. This was achieved by analysing learner MOOC posts using latent Dirichlet allocation (LDA), and offers a visualisation tool for instructors or learners that may assist them and improve instructor intervention. In addition, this thesis contributes to the literature by creating mechanisms for identifying MOOC learners who may need instructor intervention in a new context, i.e., by using their historical online forum posts as a multi-input approach for deep learning architectures and Transformer models. The findings demonstrate that the Transformer model is more effective at identifying MOOC learners who require instructor intervention. Next, the thesis sought to expand its methodology to identify posts that relate to learner behaviour, which is also a novel contribution, by proposing a novel priority model identifying the urgency of intervention, built on learner histories. This model can classify learners into three groups: low risk, mid risk, and high risk. The results show that the completion rates of high-risk learners are very low, which confirms the importance of this model. Next, as MOOC data in terms of urgent posts tend to be highly unbalanced, the thesis contributes by examining various data balancing methods to spot situations in which MOOC posts urgently require instructor assistance. This included developing learner and instructor models to assist instructors in responding to urgent MOOC posts. The results show that models with undersampling can predict the most urgent cases, and that 3x augmentation + undersampling usually attains the best performance. Finally, for the first time, this thesis contributes to the literature by applying text classification explainability (eXplainable Artificial Intelligence (XAI)) to an instructor intervention model, demonstrating how a reliable predictor combined with XAI and colour-coded visualisation could be utilised to assist instructors in deciding when posts require urgent intervention, as well as to support annotators in creating high-quality, gold-standard datasets that determine cases where urgent intervention is required.
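
As an illustration of the LDA topic step described above, the sketch below uses scikit-learn to surface the highest-weight terms per topic; the placeholder posts, topic count, and vectoriser settings are assumptions rather than the thesis's exact setup.

```python
# Minimal sketch: surface per-topic terms with latent Dirichlet allocation.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [  # placeholder posts; the thesis uses labelled MOOC corpora
    "I am completely lost on assignment two, please help",
    "really enjoyed this week's videos, great course",
    "the quiz deadline passed and I cannot submit, this is urgent",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top terms per topic, e.g. terms that may signal urgency.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```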

    Intervention Prediction in MOOCs Based on Learners’ Comments: A Temporal Multi-input Approach Using Deep Learning and Transformer Models

    High learner dropout rates in MOOC-based education contexts have encouraged researchers to explore and propose different intervention models. In discussion forums, intervention is critical, not only to identify comments that require replies but also to consider learners who may require intervention in the form of staff support. There is a lack of research on the role of intervention based on learner comments in preventing learner dropout in MOOC-based settings. To fill this research gap, we propose an intervention model that detects when staff intervention is required to prevent learner dropout, using a dataset from FutureLearn. Our proposed model was based on learners' comment history, integrating the most recent sequence of comments written by learners to identify whether an intervention was necessary to prevent dropout. We aimed to find both the proper classifier and the number of comments representing the appropriate most recent sequence. We developed several intervention models utilising two forms of supervised multi-input machine learning (ML) classification models (deep learning and transformer). For the transformer model, specifically, we propose siamese and dual temporal multi-input architectures, which we term multi-siamese BERT and multiple BERT. We further experimented with clustering learners based on their respective number of comments to analyse whether grouping as a pre-processing step improved the results. The results show that, whilst multi-input for deep learning can be useful, a better overall effect is achieved by using the transformer model, which has better performance in detecting learners who require intervention. Contrary to our expectations, however, clustering before prediction can have negative consequences on prediction outcomes, especially in the underrepresented class.
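
    To illustrate the shared-encoder idea, here is a minimal PyTorch sketch of a "siamese" multi-input classifier over a learner's most recent comments; the model name, [CLS] pooling, comment window, and head size are assumptions, and the paper's exact multi-siamese BERT and multiple BERT architectures are not reproduced here.

```python
# Minimal sketch: one BERT encoder shared across the n most recent comments.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SiameseIntervention(nn.Module):
    def __init__(self, name="bert-base-uncased", n_comments=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)  # weights shared across inputs
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden * n_comments, 2)   # intervene / no intervention

    def forward(self, batches):
        # `batches`: list of tokenised comment batches, most recent last.
        pooled = [self.encoder(**b).last_hidden_state[:, 0] for b in batches]  # [CLS]
        return self.head(torch.cat(pooled, dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
comments = ["I give up", "why is this so hard", "can someone explain step 3?"]
batches = [tok(c, return_tensors="pt", truncation=True) for c in comments]
logits = SiameseIntervention()(batches)
```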

    Predicting Paid Certification in Massive Open Online Courses

    Massive open online courses (MOOCs) have been proliferating because of the free or low-cost offering of content for learners, attracting the attention of many stakeholders across the entire educational landscape. Since 2012, the year coined "the Year of the MOOCs", several platforms have gathered millions of learners in just a decade. Nevertheless, the certification rate of both free and paid courses has been low: only about 4.5–13% and 1–3%, respectively, of the total number of enrolled learners obtain a certificate at the end of their courses. Still, most research concentrates on completion, ignoring the certification problem and especially its financial aspects. Thus, the research described in the present thesis aimed to investigate paid certification in MOOCs, for the first time, in a comprehensive way, and as early as the first week of the course, by exploring its various levels. First, the latent correlation between learner activities and their paid certification decisions was examined by (1) statistically comparing the activities of non-paying learners with those of course purchasers and (2) predicting paid certification using different machine learning (ML) techniques. Our temporal (weekly) analysis showed statistical significance at various levels when comparing the activities of non-paying learners with those of certificate purchasers across the five courses analysed. Furthermore, we used the learners' activities (number of step accesses, attempts, correct and wrong answers, and time spent on learning steps) to build our paid certification predictor, which achieved promising balanced accuracies (BAs), ranging from 0.77 to 0.95. Having employed simple predictions based on a few clickstream variables, we then analysed in more depth what other information can be extracted from MOOC interaction (namely discussion forums) for paid certification prediction. To better explore the learners' discussion forums, we built, as an original contribution, MOOCSent, a cross-platform review-based sentiment classifier, using over 1.2 million MOOC sentiment-labelled reviews. MOOCSent addresses various limitations of current sentiment classifiers, including (1) using a single source of data (previous literature on sentiment classification in MOOCs was based on single platforms only, and hence less generalisable, with a relatively low number of instances compared to our obtained dataset); (2) limited model outputs, where most current models are based on a 2-polar classifier (positive or negative only); (3) disregarding important sentiment indicators, such as emojis and emoticons, during text embedding; and (4) reporting average performance metrics only, preventing the evaluation of model performance at the level of class (sentiment). Finally, and with the help of MOOCSent, we used the learners' discussion forums to predict paid certification, after annotating learners' comments and replies with sentiment using MOOCSent. This multi-input model contains raw data (learner textual inputs), sentiment classification generated by MOOCSent, computed features (number of likes received for each textual input), and several features extracted from the texts (character counts, word counts, and part-of-speech (POS) tags for each textual instance).
This experiment adopted various deep predictive approaches, specifically those that allow a multi-input architecture, to investigate early (i.e., weekly) whether data obtained from MOOC learners' interaction in discussion forums can predict learners' purchase decisions (certification). Considering the staggeringly low rate of paid certification in MOOCs, the present thesis contributes to the knowledge and field of MOOC learner analytics by predicting paid certification, for the first time, at such a comprehensive (with data from over 200 thousand learners from 5 courses in different disciplines), actionable (analysing learners' decisions from the first week of the course), and longitudinal (with 23 runs from 2013 to 2017) scale. The present thesis contributes by (1) investigating various conventional and deep ML approaches for predicting paid certification in MOOCs using learner clickstreams (Chapter 5) and course discussion forums (Chapter 7); (2) building the largest MOOC sentiment classifier (MOOCSent), based on learners' reviews of courses from the leading MOOC platforms, namely Coursera, FutureLearn and Udemy, which handles emojis and emoticons using dedicated lexicons containing over three thousand corresponding explanatory words/phrases; and (3) proposing and developing, for the first time, a multi-input model for predicting certification based on data from discussion forums, which synchronously processes the textual (comments and replies) and numerical (number of likes posted and received, sentiments) data from the forums, adapting the suitable classifier for each type of data, as explained in detail in Chapter 7.
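
As a hedged illustration of the clickstream-based predictor evaluated with balanced accuracy, the sketch below trains a classifier on synthetic week-1 activity counts standing in for the features named above (step accesses, attempts, correct and wrong answers, time spent); the data and model choice are assumptions, not the thesis's pipeline.

```python
# Minimal sketch: predict paid certification from week-1 activity features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for: step accesses, attempts, correct answers,
# wrong answers, minutes spent (aggregated over week 1).
X = rng.poisson(lam=[20, 5, 3, 2, 60], size=(n, 5)).astype(float)
y = (rng.random(n) < 0.05).astype(int)  # ~5% purchase a certificate (rare class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```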

    A Brief Survey of Deep Learning Approaches for Learning Analytics on MOOCs

    Massive Open Online Course (MOOC) systems have become prevalent in recent years and have drawn increasing attention, among other reasons due to the coronavirus pandemic's impact. However, there is a well-known higher chance of dropout from MOOCs than from conventional off-line courses. Researchers have implemented extensive methods to explore the reasons behind learner attrition or lack of interest, in order to apply timely interventions. The recent success of neural networks has revolutionised extensive Learning Analytics (LA) tasks. More recently, the associated deep learning techniques have been increasingly deployed to address the dropout prediction problem. This survey gives a timely and succinct overview of deep learning techniques for MOOCs' learning analytics. We mainly analyse the trends in feature processing and model design for dropout prediction. Moreover, recent incremental improvements over existing deep learning techniques and the commonly used public datasets are presented. Finally, the paper proposes three future research directions in the field: knowledge graphs with learning analytics, comprehensive social network analysis, and composite behavioural analysis.

    Latent deep sequential learning of behavioural sequences

    The growing use of asynchronous online education (MOOCs and e-courses) in recent years has resulted in increased economic and scientific productivity, a trend that has only intensified during the coronavirus pandemic. The widespread usage of online learning environments (OLEs) has increased enrolment, including previously excluded students, resulting in a far higher dropout rate than in conventional classrooms. Dropouts are a significant problem, especially considering the rising proliferation of online courses, from individual MOOCs to whole academic programmes, due to the pandemic. Increased efficiency in dropout prevention techniques is vital for institutions, students, and faculty members and must be prioritised. In response to the resurgence of interest in the student dropout prediction (SDP) issue, there has been a significant rise in contributions to the literature on this topic. An in-depth review of the current state-of-the-art literature on SDP is provided, with a special emphasis on machine learning prediction approaches; however, this is not the only focus of the thesis. We propose a complete hierarchical categorisation of the current literature that corresponds to the process of design decisions in SDP, and we demonstrate how it may be implemented. To enable comparative analysis, we develop a formal notation for universally defining the multiple dropout models examined by scholars in the area, including online degrees and their attributes. We look at several other important factors that have received less attention in the literature, such as evaluation metrics, acquired data, and privacy concerns. We emphasise deep sequential machine learning approaches, which are considered among the most successful solutions available in this field of study. Most importantly, we present a novel technique, namely GRU-AE, for tackling the SDP problem using hidden spatial information and time-related data from student trajectories. Our method is capable of dealing with data imbalance and time-series sparsity challenges. The proposed technique outperforms current methods in various situations, including the complex scenario of full-length courses (such as online degrees); this situation was thought to be less common before the outbreak, but it is now deemed important. Finally, we extend our findings to different contexts with a similar characterisation (temporal sequences of behavioural labels). Specifically, we show that our technique can be used in real-world circumstances where the unbalanced nature of the data can be mitigated by using a class balancing technique (i.e., ADASYN), e.g., survival prediction in critical care telehealth systems, where the balancing technique alleviates the problems of inter-activity reliance and sparsity, resulting in an overall improvement in performance.
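
    A minimal sketch of a GRU autoencoder over sequences of behavioural labels, in the spirit of the GRU-AE described above; the sequence length, label vocabulary, and layer sizes are assumptions, and the thesis's exact design is not reproduced.

```python
# Minimal sketch: GRU autoencoder whose latent vector summarises a trajectory.
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN = 50   # behavioural labels per student (assumed)
N_LABELS = 10  # size of the behavioural-label vocabulary, 0 = padding (assumed)
LATENT = 32

inp = keras.Input(shape=(SEQ_LEN,))
x = layers.Embedding(N_LABELS, 16, mask_zero=True)(inp)
z = layers.GRU(LATENT)(x)                         # encoder -> latent trajectory
x = layers.RepeatVector(SEQ_LEN)(z)
x = layers.GRU(LATENT, return_sequences=True)(x)  # decoder
out = layers.TimeDistributed(layers.Dense(N_LABELS, activation="softmax"))(x)

gru_ae = keras.Model(inp, out)
gru_ae.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# The latent vector z can feed a downstream dropout classifier; class
# imbalance can be mitigated with ADASYN (imblearn.over_sampling.ADASYN).
```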

    Urgency Analysis of Learners’ Comments: An Automated Intervention Priority Model for MOOC

    Recently, the growing number of learners in Massive Open Online Course (MOOC) environments generate a vast amount of online comments via social interactions, general discussions, expressing feelings, or asking for help. Concomitantly, learner dropout, at any time during MOOC courses, is very high, whilst the number of learners completing (completers) is low. Urgent intervention and attention may alleviate this problem. Analysing and mining learner comments is a fundamental step towards understanding their need for intervention from instructors. Here, we explore a dataset from a FutureLearn MOOC course. We find that (1) learners whose comments often need urgent intervention tend to write many comments in general; (2) the motivation to access more steps (i.e., learning resources) is higher in learners without many comments needing intervention than in learners needing intervention; and (3) learners who have many comments that need intervention are less likely to complete the course (13%). Therefore, we propose a new priority model for the urgency of intervention, built on learner histories: past urgency, sentiment analysis, and step access.
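
    To illustrate the shape of such a priority model, the sketch below maps the three history signals named above to three risk bands; the weights and thresholds are invented for illustration and are not the paper's fitted model.

```python
# Minimal sketch: combine history signals into a coarse priority band.
def priority(past_urgent_ratio: float, mean_sentiment: float,
             step_access_ratio: float) -> str:
    """past_urgent_ratio: fraction of past comments flagged urgent (0..1);
    mean_sentiment: mean comment sentiment, -1 (negative) to 1 (positive);
    step_access_ratio: fraction of course steps accessed (0..1)."""
    score = (0.5 * past_urgent_ratio
             + 0.25 * (1 - step_access_ratio)
             + 0.25 * (1 - mean_sentiment) / 2)  # illustrative weights
    if score >= 0.6:
        return "high risk"
    if score >= 0.3:
        return "mid risk"
    return "low risk"

print(priority(0.8, -0.4, 0.2))  # -> high risk
```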

    Solving the imbalanced data issue: automatic urgency detection for instructor assistance in MOOC discussion forums

    In MOOCs, identifying urgent comments on discussion forums is an ongoing challenge. Whilst urgent comments require immediate reactions from instructors, to improve interaction with their learners and potentially reduce drop-out rates, the task is difficult, as truly urgent comments are rare. From a data analytics perspective, this represents a highly unbalanced (sparse) dataset. Here, we aim to automate the urgent comment identification process, based on fine-grained learner modelling, to be used for automatic recommendations to instructors. To showcase and compare these models, we apply them to the first gold-standard dataset for Urgent iNstructor InTErvention (UNITE), which we created by labelling FutureLearn MOOC data. We implement both benchmark shallow classifiers and deep learning. Importantly, we not only compare, for the first time for the unbalanced problem, several data balancing techniques, comprising text augmentation, text augmentation with undersampling, and undersampling alone, but also propose several new pipelines for combining different augmenters for text augmentation. Results show that models with undersampling can predict most urgent cases, and that 3x augmentation + undersampling usually attains the best performance. We additionally validate the best models on a generic benchmark dataset (Stanford). As a case study, we showcase how naïve Bayes with count vectors can adaptively support instructors in answering learner questions/comments, potentially saving time or increasing efficiency in supporting learners. Finally, we show that the errors from the classifier mirror the disagreements between annotators. Thus, our proposed algorithms perform at least as well as a 'super-diligent' human instructor (with the time to consider all comments).
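
    As a sketch of the undersampling plus naïve Bayes count-vector baseline, the pipeline below uses imbalanced-learn to rebalance a toy corpus standing in for UNITE; the augmentation step is omitted for brevity, and the data are placeholders.

```python
# Minimal sketch: count vectors -> random undersampling -> naive Bayes.
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = ["please help, I am stuck and the deadline is today",
         "loved this week's material",
         "nice video", "thanks everyone",
         "urgent: my submission will not upload"]
urgent = [1, 0, 0, 0, 1]  # toy labels; urgency is the rare class in practice

clf = Pipeline([
    ("vec", CountVectorizer()),                     # count-vector features
    ("under", RandomUnderSampler(random_state=0)),  # rebalance before fitting
    ("nb", MultinomialNB()),
])
clf.fit(posts, urgent)
print(clf.predict(["I really need an instructor to respond"]))
```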