
    Towards Feasible Instructor Intervention in MOOC discussion forums

    Massive Open Online Courses allow numerous people from around the world to access knowledge that they would otherwise not have. However, the high student-to-instructor ratio in MOOCs restricts instructors’ ability to facilitate student learning by intervening in discussion forums, as they do in face-to-face classrooms. Instructors need automated guidance on when and how to intervene in discussion forums. Using a typology of pedagogical interventions derived from prior research, we annotate a large corpus of discussion forum content to enable supervised machine learning to automatically identify interventions that promote student learning. Such machine learning models may allow the building of dashboards that automatically prompt instructors on when and how to intervene in discussion forums. In the longer term, it may be possible to automate these interventions, relieving instructors of this effort. Such automated approaches are essential for allowing good pedagogical practices to scale in the context of MOOC discussion forums.
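The annotate-then-train workflow described above can be sketched minimally. The post texts and intervention labels below are illustrative assumptions only, not the paper's actual typology; checking the label distribution is a natural first step before any supervised learning.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AnnotatedPost:
    """One forum post paired with its annotated intervention label."""
    text: str
    intervention: str  # hypothetical labels, not the paper's typology

# Toy annotated corpus (illustrative only)
corpus = [
    AnnotatedPost("Can someone explain gradient descent?", "clarify_concept"),
    AnnotatedPost("I feel like giving up on this course.", "encourage"),
    AnnotatedPost("Thanks, that video helped a lot!", "no_intervention"),
]

def label_distribution(posts):
    """Count how often each label occurs - worth checking before training,
    since rare labels make supervised learning much harder."""
    return Counter(p.intervention for p in posts)
```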

    Urgency Analysis of Learners’ Comments: An Automated Intervention Priority Model for MOOC

    Recently, the growing number of learners in Massive Open Online Course (MOOC) environments has generated a vast amount of online comments via social interactions, general discussions, expressions of feelings, or requests for help. Concomitantly, learner dropout at any time during MOOC courses is very high, whilst the number of learners completing (completers) is low. Urgent intervention and attention may alleviate this problem. Analysing and mining learner comments is a fundamental step towards understanding their need for intervention from instructors. Here, we explore a dataset from a FutureLearn MOOC course. We find that (1) learners who write many comments that need urgent intervention tend to write many comments in general; (2) the motivation to access more steps (i.e., learning resources) is higher in learners without many comments needing intervention than in learners needing intervention; and (3) learners who have many comments that need intervention are less likely to complete the course (13%). Therefore, we propose a new priority model for the urgency of intervention built on learner histories: past urgency, sentiment analysis and step access.
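A priority model of this kind could, for instance, combine the three history signals into a single score. The weights, value ranges, and function below are assumptions for illustration, not the paper's actual model.

```python
def intervention_priority(past_urgency, sentiment, step_access,
                          w_urgency=0.5, w_sentiment=0.3, w_access=0.2):
    """Combine three learner-history signals into a priority in [0, 1].

    past_urgency: fraction of the learner's past comments flagged urgent (0..1)
    sentiment:    mean sentiment of the learner's comments, -1 .. 1
    step_access:  fraction of course steps the learner has accessed (0..1)
    Higher scores mean the learner needs instructor attention sooner.
    """
    negativity = (1.0 - sentiment) / 2.0   # map [-1, 1] onto [0, 1]
    low_access = 1.0 - step_access         # low engagement raises priority
    return (w_urgency * past_urgency
            + w_sentiment * negativity
            + w_access * low_access)
```

Under these toy weights, a struggling learner (high past urgency, negative sentiment, few steps accessed) scores far higher than an engaged, positive one.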

    Solving the imbalanced data issue: automatic urgency detection for instructor assistance in MOOC discussion forums

    In MOOCs, identifying urgent comments in discussion forums is an ongoing challenge. Whilst urgent comments require immediate reactions from instructors, to improve interaction with their learners and potentially reduce drop-out rates, the task is difficult, as truly urgent comments are rare. From a data analytics perspective, this represents a highly unbalanced (sparse) dataset. Here, we aim to automate the urgent-comment identification process, based on fine-grained learner modelling, to be used for automatic recommendations to instructors. To showcase and compare these models, we apply them to the first gold-standard dataset for Urgent iNstructor InTErvention (UNITE), which we created by labelling FutureLearn MOOC data. We implement both benchmark shallow classifiers and deep learning. Importantly, we not only compare, for the first time for the unbalanced problem, several data-balancing techniques, comprising text augmentation, text augmentation with undersampling, and undersampling, but also propose several new pipelines for combining different augmenters for text augmentation. Results show that models with undersampling can predict most urgent cases, and that 3X augmentation + undersampling usually attains the best performance. We additionally validate the best models via a generic benchmark dataset (Stanford). As a case study, we showcase how naïve Bayes with a count vector can adaptively support instructors in answering learner questions/comments, potentially saving time or increasing efficiency in supporting learners. Finally, we show that the errors from the classifier mirror the disagreements between annotators. Thus, our proposed algorithms perform at least as well as a ‘super-diligent’ human instructor (with the time to consider all comments).
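Two of the ingredients mentioned above, random undersampling of the majority class and naïve Bayes over word counts, can be sketched with the standard library alone. The toy data, whitespace tokenisation, and class structure below are assumptions for illustration; a real pipeline would use library implementations.

```python
import math
import random
from collections import Counter

def undersample(texts, labels, seed=0):
    """Random undersampling: keep all minority-class examples and an
    equal-sized random subset of every other class."""
    rng = random.Random(seed)
    by_label = {}
    for t, y in zip(texts, labels):
        by_label.setdefault(y, []).append(t)
    n_min = min(len(v) for v in by_label.values())
    pairs = []
    for y, items in by_label.items():
        pairs.extend((t, y) for t in rng.sample(items, n_min))
    rng.shuffle(pairs)
    return [t for t, _ in pairs], [y for _, y in pairs]

class CountNB:
    """Multinomial naive Bayes over word counts, with Laplace smoothing."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        for t, y in zip(texts, labels):
            self.word_counts[y].update(t.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = math.log(self.class_counts[c] / total)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[c][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

For example, after balancing a skewed set of comments with `undersample`, fitting `CountNB` on a handful of urgent/non-urgent examples is enough to separate "please help" phrasing from "thanks, great course" phrasing.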

    Sentiment analysis in MOOCs: a case study

    Proceedings of the 2018 IEEE Global Engineering Education Conference (EDUCON 2018), 17-20 April 2018, Santa Cruz de Tenerife, Canary Islands, Spain. Forum messages in MOOCs (Massive Open Online Courses) are the most important source of information about the social interactions happening in these courses. Forum messages can be analyzed to detect patterns and learners' behaviors. In particular, sentiment analysis (e.g., classification into positive and negative messages) can be used as a first step towards identifying complex emotions, such as excitement, frustration or boredom. The aim of this work is to compare different machine learning algorithms for sentiment analysis, using a real case study to check how the results can provide information about learners' emotions or patterns in the MOOC. Both supervised and unsupervised (lexicon-based) algorithms were used for the sentiment analysis. The best approaches found were Random Forest and one lexicon-based method, which used dictionaries of words. The analysis of the case study also showed an evolution of positivity over time, with the best moment at the beginning of the course and the worst near the deadlines of peer-review assessments. This work has been co-funded by the Madrid Regional Government, through the eMadrid Excellence Network (S2013/ICE-2715); by the European Commission through the Erasmus+ projects MOOC-Maker (561533-EPP-1-2015-1-ES-EPPKA2-CBHE-JP), SHEILA (562080-EPP-1-2015-1-BE-EPPKA3-PI-FORWARD), and LALA (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP); and by the Spanish Ministry of Economy and Competitiveness, projects SNOLA (TIN2015-71669-REDT), RESET (TIN2014-53199-C3-1-R) and Smartlet (TIN2017-85179-C3-1-R). The latter is financed by the State Research Agency in Spain (AEI) and the European Regional Development Fund (FEDER). It has also been supported by the Spanish Ministry of Education, Culture and Sport, under an FPU fellowship (FPU016/00526).
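A lexicon-based classifier of the kind compared in this study can be sketched with plain word lists. The tiny dictionaries below are toy assumptions standing in for a real sentiment lexicon.

```python
# Toy sentiment word lists (assumptions, not a published lexicon)
POSITIVE = {"great", "good", "excellent", "helpful", "love", "enjoyed"}
NEGATIVE = {"bad", "confusing", "frustrating", "boring", "hate", "unclear"}

def lexicon_sentiment(text):
    """Classify a forum message as 'positive', 'negative', or 'neutral'
    by counting word matches against the two dictionaries."""
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Aggregating such per-message labels over course weeks is one simple way to plot the positivity-over-time curve the study describes.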

    Towards Student Engagement Analytics: Applying Machine Learning to Student Posts in Online Lecture Videos

    The use of online learning environments in higher education is becoming ever more prevalent with the inception of MOOCs (Massive Open Online Courses) and the increase in online and flipped courses at universities. Although the online systems used to deliver course content make education more accessible, students often express frustration with the lack of assistance during online lecture videos. Instructors express concern that students are not engaging with the course material in online environments, and rely on affordances within these systems to figure out what students are doing. With many online learning environments storing log data about students' usage of these systems, research into learning analytics (the measurement, collection, analysis, and reporting of data about learners and their contexts) can help inform instructors about student learning in the online context. This thesis aims to lay the groundwork for learning analytics that provide instructors with high-level student engagement data in online learning environments. Recent research has shown that instructors using these systems are concerned about their lack of awareness of student engagement, and educational psychology has shown that engagement is necessary for student success. Specifically, this thesis explores the feasibility of applying machine learning to categorize student posts by their level of engagement. These engagement categories are derived from the ICAP framework, which categorizes overt student behaviors into four tiers of engagement: Interactive, Constructive, Active, and Passive. Contributions include showing which natural language features are most indicative of engagement, exploring whether this machine learning method can be generalized to many courses, and using previous research to develop mockups of what analytics using data from this machine learning method might look like.
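To make the idea concrete, a post-level feature extractor along these lines might compute surface natural-language features to feed an ICAP-style engagement classifier. The specific features below are plausible assumptions, not the thesis's actual feature set.

```python
def engagement_features(post):
    """Extract simple surface features from a student post; each feature is
    an illustrative proxy for an ICAP-style engagement signal."""
    words = post.lower().split()
    return {
        "n_words": len(words),                # longer posts may be more constructive
        "asks_question": "?" in post,         # question-asking as an active signal
        "first_person": sum(w in {"i", "my", "me", "we", "our"}
                            for w in words),  # self-explanation signal
        "addresses_peer": any(w in {"you", "your"}
                              for w in words),  # interactive (peer-directed) signal
    }
```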

    Video-related pedagogical strategies in massive open online courses: A systematic literature review

    Get PDF
    For engineers who work with rapidly changing technology in multi-disciplinary teams, massive open online courses (MOOCs) offer the unique ability to deliver free, convenient professional development by providing up-to-date information spanning a wide range of disciplines. However, the MOOC boom has not been without its criticisms; many question the effectiveness of MOOCs. In response, many research studies are being conducted across the world to explore the effectiveness of various pedagogical approaches in MOOCs for different stakeholders. As videos constitute one of the most prominent features of MOOCs, it is important to analyse the empirical evidence of best practices for MOOC videos. Through a systematic literature review, we identify a series of important considerations and actions for three groups: instructional teams, video production teams, and platform developers. Considerations include instructor actions, content design and navigation, video style and length, production quality, video annotation tools, viewing options, and embedded assessments.