
    Predicting Learners' Success in a Self-paced MOOC Through Sequence Patterns of Self-regulated Learning

    Proceedings of: 13th European Conference on Technology Enhanced Learning, EC-TEL 2018, Leeds, UK, September 3-5, 2018. In past years, predictive models in Massive Open Online Courses (MOOCs) have focused on forecasting learners' success through their grades. Predicting these grades is useful for identifying problems that might lead to dropout. However, most models in prior work predict categorical and continuous variables using low-level data. This paper extends current predictive models in the literature by considering coarse-grained variables related to Self-Regulated Learning (SRL), namely learners' self-reported SRL strategies and MOOC activity sequence patterns, as predictors. Linear and logistic regression modelling were used as a first approach to prediction, with data collected from N = 2,035 learners who took a self-paced MOOC on Coursera. We identified two groups of learners: (1) Comprehensive, who follow the course path designed by the teacher; and (2) Targeting, who seek out the information required to pass the assessments. For both types of learners, we found the most predictive variables to be: (1) the self-reported SRL strategies 'goal setting', 'strategic planning', 'elaboration' and 'help seeking'; (2) the activity sequence patterns 'only assessment', 'complete a video-lecture and try an assessment', 'explore the content' and 'try an assessment followed by a video-lecture'; and (3) learners' prior experience, together with their self-reported interest in course assessments, number of active days and time spent on the platform. These results show how to predict more accurately when students reach a certain status by taking into consideration not only low-level data but also more complex data such as their SRL strategies. This work was supported by FONDECYT (Chile) under project initiation grant No. 11150231, the MOOC-Maker Project (561533-EPP-1-2015-1-ES-EPPKA2-CBHE-JP), the LALA Project (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP), CONICYT/DOCTORADO NACIONAL 2016/21160081, the Spanish Ministry of Education, Culture and Sport under an FPU fellowship (FPU016/00526), and the Spanish Ministry of Economy and Competitiveness (Smartlet project, grant number TIN2017-85179-C3-1-R), funded by the Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER).
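    As a rough illustration of the modelling approach described above, the sketch below fits linear and logistic regression models on SRL-related predictors. It is a minimal sketch, not the study's code: the file name, column names and outcome variables (e.g. mooc_learners.csv, final_grade, passed) are hypothetical placeholders that merely mirror the predictors named in the abstract.

```python
# Minimal sketch (not the study's code): predicting MOOC success from
# self-reported SRL strategies and activity-sequence counts.
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical feature names loosely mirroring the predictors in the abstract.
FEATURES = [
    "goal_setting", "strategic_planning", "elaboration", "help_seeking",   # self-reported SRL
    "seq_only_assessment", "seq_video_then_assessment",                    # activity-sequence patterns
    "seq_explore_content", "seq_assessment_then_video",
    "prior_experience", "interest_in_assessments",
    "active_days", "time_on_platform",
]

df = pd.read_csv("mooc_learners.csv")  # hypothetical per-learner dataset
X = df[FEATURES]

# Continuous outcome (e.g. final grade) -> linear regression.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["final_grade"], random_state=42)
lin = LinearRegression().fit(X_tr, y_tr)
print("R^2 on held-out learners:", lin.score(X_te, y_te))

# Categorical outcome (e.g. passed / did not pass) -> logistic regression.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["passed"], random_state=42)
log = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("Accuracy on held-out learners:", log.score(X_te, y_te))
```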

    Online environments for supporting learning analytics in the flipped classroom: a scoping review


    Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

    Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs that differ in one educationally relevant aspect and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that quantitatively, explainers significantly disagree with each other about what is important, and qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers. Accepted as a full paper at LAK 2023: The 13th International Learning Analytics and Knowledge Conference, March 13-17, 2023, Arlington, Texas, US.
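    The sketch below illustrates, under assumed data and models, how instance-based explanations from SHAP and LIME can be produced for the same student and their disagreement quantified with a simple distance. It is not the paper's pipeline (see the linked repository for that): the synthetic features, the RandomForestClassifier stand-in for a neural network, and the cosine distance are illustrative choices, and output shapes can vary across shap versions.

```python
# Minimal sketch (assumed setup): SHAP and LIME attributions for one student
# in a success-prediction model, plus a simple disagreement measure.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from scipy.spatial.distance import cosine
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: rows = students, columns = behavioural features.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)
feature_names = ["videos_watched", "quiz_attempts", "forum_posts",
                 "regularity", "time_online", "late_submissions"]

model = RandomForestClassifier(random_state=0).fit(X, y)
student = X[0]

# SHAP attributions for the "success" class (shape convention of recent shap versions).
shap_explainer = shap.Explainer(model.predict_proba, X)
shap_vals = shap_explainer(student.reshape(1, -1)).values[0, :, 1]

# LIME attributions for the same instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(student, model.predict_proba,
                                           num_features=len(feature_names))
lime_vals = np.zeros(len(feature_names))
for idx, weight in lime_exp.as_map()[1]:
    lime_vals[idx] = weight

# One way to quantify how much the two explainers disagree.
print("Cosine distance between SHAP and LIME attributions:", cosine(shap_vals, lime_vals))
```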

    Learning Analytics in Flipped Classrooms: a Scoping Review


    Using Network-Text Analysis to Characterise Learner Engagement in Active Video Watching

    Video is becoming more and more popular as a learning medium in a variety of educational settings, ranging from flipped classrooms to MOOCs to informal learning. The prevailing educational use of video is watching prepared videos, which calls for accompanying video watching with activities that promote constructive learning. In the Active Video Watching (AVW) approach, learner engagement during video watching is induced via interactive notetaking, similar to video commenting on social video-sharing platforms. This coincides with the JuxtaLearn practice, in which student-created videos were shared on a social networking platform and commented on by other students. Drawing on the experience of both AVW and JuxtaLearn, we combine and refine analysis techniques to characterise learner engagement. The approach draws on network-text analysis of learner-generated comments as a basis. This allows for capturing pedagogically relevant aspects of divergence, convergence and (dis)continuity in textual commenting behaviour related to different learner types. The lexical-semantic analytics approach using learner-generated artefacts provides deep insights into learner engagement and has broader application in video-based learning environments.
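    A minimal sketch of the general idea behind network-text analysis of learner comments, assuming toy comments and a naive tokeniser rather than the authors' actual pipeline: terms that co-occur within a comment are linked, and central terms in the resulting network hint at converging themes.

```python
# Minimal sketch (toy data): a term co-occurrence network over learner comments.
import itertools
import networkx as nx

comments = [
    "the derivative measures the rate of change",
    "i did not understand the chain rule example",
    "the chain rule connects the rate of change of nested functions",
]
stopwords = {"the", "i", "of", "did", "not", "a"}

G = nx.Graph()
for comment in comments:
    terms = {t for t in comment.lower().split() if t not in stopwords}
    # Link every pair of terms that co-occur within the same comment.
    for a, b in itertools.combinations(sorted(terms), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Highly central terms suggest converging themes; peripheral ones, divergence.
centrality = nx.degree_centrality(G)
for term, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{term}: {score:.2f}")
```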

    The Engage Taxonomy: SDT-based measurable engagement indicators for MOOCs and their evaluation

    Massive Open Online Course (MOOC) platforms are considered a distinctive way to deliver a modern educational experience, open to a worldwide public. However, student engagement in MOOCs is a less explored area, even though MOOCs suffer from some of the highest dropout rates among learning environments in general, and e-learning environments in particular. A special challenge in this area is finding early, measurable indicators of engagement. This paper tackles this issue with a unique blend of data analytics, NLP and machine learning techniques, together with a solid foundation in psychological theories. Importantly, we show for the first time how Self-Determination Theory (SDT) can be mapped onto concrete features extracted from tracking student behaviour on MOOCs. We map the dimensions of Autonomy, Relatedness and Competence, leading to methods to characterise engaged and disengaged MOOC student behaviours and to explore what triggers and promotes MOOC students' interest and engagement. The paper further contributes by building the Engage Taxonomy, the first taxonomy of MOOC engagement tracking parameters, mapped over four engagement theories: SDT, Drive, ET and the Process of Engagement. Moreover, we define and analyse students' engagement tracking with a larger-than-usual body of content (6 MOOC courses from two different universities with 26 runs spanning 2013 to 2018) and students (initially around 218,235). Importantly, the paper also serves as the first large-scale evaluation of SDT itself, providing a blueprint for large-scale theory evaluation. It also provides, for the first time, metrics for measurable engagement in MOOCs, including specific measures for Autonomy, Relatedness and Competence, and evaluates these against existing (and expanded) measures of success in MOOCs: completion rate, correct answer ratio and reply ratio. In addition, to further illustrate the use of the proposed SDT metrics, this study is the first to use SDT constructs extracted from the first week to predict active and non-active students in the following week.
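    To illustrate the kind of analysis described in the last sentence, the sketch below derives simple week-1 proxies for Autonomy, Relatedness and Competence from a hypothetical clickstream log (mooc_events.csv with student_id, week, step_id, event_type and correct columns, all assumed) and uses them to predict whether a student is active in week 2. These proxies are illustrative stand-ins, not the paper's Engage Taxonomy metrics.

```python
# Minimal sketch (hypothetical log schema): week-1 SDT-style proxies
# predicting whether a student is still active in week 2.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical event log: one row per event, with columns
# student_id, week, step_id, event_type, correct.
log = pd.read_csv("mooc_events.csv")

week1 = log[log["week"] == 1]
features = week1.groupby("student_id").apply(
    lambda g: pd.Series({
        # Autonomy proxy: how many distinct steps the learner chose to visit.
        "autonomy_distinct_steps": g["step_id"].nunique(),
        # Relatedness proxy: comments posted in discussions.
        "relatedness_posts": (g["event_type"] == "comment").sum(),
        # Competence proxy: share of correct quiz answers.
        "competence_correct_ratio": g.loc[g["event_type"] == "quiz", "correct"].mean(),
    })
).fillna(0)

# Label: did the student generate any activity in week 2?
active_week2_ids = log.loc[log["week"] == 2, "student_id"].unique()
labels = features.index.to_series().isin(active_week2_ids).astype(int)

clf = LogisticRegression().fit(features, labels)
print(dict(zip(features.columns, clf.coef_[0])))
```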