
    Informing the Design of Collaborative Activities in MOOCs using Actionable Predictions

    With the aim of supporting instructional designers in setting up collaborative learning activities in MOOCs, this paper derives prediction models for student participation in group discussions. The salient feature of these models is that they are built using only data available prior to the learning activity, and can thus provide actionable predictions, as opposed to the post-hoc approaches common in the MOOC literature. Some learning design scenarios that make use of this actionable information are illustrated.
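    As a rough illustration of the idea, the sketch below trains a classifier only on features logged before the activity starts, so predictions are available in time to act on them. All feature, file, and column names are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: train a participation predictor using only features
# logged BEFORE the collaborative activity starts. Feature, file, and
# column names are illustrative, not taken from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

PRE_ACTIVITY_FEATURES = ["videos_watched", "quizzes_attempted", "forum_posts_so_far"]

# Labels come from a past run of the course, where participation is known.
past = pd.read_csv("past_run.csv")
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past[PRE_ACTIVITY_FEATURES], past["participated_in_discussion"])

# Score the current cohort before the activity begins, so the design of
# the group activity can still be adapted to the prediction.
current = pd.read_csv("current_run.csv")
current["p_participate"] = model.predict_proba(current[PRE_ACTIVITY_FEATURES])[:, 1]
```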

    Generating actionable predictions regarding MOOC learners’ engagement in peer reviews

    Peer review is one approach to facilitate formative feedback exchange in MOOCs; however, it is often undermined by low participation. To support the effective implementation of peer reviews in MOOCs, this research work proposes several predictive models to accurately classify learners according to their expected engagement levels in an upcoming peer-review activity, which offers various pedagogical utilities (e.g. improving peer reviews and collaborative learning activities). Two approaches were used for training the models: in situ learning (in which an engagement indicator available at the time of the predictions is used as a proxy label to train a model within the same course) and transfer across courses (in which a model is trained using labels obtained from past course data). These techniques make it possible to produce predictions the instructor can act on while the course is still running, which is not possible with post-hoc approaches requiring the use of true labels. According to the results, both the transfer-across-courses and in situ learning approaches produced predictions that were actionable yet as accurate as those obtained with cross-validation, suggesting that they deserve further attention to create impact in MOOCs with real-world interventions. Potential pedagogical uses of the predictions were illustrated with several examples.
    Funding: European Union's Horizon 2020 research and innovation programme (Marie Sklodowska-Curie grant 793317); Ministerio de Ciencia, Innovación y Universidades (projects TIN2017-85179-C3-2-R / TIN2014-53199-C3-2-R); Junta de Castilla y León (grant VA257P18); Comisión Europea (grant 588438-EPP-1-2017-1-EL-EPPKA2-KA
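    The two training set-ups can be sketched as follows. This is a hedged illustration of the general idea only; the file names, column names, and proxy-label rule are assumptions, not details from the paper.

```python
# Illustrative sketch of the two training set-ups described above.
# File, column, and threshold choices are assumptions, not the paper's.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["steps_visited", "comments_posted", "days_active"]

# (a) Transfer across courses: train on true labels from a past course.
past = pd.read_csv("past_course.csv")
transfer_model = LogisticRegression(max_iter=1000)
transfer_model.fit(past[FEATURES], past["engaged_in_peer_review"])

# (b) In situ learning: within the running course, use an engagement
# indicator that is already observable (here, submissions to an earlier
# task) as a proxy label for the upcoming peer-review activity.
current = pd.read_csv("current_course.csv")
proxy_label = (current["earlier_task_submissions"] > 0).astype(int)
in_situ_model = LogisticRegression(max_iter=1000)
in_situ_model.fit(current[FEATURES], proxy_label)

# Either model can score learners while the course is still running.
current["expected_engagement"] = transfer_model.predict(current[FEATURES])
```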

    Characterizing Algorithmic Performance in Machine Learning for Education

    The integration of artificial intelligence (AI) in educational systems has revolutionized the field of education, offering numerous benefits such as personalized learning, intelligent tutoring, and data-driven insights. However, alongside this progress, concerns have arisen about potential algorithmic disparities and performance issues in AI applications for education. This doctoral thesis addresses these concerns and aims to foster the development of AI in educational contexts with an emphasis on performance analysis. The thesis begins by investigating the challenges and needs of the educational community in integrating responsible practices into AI-based educational systems. Through surveys and interviews with experts in the field, real-world needs and common areas for developing more responsible AI in education are identified. Building on these findings, the thesis delves into the analysis of student behavior in both synchronous and asynchronous learning environments. By examining patterns of student engagement and predicting student success, the thesis uncovers potential performance issues (e.g., unknown unknowns: cases where the model is highly confident in its predictions but actually wrong), emphasizing the need for nuanced approaches that consider hidden factors impacting students' learning outcomes. By providing an integrated view of the performance analyses conducted in different learning environments, the thesis offers a comprehensive understanding of the challenges and opportunities in developing responsible AI applications for education. Ultimately, this doctoral thesis contributes to the advancement of responsible AI in education, offering insights into the complexities of algorithmic disparities and their implications. The research work presented herein serves as a guiding framework for designing and deploying AI-enabled educational systems that prioritize responsibility and improved learning experiences.
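    For the "unknown unknowns" issue mentioned above, a minimal sketch of how such cases can be surfaced on held-out data is given below, assuming a fitted scikit-learn-style classifier; the confidence threshold and names are illustrative choices, not from the thesis.

```python
# Minimal sketch: flag "unknown unknowns" on held-out data, i.e. cases the
# model predicts with high confidence but gets wrong. Assumes a fitted
# scikit-learn-style classifier with integer class labels 0..k-1; the 0.9
# threshold is an arbitrary illustrative choice.
import numpy as np

def find_unknown_unknowns(model, X_test, y_test, threshold=0.9):
    proba = model.predict_proba(X_test)   # per-class probabilities
    confidence = proba.max(axis=1)        # model's top confidence per case
    predicted = proba.argmax(axis=1)
    wrong = predicted != np.asarray(y_test)
    return np.where(wrong & (confidence >= threshold))[0]
```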

    Learning analytics for the global south

    Learning Analytics for the Global South is a compilation of papers commissioned for the Digital Learning for Development (DL4D) project. DL4D is part of the Information Networks in Asia and Sub-Saharan Africa (INASSA) program, funded jointly by the International Development Research Centre (IDRC) of Canada and the Department for International Development (DFID) of the United Kingdom, and administered by the Foundation for Information Technology Education and Development (FIT-ED) of the Philippines. DL4D aims to examine how digital learning could be used to address issues of equity, quality, and efficiency at all educational levels in developing countries. Over the past two years, DL4D has brought together leading international and regional scholars and practitioners to critically assess the potentials, prospects, challenges, and future directions for the Global South in key areas of interest around digital learning. It commissioned discussion papers for each of these areas from leading experts in the field: Diana Laurillard of the University College London Knowledge Lab, for learning at scale; Chris Dede of Harvard University, for digital game-based learning; Charalambos Vrasidas of the Centre for the Advancement of Research and Development in Educational Technology, for cost-effective digital learning innovations; and for learning analytics, the subject of this compilation, Dragan Gašević of the University of Edinburgh Moray House School of Education and School of Informatics. Each discussion paper is complemented by responses from a developing-country perspective by regional experts in Asia, Latin America, Africa, and the Middle East. Learning Analytics for the Global South considers how the collection, analysis, and use of data about learners and their contexts have the potential to broaden access to quality education and improve the efficiency of educational processes and systems in developing countries around the world. In his discussion paper, Prof. Gašević articulates these potentials and suggests how learning analytics could support critical digital learning and education imperatives such as quality learning at scale and the acquisition of 21st century skills. Experts from Africa (Paul Prinsloo of the University of South Africa), Mainland China (Bodong Chen of the University of Minnesota, USA and Yizhou Fan of Peking University, People's Republic of China), Southeast Asia (Ma. Mercedes T. Rodrigo of the Ateneo de Manila University, Philippines), and Latin America (Cristóbal Cobo and Cecilia Aguerrebere, both of the Ceibal Foundation, Uruguay) situate Prof. Gašević's proposals in their respective regional contexts, framing their responses around six key questions:
    1. What are the main trends and challenges in education in your region?
    2. How can learning analytics address these challenges?
    3. What models of learning analytics adoption would be most effective in your region?
    4. What are the barriers to adoption of learning analytics in your region, and how could these be mitigated?
    5. How do you envision ethical use and privacy protection in connection with learning analytics being addressed in your region?
    6. How can the operationalization of learning analytics be future-proofed in your region?
    We hope that this compilation will serve as a springboard for deeper conversations about the adoption and sustained use of learning analytics in developing countries – its potential benefits and risks for learners, educators, and education systems, as well as the ways to move forward that are rigorous, context-appropriate, ethical, and accountable.
    This work was created with financial support from the UK Government's Department for International Development and the International Development Research Centre, Canada. The views expressed in this work are those of the authors and do not necessarily represent those of the UK Government's Department for International Development; the International Development Research Centre, Canada or its Board of Governors; the Foundation for Information Technology Education and Development; or the editors.

    Educational visions: The lessons from 40 years of innovation

    Educational Visions looks to future developments in educational technology by reviewing our history of computers and education, covering themes such as learning analytics and design, inquiry learning, citizen science, inclusion, and learning at scale. The book shows how successful innovations can be built over time, informs readers about current practice, and demonstrates how they can use this work themselves. This book is intended for anyone who is involved in the study and practice of technology-enhanced learning. It includes examples from informal learning, such as MOOCs and citizen science, as well as higher education. Although the foundations of this work are in the UK, its influence has spread worldwide, so it will be of interest internationally.

    Predicting Paid Certification in Massive Open Online Courses

    Massive open online courses (MOOCs) have been proliferating because of the free or low-cost offering of content for learners, attracting the attention of many stakeholders across the entire educational landscape. Since 2012, dubbed "the Year of the MOOC", several platforms have gathered millions of learners in just a decade. Nevertheless, the certification rate of both free and paid courses has been low: only about 4.5–13% of learners in free courses and 1–3% in paid courses obtain a certificate at the end of their courses. Still, most research concentrates on completion, ignoring the certification problem and especially its financial aspects. Thus, the research described in the present thesis aimed to investigate paid certification in MOOCs, for the first time, in a comprehensive way, and as early as the first week of the course, by exploring its various levels. First, the latent correlation between learner activities and their paid certification decisions was examined by (1) statistically comparing the activities of non-paying learners with those of course purchasers and (2) predicting paid certification using different machine learning (ML) techniques. Our temporal (weekly) analysis showed statistical significance at various levels when comparing the activities of non-paying learners with those of the certificate purchasers across the five courses analysed. Furthermore, we used the learners' activities (number of step accesses, attempts, correct and wrong answers, and time spent on learning steps) to build our paid certification predictor, which achieved promising balanced accuracies (BAs), ranging from 0.77 to 0.95. Having employed simple predictions based on a few clickstream variables, we then analysed in more depth what other information can be extracted from MOOC interaction (namely discussion forums) for paid certification prediction. However, to better explore the learners' discussion forums, we built, as an original contribution, MOOCSent, a cross-platform review-based sentiment classifier, trained using over 1.2 million sentiment-labelled MOOC reviews. MOOCSent addresses various limitations of current sentiment classifiers, including (1) reliance on a single source of data (previous literature on sentiment classification in MOOCs was based on single platforms only, and hence less generalisable, with a relatively low number of instances compared to our dataset); (2) limited model outputs, where most current models are two-class classifiers (positive or negative only); (3) disregard of important sentiment indicators, such as emojis and emoticons, during text embedding; and (4) reporting of average performance metrics only, preventing the evaluation of model performance at the level of class (sentiment). Finally, with the help of MOOCSent, we used the learners' discussion forums to predict paid certification, after annotating learners' comments and replies with sentiment. This multi-input model contains raw data (learner textual inputs), the sentiment classification generated by MOOCSent, computed features (number of likes received for each textual input), and several features extracted from the texts (character counts, word counts, and part-of-speech (POS) tags for each textual instance).
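    For reference, balanced accuracy, the metric reported above, is the mean of per-class recall, which makes it robust to the heavy class imbalance between the few certificate purchasers and the many non-paying learners. A minimal sketch with toy labels (not data from the thesis):

```python
# Balanced accuracy (BA) is the mean of per-class recall; toy labels only.
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 0, 1, 1]   # 1 = purchased a certificate
y_pred = [0, 0, 0, 1, 1, 0]
print(balanced_accuracy_score(y_true, y_pred))
# For binary labels: BA = (TPR + TNR) / 2 = (0.5 + 0.75) / 2 = 0.625
```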
    This experiment adopted various deep predictive approaches, specifically those that allow a multi-input architecture, to investigate early (i.e., weekly) whether data obtained from MOOC learners' interaction in discussion forums can predict learners' purchase decisions (certification). Considering the staggeringly low rate of paid certification in MOOCs, the present thesis contributes to the knowledge and field of MOOC learner analytics by predicting paid certification, for the first time, at such a comprehensive (with data from over 200 thousand learners across five courses from different disciplines), actionable (analysing learners' decisions from the first week of the course) and longitudinal (with 23 runs from 2013 to 2017) scale. The present thesis contributes by (1) investigating various conventional and deep ML approaches for predicting paid certification in MOOCs using learner clickstreams (Chapter 5) and course discussion forums (Chapter 7); (2) building the largest MOOC sentiment classifier (MOOCSent), based on learners' reviews of courses from the leading MOOC platforms, namely Coursera, FutureLearn and Udemy, which handles emojis and emoticons using dedicated lexicons containing over three thousand corresponding explanatory words/phrases; and (3) proposing and developing, for the first time, a multi-input model for predicting certification based on data from discussion forums, which synchronously processes the textual (comments and replies) and numerical (numbers of likes posted and received, sentiments) data from the forums, adapting a suitable classifier to each type of data, as explained in detail in Chapter 7.
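    A multi-input architecture of the kind described above can be sketched as follows. This is an assumed, minimal Keras illustration, not the thesis's exact model; layer sizes, vocabulary size, and input names are placeholders.

```python
# Assumed, minimal multi-input sketch (Keras), not the thesis's exact model:
# one branch for tokenised forum text, one for numeric features such as
# like counts and MOOCSent sentiment scores. Sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_FEATURES = 20000, 200, 3

# Branch 1: comment/reply text as padded token-id sequences.
text_in = layers.Input(shape=(MAX_LEN,), name="text")
x = layers.Embedding(VOCAB_SIZE, 64)(text_in)
x = layers.Bidirectional(layers.LSTM(32))(x)

# Branch 2: numeric features (likes received, sentiment, word count, ...).
num_in = layers.Input(shape=(NUM_FEATURES,), name="numeric")
y = layers.Dense(16, activation="relu")(num_in)

# Merge the branches and predict the purchase decision.
out = layers.Dense(1, activation="sigmoid")(layers.concatenate([x, y]))

model = tf.keras.Model(inputs=[text_in, num_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```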

    The Big Five: Addressing Recurrent Multimodal Learning Data Challenges

    The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handling multimodal data. In this paper, we describe five recurrent challenges in the analysis of multimodal data: data collection, storage, annotation, processing, and exploitation. For each of these challenges, we envision possible solutions. The prototypes for some of the proposed solutions will be discussed during the Multimodal Challenge of the fourth Learning Analytics & Knowledge Hackathon, a two-day hands-on workshop in which the authors will open up the prototypes for trials, validation and feedback.
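    As a hedged illustration of what a standardised container for the storage and annotation challenges might look like, the sketch below bundles time-aligned sensor streams with human annotations in one session record; all type and field names are assumptions, not the solutions proposed in the paper.

```python
# Hypothetical container for the storage and annotation challenges: one
# session bundles time-aligned sensor streams with human annotations.
# Field names are assumptions, not the solution proposed in the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorStream:
    modality: str            # e.g. "accelerometer", "heart_rate"
    sample_rate_hz: float
    samples: List[float]     # raw readings, aligned to the session start

@dataclass
class Annotation:
    start_s: float           # offset from session start, in seconds
    end_s: float
    label: str               # e.g. "correct posture"

@dataclass
class RecordingSession:
    learner_id: str
    streams: List[SensorStream] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)
```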

    Multimodal Challenge: Analytics Beyond User-computer Interaction Data

    This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims at shifting the focus away from learning situations that can be easily traced through user-computer interaction data and towards user-world interaction events, typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The "multimodal" approach consists of combining learners' motoric actions with physiological responses and data about the learning contexts. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will confront three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storage, 2) data annotation, and 3) data processing and exploitation. Some of the research questions that will be considered in this Hackathon challenge are the following: How can the raw sensor data streams be processed and relevant features extracted? Which data mining and machine learning techniques can be applied? How can two action recordings be compared? How can sensor data be combined with the Experience API (xAPI)? What are meaningful visualisations for these data?
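    One of the questions above concerns combining sensor data with xAPI. A minimal sketch of a sensor reading wrapped in an xAPI statement is shown below; the verb and extension IRIs are illustrative placeholders, not registered xAPI vocabulary.

```python
# Minimal sketch of a sensor reading wrapped in an xAPI statement. The verb
# and extension IRIs are illustrative placeholders, not registered vocabulary.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "Learner 42"},
    "verb": {"id": "http://example.org/verbs/performed",
             "display": {"en-US": "performed"}},
    "object": {"id": "http://example.org/activities/cpr-compression",
               "definition": {"name": {"en-US": "CPR chest compression"}}},
    "result": {"extensions": {
        "http://example.org/ext/compression-depth-mm": 52,
        "http://example.org/ext/accelerometer-peak-g": 1.8}},
    "timestamp": "2018-03-05T10:15:30Z",
}
print(json.dumps(statement, indent=2))
```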