Quality in MOOCs: Surveying the Terrain
The purpose of this review is to identify quality measures and to highlight some of the tensions surrounding notions of quality, as well as the need for new ways of thinking about and approaching quality in MOOCs. It draws on the literature on both MOOCs and quality in education more generally in order to provide a framework for thinking about quality and the different variables and questions that must be considered when conceptualising quality in MOOCs. The review adopts a relativist approach, positioning quality as a measure for a specific purpose. It draws upon Biggs’s (1993) 3P model to explore notions and dimensions of quality in relation to MOOCs through presage, process and product variables, which correspond to an input–environment–output model. The review brings together literature examining how quality should be interpreted and assessed in MOOCs at a more general and theoretical level, as well as empirical research studies that explore how these ideas about quality can be operationalised, including the measures and instruments that can be employed. What emerges from the literature are the complexities involved in interpreting and measuring quality in MOOCs and the importance of both context and perspective to discussions of quality.
Replication in Massive Open Online Course Research Using the MOOC Replication Framework
The purpose of this dissertation was to develop and use a platform that facilitates Massive Open Online Course (MOOC) replication research. Replication and the verification of previously published findings is an essential step in the scientific process. Unfortunately, a replication crisis has long plagued scientific research, affecting even the field of education. As a result, the validity of more and more published findings is coming into question. Research on MOOCs has not been exempt from this. Due to a number of technical barriers, the MOOC literature suffers from issues such as contradictory findings across published works and results unconsciously skewed by overfitting to single datasets. The MOOC Replication Framework (MORF) was developed to allow researchers to bypass these technical barriers. Researchers are able to design their own MOOC analyses and have MORF conduct them across its massive store of MOOC data. The first study in this dissertation describes the work that went into building the platform that would eventually become MORF, and reports a feasibility study investigating whether the platform could perform the tasks it was built for. This was done through the replication of previously published findings within a single dataset. The second study describes MORF's initial architecture and demonstrates its feasibility at scale through a large-scale replication study conducted against data from an entire university's roster of MOOCs. Finally, the third study highlights how MORF's architecture allows for the execution of more than just replication studies.
This was done through a novel research study that analyzed the generalizability of predictive models of completion across the countries present in MORF's expansive dataset, an important issue to address given the massive enrollment numbers of MOOCs from all around the world.
Examining engagement: analysing learner subpopulations in massive open online courses (MOOCs)
Massive open online courses (MOOCs) are now being used across the world to provide millions of learners with access to education. Many learners complete these courses successfully, or to their own satisfaction, but the high numbers who do not finish remain a subject of concern for platform providers and educators. In 2013, a team from Stanford University analysed engagement patterns on three MOOCs run on the Coursera platform. They found four distinct patterns of engagement that emerged from MOOCs based on videos and assessments. However, not all platforms take this approach to learning design. Courses on the FutureLearn platform are underpinned by a social-constructivist pedagogy, which includes discussion as an important element. In this paper, we analyse engagement patterns on four FutureLearn MOOCs and find that only two of the clusters identified previously apply in this case. Instead, we see seven distinct patterns of engagement: Samplers, Strong Starters, Returners, Mid-way Dropouts, Nearly There, Late Completers and Keen Completers. This suggests that patterns of engagement in these massive learning environments are influenced by decisions about pedagogy. We also make some observations about approaches to clustering in this context.
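Engagement studies of this kind typically represent each learner as a vector of per-week activity and then group those vectors with a clustering algorithm such as k-means. The sketch below is illustrative only, not the method or data from the paper: it assumes hypothetical weekly activity counts per learner and uses a minimal pure-Python k-means to recover groups that loosely resemble patterns like Samplers, Strong Starters and Keen Completers.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over equal-length activity vectors (illustrative sketch)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each learner to the nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        centroids = [
            tuple(sum(vals) / len(vals) for vals in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical weekly step-completion counts for eight learners over four weeks.
learners = [
    (9, 8, 9, 9), (8, 9, 8, 9),   # high activity throughout ("Keen Completers")
    (7, 3, 0, 0), (8, 2, 1, 0),   # active early, then gone ("Strong Starters")
    (1, 0, 0, 0), (2, 0, 0, 0),   # brief initial look ("Samplers")
    (0, 1, 5, 9), (1, 0, 6, 8),   # ramp up late ("Late Completers")
]
centroids, clusters = kmeans(learners, k=4)
print([len(c) for c in clusters])
```

Real analyses would use richer features (comments posted, assessments attempted) and validate the number of clusters rather than fixing k in advance, but the loop above captures the basic assign-then-update structure.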