Innovating for Learning: Designing for the Future of Education
Teaching has moved online as the world has moved online, and learning is losing its sense of physical location with the availability of many different options, from mobile to MOOC (Massive Open Online Course). The impact of online learning is not confined to distance learning; when a student attends a campus university they are now as likely to meet with their fellow learners virtually as face to face. The education sector has yet to fully adapt to what this means, and indeed there are strong signs of a built-in resilience from providers, employers and students themselves, which may mean an apparent evolution is more likely than a revolution. At the same time, there are some quiet changes underway that mean we should be preparing to innovate for the revolution to come. Some of those changes are considered in work undertaken at The Open University that has been disseminated in a series of Innovating Pedagogy reports. These reports allow the academic authors to be more speculative than is usual practice and to engage in considering the future, while remaining grounded in a view of what is happening in the sector. In particular they adopt a position focused on pedagogy that counterbalances the technology-based futurology that can dominate yet fail to resonate with those actually involved in the teaching process. The annual Innovating Pedagogy reports cover 10 topics each, with some deliberate overlap from year to year and development of themes that show innovations moving into teaching practice. This is illustrated by two cases: the impact of MOOCs and the application of learning design and analytics. The development of MOOCs demonstrates the value of reviewing pedagogy that aligns with technology, while the use of learning design and learning analytics demonstrates how improvements in the way we describe our learning processes and the way we understand learner behaviour are helping to determine how choices in pedagogy impact on student satisfaction, progression and success.
Disciplinary and Didactic Profiles in EduOpen Network MOOCs
This paper describes the quantitative and qualitative characteristics of the massive open online courses (MOOCs) available in the EduOpen platform. In particular, data (analytics) concerning the variables of didactic discipline and didactic structuring are presented to identify main trend lines and potential critical aspects. Useful elements emerge to enhance our understanding of the main characteristics of the MOOCs offered by the EduOpen network, in particular: a) the quantitative dimensions of MOOC supply and demand, in which a greater flow of enrolment towards courses of a scientific and technological nature is evident; b) the degree of didactic structuring of the courses, where the presence of assessment tools appears to be the element that especially characterises the didactic structure of the EduOpen MOOCs. The conclusions suggest awareness-raising actions to build dashboards that can report critical issues and necessary actions to instructors and students in real time, and therefore provide useful guidance both to prevent risky situations and to support teachers in the design and development of new courses.
Quality in MOOCs: Surveying the Terrain
The purpose of this review is to identify quality measures and to highlight some of the tensions surrounding notions of quality, as well as the need for new ways of thinking about and approaching quality in MOOCs. It draws on the literature on both MOOCs and quality in education more generally in order to provide a framework for thinking about quality and the different variables and questions that must be considered when conceptualising quality in MOOCs. The review adopts a relativist approach, positioning quality as a measure for a specific purpose. The review draws upon Biggs’s (1993) 3P model to explore notions and dimensions of quality in relation to MOOCs — presage, process and product variables — which correspond to an input–environment–output model. The review brings together literature examining how quality should be interpreted and assessed in MOOCs at a more general and theoretical level, as well as empirical research studies that explore how these ideas about quality can be operationalised, including the measures and instruments that can be employed. What emerges from the literature are the complexities involved in interpreting and measuring quality in MOOCs, and the importance of both context and perspective to discussions of quality.
Learning in MOOCs: The [Un]democratisation of Learning
Massive open online courses have been signaled as a disruptive and democratizing force in online, distance education. This position paper critiques these claims, examining the tensions between viewing MOOCs as products and students as customers, and the perspective of students as learners who may, or may not, be able to determine their own learning pathway. The capacity, or lack thereof, to self-regulate learning leads to inequalities in the ways learners experience MOOCs. While some MOOCs have contributed to change, many replicate and reinforce forms of education that privilege the elite. This paper argues for the need to support the development of digital skills and core competencies, including the ability to self-regulate learning, to ensure learners can participate in a new democracy of open, online learning.
A review on massive e-learning (MOOC) design, delivery and assessment
MOOCs, or Massive Open Online Courses, based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in remote or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting out interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim of improving and personalizing the management, delivery, efficiency and evaluation of massive online courses on an individual basis.
EU–originated MOOCs, with focus on multi- and single-institution platforms
No abstract available
Dropout Model Evaluation in MOOCs
The field of learning analytics needs to adopt a more rigorous approach to predictive model evaluation that matches the complex practice of model building. In this work, we present a procedure to statistically test hypotheses about model performance which goes beyond the state of the practice in the community, analyzing both algorithms and feature extraction methods from raw data. We apply this method to a series of algorithms and feature sets derived from a large sample of Massive Open Online Courses (MOOCs). While a complete comparison of all potential modeling approaches is beyond the scope of this paper, we show that this approach reveals a large gap in dropout prediction performance between forum-, assignment-, and clickstream-based feature extraction methods: the latter is significantly better than the former two, which are in turn indistinguishable from one another. This work has methodological implications for evaluating predictive or AI-based models of student success, and practical implications for the design and targeting of at-risk student models and interventions.
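The core of the procedure this abstract describes — statistically testing whether one feature-extraction method outperforms another — can be illustrated with a minimal sketch. The sketch below runs a standard paired t-test over per-fold scores; the paper's actual procedure is more elaborate, and the AUC values, fold count, and feature-set names here are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: comparing two feature-extraction methods for dropout
# prediction via a paired t-test over cross-validation folds. All scores are
# illustrative, not results from the paper.
import math
import statistics

def paired_t_test(scores_a, scores_b):
    """Return (t statistic, degrees of freedom) for paired per-fold scores.

    A positive t favors scores_a; compare |t| against the critical value
    for the returned degrees of freedom.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)          # sample standard deviation
    t = mean_diff / (sd_diff / math.sqrt(n))   # standard paired t statistic
    return t, n - 1

# Made-up per-fold AUCs for clickstream- vs. forum-based features.
clickstream_auc = [0.83, 0.85, 0.82, 0.86, 0.84]
forum_auc       = [0.72, 0.73, 0.70, 0.75, 0.71]

t, df = paired_t_test(clickstream_auc, forum_auc)
print(f"t = {t:.2f} with {df} degrees of freedom")
```

Pairing the scores fold by fold, rather than comparing two independent means, removes fold-to-fold variance from the comparison, which is why this style of test is common when the same cross-validation splits are reused across models.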