
    Massive open online course completion rates revisited: Assessment, length and attrition

    This analysis is based on enrolment and completion data collected for 221 Massive Open Online Courses (MOOCs). It extends previously reported work (Jordan, 2014) with an expanded dataset, adding a multiple regression analysis of the factors that affect completion rates and an analysis of attrition during courses. Completion rates (defined as the percentage of enrolled students who completed the course) vary from 0.7% to 52.1%, with a median value of 12.6%. Since MOOCs' inception, enrolments have fallen while completion rates have increased. Completion rates vary significantly with course length (longer courses have lower completion rates), start date (more recent courses have higher completion rates) and assessment type (courses using auto grading only have higher completion rates). For a sub-sample of courses where rates of active use and assessment submission across the course are available, the first and second weeks appear critical for achieving student engagement; after that point the proportions of active students and of those submitting assessments level out, with less than a 3% difference between them.
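    The quantities the abstract describes are straightforward to reproduce in outline. The sketch below is illustrative only, not the paper's code: it computes a per-course completion rate and fits a multiple regression on course length, start date and assessment type; the column names and numbers are hypothetical toy data.

```python
# Minimal sketch, assuming per-course enrolment/completion counts are available.
# Column names and values are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

courses = pd.DataFrame({
    "enrolled":     [43000, 18000, 65000,  9000, 27000, 12000, 54000, 31000],
    "completed":    [ 3100,  2400,  1900,  1100,  3600,  1500,  4300,  2800],
    "length_weeks": [    6,     4,    12,     5,     4,    10,     6,     8],
    "start_year":   [ 2012,  2013,  2013,  2014,  2014,  2012,  2013,  2014],
    "auto_graded":  [    1,     1,     0,     1,     1,     0,     1,     0],  # 1 = auto grading only
})

# Completion rate: completed students as a percentage of enrolments.
courses["completion_rate"] = 100 * courses["completed"] / courses["enrolled"]
print("median completion rate:", courses["completion_rate"].median())

# Multiple regression of completion rate on length, start date and assessment type.
model = smf.ols("completion_rate ~ length_weeks + start_year + auto_graded",
                data=courses).fit()
print(model.summary())
```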

    Together we stand, Together we fall, Together we win: Dynamic Team Formation in Massive Open Online Courses

    Massive Open Online Courses (MOOCs) offer a new, scalable paradigm for e-learning, giving students global exposure and opportunities to connect and interact with millions of people around the world. Students often work in teams to accomplish course-related tasks effectively. However, the lack of face-to-face interaction makes it difficult for MOOC students to collaborate, and instructors face challenges in manually organizing students into teams because students flock to these MOOCs in huge numbers. The proposed research therefore aims to develop a robust methodology for dynamic team formation in MOOCs, with a theoretical framework grounded in the confluence of organizational team theory, social network analysis and machine learning. A prerequisite for such an undertaking is recognizing that every informal tie established among students offers an opportunity to influence and be influenced. We therefore aim to extract value from the inherent connectedness of students in the MOOC. These connections carry radical implications for the way students understand each other in the networked learning community. Our approach will enable course instructors to automatically group students into teams whose social connections with their peers are fairly balanced, defined in terms of appropriately selected qualitative and quantitative network metrics. Comment: In Proceedings of 5th IEEE International Conference on Application of Digital Information & Web Technologies (ICADIWT), India, February 2014 (6 pages, 3 figures).
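    As a rough illustration of the kind of grouping the abstract describes, the sketch below balances a single network metric (degree centrality) across teams with a greedy assignment. This is not the authors' method, and the student ties are invented; it only shows one way teams with comparable levels of social connectedness might be formed from an informal-tie graph.

```python
# Minimal sketch, assuming an undirected graph of informal student ties.
# The ties, names, and choice of metric are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ana", "bo"), ("ana", "cy"), ("bo", "cy"), ("cy", "dee"),
    ("dee", "eli"), ("eli", "fay"), ("fay", "gus"), ("gus", "ana"),
])

n_teams = 2
centrality = nx.degree_centrality(G)

# Greedy balancing: take students from most to least central and always
# add the next student to the team with the lowest total centrality so far.
teams = [[] for _ in range(n_teams)]
totals = [0.0] * n_teams
for student in sorted(centrality, key=centrality.get, reverse=True):
    i = min(range(n_teams), key=lambda t: totals[t])
    teams[i].append(student)
    totals[i] += centrality[student]

for i, team in enumerate(teams, 1):
    print(f"Team {i}: {team} (total centrality {totals[i - 1]:.2f})")
```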

    Pedagogical strategies and technologies for peer assessment in Massively Open Online Courses (MOOCs)

    Peer assessment has been mooted as an effective strategy for scaling up higher education and its core values to the proportions envisaged in the idea of Massively Open Online Courses (MOOCs). If this is to become reality, what role will academic technologies play? What technologies will we need to provide? What learning design strategies and patterns will those technologies need to enable? This paper explores the potential role of peer assessment in MOOCs, so as to get an informed sense of technology requirements. However, as will be seen, three of the four elements in the title “pedagogical strategies and technologies for peer assessment in MOOCs” vary radically for both practical and philosophical reasons, with significant implications for technology requirements. Worse still, the picture is evolving in non-linear relation to new technologies and MOOC initiatives. An overview of the various trends and differences is useful, but not conclusive. At points in the text, learning design strategies, patterns and technologies are mentioned as possible ways in which peer assessment in MOOCs of various kinds might be implemented. These cases are highlighted in bold so as to stand out. In some cases they are also developed into simple design patterns, described in Appendix A. It should be noted, however, that they should be read within the wider pedagogical contexts in which they appear in the main body of the text.