
    Supporting professional learning in a massive open online course

    Professional learning, combining formal and on-the-job learning, is important for developing and maintaining expertise in the modern workplace. To integrate formal and informal learning, professionals must have strong self-regulatory abilities. Formal learning opportunities are opening up through massive open online courses (MOOCs), which provide free and flexible access to formal education for millions of learners worldwide. MOOCs are therefore a potentially useful mechanism for supporting and enabling professional learning, offering opportunities to link formal and informal learning. However, there is limited understanding of their effectiveness as professional learning environments. Using self-regulated learning as a theoretical base, this study investigated the learning behaviours of health professionals within Fundamentals of Clinical Trials, a MOOC offered by edX. Thirty-five semi-structured interviews were conducted and analysed to explore how the design of this MOOC supported professional learning. The study highlights a mismatch between the learning intentions and the learning behaviour of professional learners in this course. While the learners are motivated to participate by specific role challenges, their learning effort is ultimately focused on completing course tasks and assignments. The study found little evidence of professional learners routinely relating the course content to their job role or work tasks, and little impact of the course on practice. This study adds to the overall understanding of learning in MOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how professional learning could be integrated with formal, online learning.

    Understanding Communication Patterns in MOOCs: Combining Data Mining and Qualitative Methods

    Massive Open Online Courses (MOOCs) offer unprecedented opportunities to learn at scale. Within a few years, the phenomenon of crowd-based learning has gained enormous popularity, with millions of learners across the globe participating in courses ranging from Popular Music to Astrophysics. They have captured the imaginations of many and attracted significant media attention, with The New York Times naming 2012 "The Year of the MOOC." For those engaged in learning analytics and educational data mining, MOOCs have provided an exciting opportunity to develop innovative methodologies that harness big data in education.
    Comment: Preprint of a chapter to appear in "Data Mining and Learning Analytics: Applications in Educational Research".

    A review on massive e-learning (MOOC) design, delivery and assessment

    MOOCs, or Massive Open Online Courses, based on Open Educational Resources (OER) may be one of the most versatile ways to offer access to quality education, especially for those residing in remote or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and identifying topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim of improving and personalizing the management, delivery, efficiency and evaluation of massive online courses on an individual basis.

    MOOCs Meet Measurement Theory: A Topic-Modelling Approach

    This paper adapts topic models to the psychometric testing of MOOC students based on their online forum postings. Measurement theory from education and psychology provides statistical models for quantifying a person's attainment of intangible attributes such as attitudes, abilities or intelligence. Such models infer latent skill levels by relating them to individuals' observed responses on a series of items, such as quiz questions. A set of items can be used to measure a latent skill if individuals' responses on them conform to a Guttman scale. Such well-scaled items differentiate between individuals, and the inferred levels span the entire range from the most basic to the most advanced. In practice, education researchers manually devise items (quiz questions) while optimising their conformance to the scale. Owing to the cost and expertise this process requires, psychometric testing has found limited use in everyday teaching. We aim to develop usable measurement models for highly instrumented MOOC delivery platforms by using participation in automatically extracted online forum topics as items. The challenge is to formalise the Guttman-scale educational constraint and incorporate it into topic models. To favour topics that automatically conform to a Guttman scale, we introduce a novel regularisation into non-negative matrix factorisation-based topic modelling. We demonstrate the suitability of our approach with both quantitative experiments on three Coursera MOOCs and a qualitative survey of topic interpretability on two MOOCs through domain expert interviews.
    Comment: 12 pages, 9 figures; accepted into AAAI'201
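    The paper's Guttman-scale regularisation is its own contribution and is not reproduced here, but the underlying non-negative matrix factorisation can be sketched. The update rule below is the standard Lee-Seung multiplicative update, and the tiny learner-by-term matrix is an illustrative assumption, not the authors' data.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3   # learner-topic weights
    H = rng.random((k, m)) + 1e-3   # topic-term weights
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy learner-by-term count matrix (illustrative only):
# rows = learners, columns = forum vocabulary terms.
V = np.array([[3, 2, 0, 0],
              [2, 3, 0, 1],
              [0, 0, 4, 2],
              [0, 1, 3, 3]], dtype=float)
W, H = nmf(V, k=2)
# Each row of W gives a learner's participation in each extracted topic;
# in the paper's setting, such participation scores serve as item responses.
print(np.round(W, 2))
```

    In the paper's formulation, an extra regularisation term is added to this factorisation so that the recovered topics order learners along a Guttman scale; the sketch above shows only the unregularised base model.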

    Functional Baby Talk: Analysis of Code Fragments from Novice Haskell Programmers

    What kinds of mistakes are made by novice Haskell developers as they learn about functional programming? Is it possible to analyze these errors in order to improve the pedagogy of Haskell? In 2016, we delivered a massive open online course that featured an interactive code evaluation environment. We captured and analyzed 161K interactions from learners. We report typical novice developer behavior; for instance, the mean time spent on an interactive tutorial is around eight minutes. Although our environment was restricted, we gained some understanding of Haskell novice errors. Parenthesis mismatches, lexical scoping errors and do-block misunderstandings are common. Finally, we make recommendations about how such beginner code evaluation environments might be enhanced.
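    The paper's analysis pipeline is not public here, but the parenthesis-mismatch class of errors it reports can be illustrated with a minimal bracket-balance checker. This helper is a hypothetical sketch, not code from the study, and the Haskell fragment it scans is an invented example of a typical novice slip.

```python
def unbalanced_brackets(src):
    """Return indices of unmatched brackets in a code fragment."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack, errors = [], []
    for i, ch in enumerate(src):
        if ch in '([{':
            stack.append((ch, i))
        elif ch in pairs:
            if stack and stack[-1][0] == pairs[ch]:
                stack.pop()
            else:
                errors.append(i)          # closer with no matching opener
    errors.extend(i for _, i in stack)    # openers never closed
    return errors

# A typical novice fragment: the '(' before the lambda is never closed.
print(unbalanced_brackets("map (\\x -> x + 1 [1,2,3]"))  # -> [4]
```

    A check like this only catches the purely syntactic mismatch class; the scoping and do-block misunderstandings the study reports require type- and scope-aware analysis.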