
    Quality Frameworks for MOOCs

    The hype surrounding MOOCs has been tempered by scepticism about their quality. Possible flaws of MOOCs include the quality of the pedagogies employed, low completion rates and a failure to deliver on the promise of inclusive and equitable quality education for all. On the other hand, MOOCs have given a boost to open and online education, have become a symbol of a larger modernisation agenda for universities, and are perceived as tools for universities to improve the quality of blended and online education, both in degree education and in Continuing Professional Development. MOOC provision is also much more open to external scrutiny as part of a stronger globalising higher education market. This has important consequences for quality frameworks and quality processes that go beyond the individual MOOC. In this context, different quality approaches are discussed, including possible measures at different levels and the tension between product and process models. Two case studies are described, one at the institutional level (The Open University) and one at the MOOC platform level (FutureLearn), and how the two intertwine is discussed. The importance of a national or international quality framework that carries a certification or label is illustrated with the OpenupEd Quality label; both the label itself and its practical use are described in detail. These examples illustrate that MOOCs require quality assurance processes tailored to e-learning and open education and embedded in institutional frameworks. The increasing unbundling of educational services may require additional quality processes.

    Crowdsourcing the identification of organisms: a case-study of iSpot

    Accurate species identification is fundamental to biodiversity science, but the natural history skills required for this are neglected in formal education at all levels. In this paper we describe how the web application ispotnature.org and its sister site ispot.org.za (collectively, “iSpot”) are helping to solve this problem by combining learning technology with crowdsourcing to connect beginners with experts. Over 94% of observations submitted to iSpot receive a determination. External checking of a sample of 3,287 iSpot records verified >92% of them. By mid-2014, iSpot had crowdsourced the identification of 30,000 taxa (>80% at species level) in >390,000 observations, with a global community numbering >42,000 registered participants. More than half the observations on ispotnature.org were named within an hour of submission. iSpot uses a unique, 9-dimensional reputation system to motivate and reward participants and to verify determinations. Taxon-specific reputation points are earned when a participant proposes an identification that achieves agreement from other participants, weighted by the agreers’ own reputation scores for the taxon. This system is able to discriminate effectively between competing determinations when two or more are proposed for the same observation. In 57% of such cases the reputation system improved the accuracy of the determination, while in the remainder it either improved precision (e.g. by adding a species name to a genus) or revealed false precision, for example where a determination to species level was not supported by the available evidence. We propose that the success of iSpot arises from the structure of its social network, which efficiently connects beginners and experts, overcoming the social as well as geographic barriers that normally separate the two.
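
    The abstract only outlines the reputation-weighted agreement mechanism. As a rough illustration of the general idea (not iSpot's actual nine-dimensional system), the sketch below scores competing determinations by summing the taxon-specific reputation of each participant who agrees with them; the names, data and simple additive weighting are assumptions for illustration.

```python
# Minimal sketch of reputation-weighted agreement between competing
# determinations. This is an illustrative assumption, not iSpot's algorithm.
from collections import defaultdict

def pick_determination(proposals, reputation):
    """proposals: determination -> list of user ids who agreed with it
    reputation: user id -> taxon-specific reputation score
    Returns the determination with the highest reputation-weighted support."""
    scores = defaultdict(float)
    for determination, agreers in proposals.items():
        for user in agreers:
            # each agreement counts in proportion to the agreer's reputation
            scores[determination] += reputation.get(user, 1.0)
    return max(scores, key=scores.get)

# Hypothetical example: two competing identifications of one observation.
proposals = {
    "Bombus terrestris": ["alice", "bob"],
    "Bombus lucorum": ["carol"],
}
reputation = {"alice": 1.0, "bob": 2.0, "carol": 8.0}  # carol is a recognised expert
print(pick_determination(proposals, reputation))  # -> "Bombus lucorum"
```

    In this toy case the single agreement from a high-reputation participant outweighs two agreements from beginners, which captures the sense in which such a system can discriminate between competing determinations rather than simply counting votes.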