
    Methods for Ordinal Peer Grading

    MOOCs have the potential to revolutionize higher education with their wide outreach and accessibility, but they require instructors to come up with scalable alternatives to traditional student evaluation. Peer grading -- having students assess each other -- is a promising approach to tackling the problem of evaluation at scale, since the number of "graders" naturally scales with the number of students. However, students are not trained in grading, which means that one cannot expect the same level of grading skill as in traditional settings. Drawing on broad evidence that ordinal feedback is easier to provide and more reliable than cardinal feedback, it is therefore desirable to allow peer graders to make ordinal statements (e.g. "project X is better than project Y") rather than requiring cardinal statements (e.g. "project X is a B-"). In this paper we therefore study the problem of automatically inferring student grades from ordinal peer feedback, as opposed to existing methods that require cardinal peer feedback. We formulate ordinal peer grading as a type of rank aggregation problem and explore several probabilistic models under which to estimate student grades and grader reliability. We study the applicability of these methods using peer-grading data collected from a real class -- with instructor and TA grades as a baseline -- and demonstrate the efficacy of ordinal feedback techniques in comparison to existing cardinal peer-grading methods. Finally, we compare these peer-grading techniques to traditional evaluation techniques.
    Comment: Submitted to KDD 201
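    As a concrete illustration of the rank-aggregation view (a minimal sketch, not the paper's actual estimator): the snippet below fits a Bradley-Terry-style model, one standard probabilistic model for pairwise "X is better than Y" data, to a toy matrix of comparison counts using the classic minorization-maximization update. The function name and toy data are hypothetical.

        import numpy as np

        def bradley_terry_scores(wins, n_iters=500, tol=1e-9):
            # wins[i, j] = number of graders who ranked submission i above j.
            # Classic MM update for the Bradley-Terry model: iterate
            # s_i <- w_i / sum_j n_ij / (s_i + s_j), which climbs the
            # pairwise-comparison likelihood; larger s = stronger submission.
            n = wins.shape[0]
            total = wins + wins.T              # comparisons involving each pair
            w = wins.sum(axis=1)               # comparisons won by each item
            s = np.ones(n)
            for _ in range(n_iters):
                denom = np.array([
                    sum(total[i, j] / (s[i] + s[j]) for j in range(n) if j != i)
                    for i in range(n)
                ])
                s_new = w / np.maximum(denom, 1e-12)
                s_new /= s_new.sum()           # fix the arbitrary scale
                if np.abs(s_new - s).max() < tol:
                    return s_new
                s = s_new
            return s

        # Toy data: entry [i, j] counts graders ranking project i above project j.
        wins = np.array([[0., 4., 5.],
                         [1., 0., 3.],
                         [0., 2., 0.]])
        print(bradley_terry_scores(wins))      # project 0 should get the top score

    The recovered scores could then be mapped to grades, for instance by calibrating against a handful of instructor-graded submissions; modelling grader reliability, as the paper does, would require a richer model than this sketch.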

    MOOCs Meet Measurement Theory: A Topic-Modelling Approach

    This paper adapts topic models to the psychometric testing of MOOC students based on their online forum postings. Measurement theory from education and psychology provides statistical models for quantifying a person's attainment of intangible attributes such as attitudes, abilities or intelligence. Such models infer latent skill levels by relating them to individuals' observed responses on a series of items, such as quiz questions. A set of items can be used to measure a latent skill if individuals' responses on them conform to a Guttman scale. Such well-scaled items differentiate between individuals, and the inferred levels span the entire range from the most basic to the most advanced. In practice, education researchers manually devise items (quiz questions) while optimising for conformance to a Guttman scale. Due to the costly nature and expert requirements of this process, psychometric testing has found limited use in everyday teaching. We aim to develop usable measurement models for highly-instrumented MOOC delivery platforms by using participation in automatically-extracted online forum topics as items. The challenge is to formalise the Guttman-scale educational constraint and incorporate it into topic models. To favour topics that automatically conform to a Guttman scale, we introduce a novel regularisation into non-negative matrix factorisation-based topic modelling. We demonstrate the suitability of our approach both with quantitative experiments on three Coursera MOOCs and with a qualitative survey of topic interpretability on two MOOCs via domain-expert interviews.
    Comment: 12 pages, 9 figures; accepted into AAAI'201
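    For orientation (a minimal sketch, not the paper's method): the snippet below is plain multiplicative-update NMF of a student-by-term forum matrix; a comment marks where the paper's Guttman-scale regulariser would enter the updates, but its exact form is not reproduced here, and all names are illustrative.

        import numpy as np

        def nmf(V, k, n_iters=500, eps=1e-9, seed=0):
            # Multiplicative-update NMF (Lee & Seung): V (students x terms) ~= W @ H,
            # where W holds per-student topic participation (the "responses on
            # items") and H holds the extracted forum topics.
            rng = np.random.default_rng(seed)
            n, m = V.shape
            W = rng.random((n, k)) + eps
            H = rng.random((k, m)) + eps
            for _ in range(n_iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)   # refine topics
                W *= (V @ H.T) / (W @ H @ H.T + eps)   # refine participation
                # The paper's novel regularisation would add a penalty here that
                # favours W whose rows form a cumulative (Guttman) pattern:
                # students who engage with an advanced topic also engage with
                # the more basic ones.
            return W, H

    The interesting part is precisely the constraint this sketch omits: with a Guttman-style penalty, the learned topics double as well-scaled items, so W can be read as a psychometric response matrix rather than as generic topic proportions.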

    Generating actionable predictions regarding MOOC learners’ engagement in peer reviews

    Peer review is one approach to facilitating formative feedback exchange in MOOCs; however, it is often undermined by low participation. To support effective implementation of peer reviews in MOOCs, this work proposes several predictive models to accurately classify learners according to their expected engagement levels in an upcoming peer-review activity, which offers various pedagogical utilities (e.g. improving peer reviews and collaborative learning activities). Two approaches were used for training the models: in situ learning (in which an engagement indicator available at the time of the predictions is used as a proxy label to train a model within the same course) and transfer across courses (in which a model is trained using labels obtained from past course data). These techniques produce predictions that are actionable by the instructor while the course is still running, which is not possible with post-hoc approaches that require true labels. According to the results, both the transfer-across-courses and in situ learning approaches produced predictions that were actionable yet as accurate as those obtained with cross-validation, suggesting that they deserve further attention for creating impact in MOOCs with real-world interventions. Potential pedagogical uses of the predictions are illustrated with several examples.
    Funding: European Union's Horizon 2020 research and innovation programme (Marie Sklodowska-Curie grant 793317); Ministerio de Ciencia, Innovación y Universidades (projects TIN2017-85179-C3-2-R / TIN2014-53199-C3-2-R); Junta de Castilla y León (grant VA257P18); Comisión Europea (grant 588438-EPP-1-2017-1-EL-EPPKA2-KA
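    A schematic sketch of the two training regimes follows, with hypothetical feature matrices and an off-the-shelf classifier standing in for the paper's actual features and models:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Assumed setup: one row per learner, with features observable before the
        # upcoming peer-review activity (e.g. videos watched, quizzes attempted,
        # forum posts). Neither the feature set nor the classifier is from the paper.

        def transfer_across_courses(X_past, y_past, X_now):
            # Train on a completed course, where true engagement labels exist,
            # then predict engagement for learners in the ongoing course.
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_past, y_past)
            return clf.predict(X_now)

        def in_situ(X_earlier, proxy_labels, X_now):
            # Train within the live course: pair features from an earlier window
            # with a proxy label already observable at prediction time (e.g.
            # engagement in the previous peer review), then apply the model to
            # current features to predict the upcoming activity.
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_earlier, proxy_labels)
            return clf.predict(X_now)

    Either way, predictions arrive while the course is still running, so the instructor can act on them, for example by nudging learners predicted to disengage before the review deadline.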