Economic behavior of information acquisition: Impact of peer grading in MOOCs

Abstract

A critical issue in operating massive open online courses (MOOCs) is the scalability of providing feedback. Because it is not feasible for instructors to grade a large number of students’ assignments, MOOCs use peer grading systems. This study investigates the efficacy of that practice when student graders are rational economic agents. We characterize grading as a process of (a) acquiring information to assess an assignment’s quality and (b) reporting a score. This process entails a tradeoff between the cost of acquiring information and the benefits of accurate grading. Because the true quality is not observable, any measure of inaccuracy must reference the actions of other graders, which motivates student graders to behave strategically. We present the unique equilibrium information level and reporting strategy of a homogeneous group of student graders and then examine the outcome of peer grading. We show how both the peer grading structure and the nature of MOOC courses affect peer grading accuracy, and we identify conditions under which the process fails. There is a systematic grading bias toward the mean, which discourages students from learning. To improve current practice, we introduce a scale-shift grading scheme, theoretically examine how it can improve grading accuracy and adjust grading bias, and discuss how it can be practically implemented.
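To make the cost–accuracy tradeoff and the resulting bias concrete, the following stylized formulation is a minimal sketch of a grader’s problem under assumed quadratic disagreement loss and Gaussian signals; the notation (q, s_i, r, \bar{r}_{-i}, \tau, c) is ours for illustration and is not taken from the paper’s model. Grader i pays a cost c(\tau) to acquire a signal of precision \tau about the unobservable true quality q, and is rewarded for agreeing with the other graders rather than with q itself:

\[
\max_{\tau \ge 0,\; r(\cdot)} \;\; \mathbb{E}\!\left[-\big(r(s_i)-\bar{r}_{-i}\big)^{2}\right] - c(\tau),
\qquad s_i = q + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}\!\big(0, 1/\tau\big),
\]

where s_i is the grader’s noisy signal, r(s_i) the reported score, \bar{r}_{-i} the average report of the other graders, and c(\tau) an increasing information-acquisition cost. Because the payoff references \bar{r}_{-i} rather than q, best responses shrink reports toward the expected peer score, which in a homogeneous group pulls scores toward the prior mean; this is one way to read the systematic bias toward the mean described above.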