Massive Open Online Courses (MOOCs) use peer assessment to grade open-ended
questions at scale while allowing students to provide feedback to one another.
Relative to teacher-based grading, however, peer assessment on MOOCs
traditionally delivers lower-quality feedback and fewer learner interactions.
We present the identified peer review
(IPR) framework, which provides non-blind peer assessment and incentives that
drive high-quality feedback. We show that, compared to traditional peer
assessment methods, IPR leads to significantly longer and more useful feedback
as well as more discussion between peers.

Comment: To appear at Learning@Scale 201