Toward Better Training in Peer Assessment: Does Calibration Help?

Abstract

For peer assessments to be helpful, student reviewers need to submit reviews of good quality. This requires some training or guidance from teaching staff; otherwise, reviewers may read each other's work uncritically, assigning good scores but offering few suggestions. One approach to improving review quality is calibration, which means comparing a student's individual review to a standard, usually a review done by teaching staff on the same reviewed artifact. In this paper, we categorize two modes of calibration for peer assessment and discuss our experience with both of them in a pilot study with the Expertiza system.
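To make the comparison concrete, here is a minimal sketch of one way a calibration check could be scored. The function name `calibration_error`, the rubric criteria, and the 1-5 scale are illustrative assumptions, not the Expertiza implementation or the paper's method:

```python
# Sketch of review calibration: compare a student's rubric scores
# against a staff "gold standard" review of the same artifact.
# The 1-5 rubric scale below is an illustrative assumption.

def calibration_error(student_scores, staff_scores):
    """Mean absolute difference between a student's rubric scores
    and the staff reference scores for the same artifact."""
    if len(student_scores) != len(staff_scores):
        raise ValueError("Reviews must cover the same rubric criteria")
    diffs = [abs(s - t) for s, t in zip(student_scores, staff_scores)]
    return sum(diffs) / len(diffs)

# Example: one artifact scored on four rubric criteria (1-5 scale).
staff_review = [4, 3, 5, 2]      # reference review by teaching staff
student_review = [5, 4, 5, 4]    # the student's calibration attempt

error = calibration_error(student_review, staff_review)
print(f"Calibration error: {error:.2f}")  # 1.00 on this example
```

A lower error indicates closer agreement with the staff standard; a course could, for instance, require a student to fall below some threshold before their reviews of peers count.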