Peer assessment without assessment criteria

Abstract

Peer assessment typically requires students to judge peers' work against assessment criteria. We tested an alternative approach in which students judged pairs of scripts against one another in the absence of assessment criteria. First-year mathematics undergraduates (N = 194) sat a written test on conceptual understanding of multivariable calculus, then assessed their peers' responses using pairwise comparative judgement. Inter-rater reliability was investigated by randomly assigning the students to two groups and correlating the two groups' assessments. Validity was investigated by correlating the peers' assessments with (i) expert assessments, (ii) novice assessments, and (iii) marks from other module tests. We found high validity and inter-rater reliability, suggesting that the students performed well as peer assessors. We interpret the results in the light of survey and interview feedback, and discuss directions for further research into the benefits and drawbacks of peer assessment without assessment criteria.
