Don't write, just mark: the validity of assessing student ability via their computerized peer-marking of an essay rather than their creation of an essay

By Phil Davies

Abstract

This paper reports on a case study that evaluates the validity of assessing students via a computerized peer-marking process rather than via their production of an essay in a particular subject area. The study assesses the higher-order skills a student demonstrates in marking an essay and providing consistent feedback on it. To evaluate the suitability of this method for judging a student's ability, students' results in the peer-marking process are correlated against their results in a number of computerized multiple-choice exercises and in the production of an essay in a cognate area of the subject being undertaken. Overall, the results show the expected correlation across all three areas of assessment, as rated by the final grades of the students undertaking the assessment. The results produced by quantifying the quality of the students' marking and commenting are found to map well onto the overall expectations for the cohort. It is also shown that higher-performing students achieve a greater improvement in their overall marks from performing the marking process than lower-performing students do. This appears to support previous claims that awarding a 'mark for marking' rewards the demonstration of higher-order assessment skills. Finally, note is made of the impact such an assessment method can have on eradicating the possibility of plagiarism.
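For readers who want a concrete picture of the validation step the abstract describes, the sketch below shows one conventional way to correlate a cohort's peer-marking scores with their multiple-choice and essay results. It is a minimal illustration only: the score values and the choice of Pearson's r are assumptions for demonstration, not data or methods taken from the paper.

# A minimal sketch (not from the paper) of correlating students'
# peer-marking scores with their MCQ and essay results.
# All score values below are hypothetical.
from scipy.stats import pearsonr

# Hypothetical per-student scores, one entry per student, same order in each list.
peer_marking    = [72, 65, 58, 81, 47, 69]  # quantified quality of marking/commenting
multiple_choice = [70, 60, 55, 85, 50, 66]
essay           = [68, 63, 52, 79, 45, 71]

for name, scores in [("MCQ", multiple_choice), ("essay", essay)]:
    r, p = pearsonr(peer_marking, scores)  # Pearson correlation and p-value
    print(f"peer-marking vs {name}: r = {r:.2f} (p = {p:.3f})")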

Topics: LB Theory and practice of education, LC1022 - 1022.25 Computer-assisted Education
Publisher: Taylor and Francis Ltd
Year: 2004
DOI identifier: 10.1080/0968776042000259573
OAI identifier: oai:generic.eprints.org:611/core5
