We describe a failed pilot in which the automated grading software Möbius replaced graduate student markers in three undergraduate courses delivered by the School of Mathematics and Physics at Queen's University Belfast. We analyze the effects of this change on student engagement and performance. Our evidence suggests that students are more likely to engage with formative assessment activities when those activities are marked with Möbius. Students also perform better in summative assessments when they have had Möbius assignments to complete, with one module showing a stark reduction in failure rate from 32% to 5%. When we surveyed the students who had the opportunity to engage with Möbius, we did not find much enthusiasm for the software. However, we found that students also lacked enthusiasm for the assessment and feedback systems that Möbius had replaced. Instead, their responses to our survey indicated that students may not fully understand the distinction between formative and summative assessment. As we discuss in the conclusion, the project failed because, in spite of this apparent success, we could not secure enough support for Möbius from students and colleagues to justify the expense of purchasing software licenses each year. To introduce automated grading in our context, we need a system with zero or negligible associated cost, as it will likely only ever be used by a small number of staff.