Bob or Bot: Exploring ChatGPT’s answers to University Computer Science Assessment
Cheating has been a long-standing issue in University assessments. However, the rise of ChatGPT and other free-to-use generative AI tools has democratised cheating: students can run any assessment question through the tool and generate a superficially compelling solution, which may or may not be accurate. We ran a blinded “quality assurance” marking exercise, providing ChatGPT-generated “synthetic” scripts alongside student scripts to volunteer markers. Four end-of-module assessments from across a University CS curriculum were anonymously marked. A total of 90 scripts were marked, and barring two outliers, every undergraduate script received at least a passing grade. We also present the results of running our sample scripts through diverse quality assurance software, and of interviewing the markers. As such, we contribute a baseline understanding of how the public release of generative AI may significantly impact quality assurance processes: our analysis demonstrates that, in most cases, across a range of question formats, topics, and study levels, ChatGPT is at least capable of producing adequate solutions.