Bob or Bot: Exploring ChatGPT’s answers to University Computer Science Assessment
Cheating has been a long-standing issue in university assessments. However, the rise of ChatGPT and other free-to-use generative AI tools has democratised cheating. Students can run any assessment question through such a tool and generate a superficially compelling solution, which may or may not be accurate. We ran a blinded "quality assurance" marking exercise, providing ChatGPT-generated "synthetic" scripts alongside student scripts to volunteer markers. Four end-of-module assessments from across a university CS curriculum were anonymously marked. A total of 90 scripts were marked, and barring two outliers, every undergraduate script received at least a passing grade. We also present the results of running our sample scripts through diverse quality assurance software, and the results of interviewing the markers. As such, we contribute a baseline understanding of how the public release of generative AI may significantly impact quality assurance processes: our analysis demonstrates that, in most cases, across a range of question formats, topics, and study levels, ChatGPT is at least capable of producing adequate solutions.
University Students’ Ability to Interpret Visual Representations in Plant Anatomy
This case study investigated first-year university students' ability to interpret visual representations, specifically micrographs, in plant anatomy following instruction. Quantitative and qualitative data were collected through diagnostic tests, lesson observations, interviews, and document reviews. The findings revealed that students' difficulties in interpreting plant micrographs were due to insufficient conceptual understanding and inadequate skills in reasoning and identification. A greater focus on these skills should support students' ability to interpret visual representations in plant anatomy.
Student interviews as a tool for assessment and learning in a systems analysis and design course
This paper examines the use of student interviews as a means of assessment for systems analysis and design assignments, and as a means of providing feedback to students on their performance in those assignments. It draws on 510 student surveys gathered from Semester 1, 2001 to Semester 2, 2003 to assess students' opinions of the use of interviews, and describes the lessons learnt about this form of assessment.