The impact of generative artificial intelligence (GenAI) on higher education has been widely discussed since the public release of ChatGPT in late 2022. However, there has been little empirical research on changes in English-for-Academic-Purposes (EAP) assessment practices in response to GenAI. This qualitative case study aims to fill this gap by examining how Scottish universities changed EAP assessments in response to GenAI, how effective EAP academics perceived those changes to be, and what recommendations EAP academics offered for future assessment practices. Data were collected from six semi-structured interviews conducted with EAP academics at five Scottish universities in mid-2024 and thematically analysed. The findings reveal that while substantial changes in assessment task design were limited, modifications to task requirements (e.g., GenAI declarations, context-specific prompts) and grading practices were more common. Moreover, our participants expressed scepticism about the effectiveness of some changes (e.g., AI use declarations) but perceived others positively (e.g., the use of context-specific questions, spontaneous speaking tasks, and named marking). As for their recommendations, the participating EAP academics generally advocated authentic and innovative tasks, such as portfolio-based assessment, reflections, multimodal projects, and GenAI output evaluation, over reverting to traditional exams, while also highlighting concerns about workload and learning outcomes. The study points to a need for clearer institutional guidance, ongoing professional dialogue, and support for experimentation with GenAI-integrated assessment design in EAP contexts.