European Association for the Teaching of Academic Writing (EATAW)
Abstract
GenAI has demonstrated functionality that seems, uncannily, to parallel reading and writing by identifying and reformulating information from source texts and by generating novel content and argumentation. These skills are essential yet challenging for many students tasked with producing literature reviews. This study takes the first steps toward investigating the feasibility of a GenAI-facilitated literature review. The investigation starts from the ‘human-in-the-loop’ position that complex processes can be deconstructed and compartmentalized, and that the component functions needed for these processes can be delegated to machines while humans contribute to, or control, the overall process. We explore the hypothesis that certain functions of the literature review process, such as information extraction and content classification, might be automated. Prompts modeled on recommended practices for research synthesis were designed to identify and classify particular types of content in research articles. Outputs produced by two GenAI models, GPT-3.5 and GPT-4o, were assessed for reliability against a human coder. Overall, the results raise concerns about the models’ performance on this task, cautioning against direct use of GenAI output as learning scaffolding for students developing literature review skills.
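The abstract does not specify which agreement statistic was used to compare model output with the human coder. As one hedged illustration only, a chance-corrected measure such as Cohen's kappa could be computed over matched classification labels; the category names and label lists below are hypothetical and not drawn from the study.

```python
# Minimal sketch (not the authors' code): quantifying agreement between a
# GenAI model's content classifications and a human coder's classifications.
# Labels and categories are hypothetical illustrations.
from sklearn.metrics import cohen_kappa_score

# Hypothetical content categories assigned to excerpts from research articles,
# e.g., by a prompt asking the model to label each excerpt.
human_labels = ["gap", "method", "finding", "finding", "gap", "method"]
model_labels = ["gap", "finding", "finding", "finding", "gap", "method"]

# Cohen's kappa corrects raw agreement for chance; values near 1 indicate
# strong reliability, values near 0 indicate chance-level agreement.
kappa = cohen_kappa_score(human_labels, model_labels)
print(f"Cohen's kappa: {kappa:.2f}")
```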