Assessing the impact of large language models on the scalability and efficiency of automated feedback mechanisms in massive open online courses

Abstract

The rapid proliferation of Massive Open Online Courses (MOOCs) poses particular difficulties in providing timely, high-quality, personalized feedback on learner interactions at scale. This research examines how Large Language Models (LLMs) address this gap, focusing on the automation of timely feedback and on the scalability and efficiency of LLM-generated feedback in MOOC settings. Adopting a results-oriented experimental approach to feedback systems, LLMs such as GPT-3.5 and GPT-4 are deployed across varying course contexts and learner groups, and their outputs are benchmarked against traditional systems through semantic similarity calculations, response time measurement, cost evaluation, and learner satisfaction metrics. The LLMs' ability to align with instructor feedback while improving responsiveness and personalization outpaced traditional methods in every context analyzed, with satisfaction scores exceeding pre-set benchmarks across the board. Learners reported appreciating the AI-generated responses, citing enhanced understanding and interaction, though some raised concerns about bias and overly generic responses. Overall, the study provides concrete guidance on how LLMs reconfigure pedagogical feedback mechanisms in MOOCs, informing subsequent design and integration strategies for e-learning frameworks worldwide. © The Research Publication
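
The abstract states that LLM feedback was benchmarked against instructor feedback using semantic similarity, but it does not specify how that similarity was computed. The sketch below is a minimal illustration, assuming sentence-embedding cosine similarity via the sentence-transformers library and the example feedback strings shown; the model name and all inputs are assumptions for illustration, not the paper's reported setup.

# Hypothetical sketch: score how closely LLM-generated feedback matches
# instructor reference feedback using sentence-embedding cosine similarity.
# The embedding model 'all-MiniLM-L6-v2' is an assumption, not from the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

instructor_feedback = [
    "Your proof is correct, but justify why the base case holds.",
    "Good analysis; consider the edge case where the input list is empty.",
]
llm_feedback = [
    "The argument works, though you should explain the base case explicitly.",
    "Solid reasoning overall; remember to handle an empty input list.",
]

# Encode both sets of feedback into dense vectors.
instructor_emb = model.encode(instructor_feedback, convert_to_tensor=True)
llm_emb = model.encode(llm_feedback, convert_to_tensor=True)

# Pairwise cosine similarity; the diagonal pairs each LLM response
# with its corresponding instructor reference.
scores = util.cos_sim(llm_emb, instructor_emb)
for i in range(len(llm_feedback)):
    print(f"Item {i}: semantic similarity = {scores[i][i]:.3f}")

Higher diagonal scores would indicate closer alignment between LLM and instructor feedback; the paper additionally reports response time, cost, and learner satisfaction, which such a script would not capture.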
