Systems for Maximizing Student Learning, Engagement, and Academic Achievement

Abstract

The rapid expansion of computer science education has placed significant strain on educators and students alike, particularly in large introductory programming courses. Traditional assessment methods often fail to balance timely feedback, effective student engagement, and scalable instructor support. In response, this dissertation presents TA-Bot, a novel automated assessment tool designed to incentivize early engagement, improve code quality, and encourage office-hour participation through an adaptive, non-punitive reward system. TA-Bot integrates a Time Between Submissions mechanism, dynamically modulating feedback frequency to discourage trial-and-error programming while fostering thoughtful code development. The system also applies gamification principles to motivate students to start assignments earlier and to interact with course support structures. By shifting away from punitive restrictions and invasive data tracking, this research explores how positive reinforcement strategies can enhance student learning behaviors without discouraging participation. A longitudinal study was conducted across multiple semesters to evaluate the effectiveness of TA-Bot. The findings indicate that students using the system demonstrated higher engagement, improved code quality, and greater office-hour attendance, leading to better overall retention and performance. This work contributes to the broader discourse on CS education by demonstrating the efficacy of behavioral nudges and incentive-driven assessment tools in fostering productive learning habits.
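The Time Between Submissions mechanism described above can be illustrated with a minimal sketch. The class name, constants, and doubling-and-reset policy below are illustrative assumptions, not TA-Bot's actual implementation; the dissertation's real feedback-modulation logic may differ.

```python
from datetime import datetime, timedelta

# Assumed parameters for illustration only.
BASE_COOLDOWN = timedelta(minutes=5)   # initial wait between graded submissions
MAX_COOLDOWN = timedelta(minutes=30)   # cap so students are never locked out long

class SubmissionGate:
    """Hypothetical adaptive submission-rate limiter.

    Rapid resubmission lengthens the required wait (discouraging
    trial-and-error programming), while waiting out the full cooldown
    resets it, so thoughtful pacing is never punished.
    """

    def __init__(self):
        self.last_submission = None
        self.cooldown = BASE_COOLDOWN

    def try_submit(self, now: datetime) -> bool:
        """Return True if the submission is accepted for feedback."""
        if self.last_submission is None:
            self.last_submission = now
            return True
        elapsed = now - self.last_submission
        if elapsed < self.cooldown:
            # Too soon: reject and double the cooldown, up to the cap.
            self.cooldown = min(self.cooldown * 2, MAX_COOLDOWN)
            return False
        # Accepted after a full wait: reset toward the base rate.
        self.cooldown = BASE_COOLDOWN
        self.last_submission = now
        return True
```

Under this assumed policy, a student who hammers the grader sees lengthening waits, while one who revises carefully between attempts always submits at the base rate, matching the non-punitive intent described in the abstract.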

Last updated on 04/11/2025.

This paper was published in epublications@Marquette.
