Understanding the ‘Black-Box’ of Automated Analysis of Communicative Goals and Rhetorical Strategies in Academic Discourse

Abstract

Despite the appeal of automated writing evaluation (AWE) tools, many writing scholars and teachers have objected to the way such tools represent writing as a construct. This talk will address two important objections: that AWE heavily subordinates the rhetorical aspects of writing, and that the models used to automatically analyze student texts are not interpretable to the stakeholders invested in the teaching and learning of writing. The purpose is to promote a discussion of how to advance research methods so that writing analytics for automated rhetorical feedback become both more effective and more transparent. AWE models will likely never be capable of truly understanding texts; however, important rhetorical traits of writing can be detected automatically (Cotos & Pendar, 2016). To date, AWE performance has been evaluated in purely quantitative ways that are not meaningful to the writing community. It is therefore important to complement quantitative measures with approaches grounded in humanistic inquiry that dissect the actual computational model output in order to shed light on why the ‘black box’ may yield unsatisfactory results.