Affordances and limitations of algorithmic criticism

Abstract

Humanities scholars currently have access to unprecedented quantities of machine-readable texts, and, at the same time, the tools and methods with which we can analyse and visualise these texts are becoming increasingly sophisticated. As numerous studies have shown, many of the new technical possibilities that emerge from fields such as text mining and natural language processing have useful applications within literary research. Computational methods can help literary scholars to discover interesting trends and correlations within massive text collections, and they can enable a thoroughly systematic examination of the stylistic properties of literary works. While such computer-assisted forms of reading have proven invaluable for research in the field of literary history, relatively few studies have applied these technologies to expand or to transform the ways in which we can interpret literary texts. Based on a comparative analysis of digital scholarship and traditional scholarship, this thesis critically examines the possibilities and the limitations of computer-based literary criticism. It argues that quantitative analyses of data about literary techniques can often reveal surprising qualities of works of literature, which can, in turn, lead to new interpretative readings.