It is an open question to what extent perceptions of literary quality are
derived from text-intrinsic versus social factors. While supervised models can
predict literary quality ratings from textual factors quite successfully, as
shown in the Riddle of Literary Quality project (Koolen et al., 2020), this
does not prove that social factors are not important, nor can we assume that
readers make judgments on literary quality in the same way and based on the
same information as machine learning models. We report the results of a pilot
study to gauge the effect of textual features on literary ratings of
Dutch-language novels in a controlled experiment with 48 participants. In an
exploratory analysis, we compare these ratings to those from
the large reader survey of the Riddle in which social factors were not
excluded, and to machine learning predictions of those literary ratings. We
find moderate to strong correlations between the questionnaire ratings and the survey
ratings, but the machine learning predictions are closer to the survey ratings than
the questionnaire ratings are. Code and data:
https://github.com/andreasvc/litquest
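
As a minimal, hypothetical sketch of the kind of comparison described above (not the
released code at the repository linked here), such correlations could be computed with
SciPy, assuming per-novel mean ratings from each source are available as arrays; the
values below are illustrative placeholders, not results from the study.

    # Hypothetical sketch: correlate per-novel mean ratings from each source
    # with the survey ratings. Placeholder values, not actual study data.
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    questionnaire = np.array([4.2, 3.8, 5.1, 4.9, 3.5])  # controlled experiment
    survey = np.array([4.5, 3.6, 5.4, 5.0, 3.9])         # large reader survey
    predictions = np.array([4.6, 3.7, 5.3, 5.1, 3.8])    # model predictions

    for name, ratings in [("questionnaire", questionnaire),
                          ("predictions", predictions)]:
        r, _ = pearsonr(ratings, survey)
        rho, _ = spearmanr(ratings, survey)
        print(f"{name} vs. survey: Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")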