Manual coding of text data from open-ended questions into different
categories is time-consuming and expensive. Automated coding instead trains a
statistical/machine-learning model on a small subset of manually coded text
answers. Recently, pre-training a general language model on vast amounts of
unrelated data and then adapting the model to the specific application has
proven effective in natural language processing. Using two data sets, we
empirically investigate whether BERT, the currently dominant pre-trained
language model, is more effective at automated coding of answers to open-ended
questions than non-pre-trained statistical learning approaches. First, we
found that fine-tuning the pre-trained BERT parameters is essential; otherwise,
BERT is not competitive. Second, we found that fine-tuned BERT barely beats the
non-pre-trained statistical learning approaches in terms of classification
accuracy when trained on 100 manually coded observations. However, BERT's
relative advantage increases rapidly when more manually coded observations
(e.g., 200-400) are available for training. We conclude that for automatically
coding answers to open-ended questions, BERT is preferable to non-pre-trained
models such as support vector machines and boosting.
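
To make the comparison concrete, the following is a minimal sketch, not the
authors' actual pipeline, of the two kinds of approaches compared above:
fine-tuning all BERT parameters for answer classification with the Hugging
Face transformers library, alongside a non-pre-trained TF-IDF + linear support
vector machine baseline from scikit-learn. The model name, hyperparameters,
and toy data are illustrative assumptions.

    # A minimal sketch (assumed setup, not the authors' code): fine-tuning
    # all BERT parameters to classify open-ended answers, plus a
    # non-pre-trained SVM baseline of the kind the comparison refers to.
    import torch
    from torch.optim import AdamW
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy stand-ins for manually coded answers (text, category code).
    texts = ["I worry about my job", "traffic keeps getting worse",
             "healthcare costs too much"]
    labels = [0, 1, 2]
    num_codes = 3

    # --- Pre-trained BERT, fine-tuned end to end ---
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=num_codes
    )
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    targets = torch.tensor(labels)

    # Updating *all* parameters (not only the classification head) is the
    # fine-tuning step the abstract reports as essential.
    optimizer = AdamW(model.parameters(), lr=2e-5)
    model.train()
    for _ in range(3):  # a few epochs, as befits a few hundred observations
        out = model(**enc, labels=targets)  # out.loss is cross-entropy
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # --- Non-pre-trained baseline: TF-IDF features + linear SVM ---
    baseline = make_pipeline(TfidfVectorizer(), LinearSVC())
    baseline.fit(texts, labels)
    print(baseline.predict(["my commute takes forever"]))

Per the abstract's findings, a baseline like the TF-IDF/SVM pipeline above is
roughly as accurate as fine-tuned BERT with only about 100 coded answers, and
BERT pulls ahead as the coded training set grows toward 200-400 observations.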