Sample size calculation is an essential step in most data-driven disciplines. Sufficiently large samples ensure that the sample is representative of the population and determine the precision of estimates. This holds for most quantitative studies, including those that employ machine learning methods such as natural language processing, where free text is used to generate predictions and classify instances of text. Within the healthcare domain, the lack of sufficient corpora of previously collected data can be a limiting factor when determining sample sizes for new studies. This paper addresses the issue by making recommendations on sample sizes for text classification tasks in the healthcare domain.
Models trained on the MIMIC-III database of critical care records from Beth
Israel Deaconess Medical Center were used to classify documents as having or
not having Unspecified Essential Hypertension, the most common diagnosis code
in the database. Simulations were performed using various classifiers on
different sample sizes and class proportions. This was repeated for a comparatively less common diagnosis code in the database, diabetes mellitus without mention of complication.
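As a rough illustration of this simulation procedure, the sketch below subsamples a labelled corpus at varying sizes and class proportions, trains a classifier, and records a held-out F1 score. It is a minimal sketch, not the study's exact pipeline: the file name mimic_notes_labelled.csv, the column names text and label, the TF-IDF features, and the grid of sizes and proportions are illustrative assumptions, and the BERT models evaluated in the paper are omitted for brevity.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# Hypothetical pre-labelled extract of MIMIC-III notes with columns
# "text" and "label" (1 = diagnosis code present, 0 = absent).
notes = pd.read_csv("mimic_notes_labelled.csv")

def simulate(data: pd.DataFrame, n: int, pos_frac: float, clf) -> float:
    """Draw n documents at the given positive-class proportion, train the
    classifier on TF-IDF features, and return the held-out F1 score."""
    pos = data[data.label == 1].sample(int(n * pos_frac), random_state=0)
    neg = data[data.label == 0].sample(n - len(pos), random_state=0)
    sample = pd.concat([pos, neg])
    X_tr, X_te, y_tr, y_te = train_test_split(
        sample.text, sample.label, test_size=0.2,
        stratify=sample.label, random_state=0)
    vec = TfidfVectorizer()
    clf.fit(vec.fit_transform(X_tr), y_tr)
    return f1_score(y_te, clf.predict(vec.transform(X_te)))

# Sweep sample sizes, class proportions, and classifiers (grid is illustrative).
for n in (100, 500, 1000, 5000):
    for frac in (0.1, 0.25, 0.5):
        for clf in (KNeighborsClassifier(), LinearSVC()):
            score = simulate(notes, n, frac, clf)
            print(f"n={n} pos_frac={frac} {type(clf).__name__}: F1={score:.3f}")
```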
Smaller sample sizes yielded better performance with a K-nearest neighbours classifier, whereas larger sample sizes worked better for support vector machines and BERT models. Overall, a sample size larger than 1000 was sufficient to provide acceptable performance.
The simulations conducted in this study provide guidelines for selecting appropriate sample sizes and class proportions, and for predicting expected performance, when building classifiers for textual healthcare data. The methodology used here can be adapted for sample size calculations with other datasets.