Data quality is crucial for training accurate, unbiased, and trustworthy
machine learning models, as well as for their correct evaluation. Recent works, however,
have shown that even popular datasets used to train and evaluate
state-of-the-art models contain a non-negligible amount of erroneous
annotations, biases, or annotation artifacts. There exist best practices and
guidelines regarding annotation projects, but to the best of our knowledge, no
large-scale analysis has yet been performed on how quality management is
actually conducted when creating natural language datasets and whether these
recommendations are followed. Therefore, we first survey and summarize
recommended quality management practices for dataset creation as described in
the literature and provide suggestions on how to apply them. Then, we compile a
corpus of 591 scientific publications introducing text datasets and annotate it
for quality-related aspects, such as annotator management, agreement,
adjudication, or data validation. Using these annotations, we then analyze how
quality management is conducted in practice. We find that a majority of the
annotated publications apply good or very good quality management. However, we
consider the effort of 30% of the works to be only subpar. Our analysis also
reveals common errors, especially in the use of inter-annotator agreement and
the computation of annotation error rates.
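
To illustrate one such pitfall, consider the following minimal sketch (the labels and annotator data are hypothetical and not drawn from the surveyed publications): reporting raw percent agreement can overstate reliability compared to a chance-corrected coefficient such as Cohen's kappa, here computed with scikit-learn's cohen_kappa_score.

```python
# Minimal sketch (hypothetical annotations): contrasts raw percent agreement
# with chance-corrected agreement for two annotators on a binary task.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["POS", "POS", "NEG", "POS", "NEG", "POS", "POS", "POS"]
annotator_b = ["POS", "NEG", "NEG", "POS", "NEG", "POS", "POS", "NEG"]

# Raw agreement ignores the agreement expected by chance and can look
# inflated, especially on skewed label distributions.
raw = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)

# Cohen's kappa corrects for chance agreement between two annotators.
kappa = cohen_kappa_score(annotator_a, annotator_b)

print(f"raw agreement: {raw:.2f}")    # 0.75
print(f"Cohen's kappa: {kappa:.2f}")  # 0.50
```

The gap between the two values widens as the label distribution becomes more skewed, which is why the literature recommends reporting chance-corrected coefficients rather than raw agreement.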