Gender biases in language generation systems are challenging to mitigate. One
possible source of these biases is gender representation disparities in the
training and evaluation data. Despite recent progress in documenting this
problem and many attempts at mitigating it, we still lack shared methodology
and tooling to report gender representation in large datasets. Such
quantitative reporting will enable further mitigation, e.g., via data
augmentation. This paper describes the Gender-GAP Pipeline (for Gender-Aware
Polyglot Pipeline), an automatic pipeline to characterize gender representation
in large-scale datasets for 55 languages. The pipeline uses a multilingual
lexicon of gendered person-nouns to quantify the gender representation in text.
We showcase it by reporting gender representation in the WMT training and
development data for the News task, confirming that the current data is skewed
towards masculine representation. Unbalanced datasets may indirectly
optimize our systems to perform better for one gender than for the others. We
suggest applying our gender quantification pipeline to current datasets and,
ideally, modifying them toward a balanced representation.
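A minimal sketch of the lexicon-matching idea described above, assuming a tiny illustrative gendered-noun lexicon and hypothetical function names (the actual Gender-GAP lexicon covers 55 languages and a much larger set of person-nouns):

```python
from collections import Counter
import re

# Illustrative, tiny gendered-noun lexicon; not the paper's actual lexicon.
LEXICON = {
    "masculine": {"he", "him", "his", "man", "men", "father", "son", "brother"},
    "feminine": {"she", "her", "hers", "woman", "women", "mother", "daughter", "sister"},
}

def gender_counts(lines):
    """Count lexicon matches per gender class over simple word tokenization."""
    counts = Counter()
    for line in lines:
        tokens = re.findall(r"\w+", line.lower())
        for gender, words in LEXICON.items():
            counts[gender] += sum(1 for t in tokens if t in words)
    return counts

def gender_distribution(lines):
    """Normalize match counts into a percentage distribution over gender classes."""
    counts = gender_counts(lines)
    total = sum(counts.values()) or 1
    return {g: 100.0 * c / total for g, c in counts.items()}

if __name__ == "__main__":
    sample = ["He thanked his brother.", "The woman spoke to her daughter."]
    print(gender_distribution(sample))
```

Such per-gender counts, aggregated over a corpus, give the kind of quantitative report the pipeline produces for training and development sets.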