While many languages have processes for joining two or more words to create compound words, previous studies have typically been limited to languages with highly productive compounding (e.g., German, Dutch), and there is no public dataset of compound and non-compound words spanning a large number of languages. In this work, we systematically study decompounding, the
task of splitting compound words into their constituents, at a wide scale. We
first address the data gap by introducing a dataset of 255k compound and
non-compound words across 56 diverse languages obtained from Wiktionary. We
then use this dataset to evaluate an array of Large Language Models (LLMs) on
the decompounding task. We find that LLMs perform poorly, especially on words
which are tokenized unfavorably by subword tokenization. We thus introduce a
novel methodology to train dedicated models for decompounding. The proposed
two-stage procedure relies on a fully self-supervised objective in the first
stage, while the second, supervised learning stage optionally fine-tunes the
model on the annotated Wiktionary data. Our self-supervised models outperform
the prior best unsupervised decompounding models by 13.9% accuracy on average.
Our fine-tuned models outperform all prior (language-specific) decompounding
tools. Furthermore, we use our models to leverage decompounding during the
creation of a subword tokenizer, which we refer to as CompoundPiece.
CompoundPiece tokenizes compound words more favorably on average, leading to
improved performance on decompounding over an otherwise equivalent model using
SentencePiece tokenization.
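
The snippet below is a minimal illustrative sketch (not the paper's code) of why standard subword tokenization can be "unfavorable" for compound words: an off-the-shelf SentencePiece-based tokenizer often splits a compound at boundaries that do not coincide with its constituents, which is the mismatch decompounding (and CompoundPiece) targets. The checkpoint name and example words are illustrative assumptions; actual splits depend on the tokenizer used.

```python
# Sketch only: contrast subword pieces with gold constituent boundaries.
# Requires the `transformers` and `sentencepiece` packages.
from transformers import AutoTokenizer

# Any SentencePiece-backed checkpoint works; mT5 is used here purely as an example.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

examples = {
    "Hundehütte": ["Hunde", "Hütte"],   # German: dog + kennel/house
    "boekenkast": ["boeken", "kast"],   # Dutch: books + cupboard
}

for word, constituents in examples.items():
    pieces = tokenizer.tokenize(word)
    # The tokenizer's split rarely aligns with the gold constituents,
    # which is what a dedicated decompounding model is trained to recover.
    print(f"{word!r}: subword pieces = {pieces}, gold constituents = {constituents}")
```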