Disambiguatory Signals are Stronger in Word-initial Positions
Psycholinguistic studies of human word processing and lexical access provide
ample evidence of the preferred nature of word-initial versus word-final
segments, e.g., in terms of attention paid by listeners (greater) or the
likelihood of reduction by speakers (lower). This has led to the conjecture --
as in Wedel et al. (2019b), but common elsewhere -- that languages have evolved
to provide more information earlier in words than later. Information-theoretic
methods to establish such tendencies in lexicons have suffered from several
methodological shortcomings that leave open the question of whether this high
word-initial informativeness is actually a property of the lexicon or simply an
artefact of the incremental nature of recognition. In this paper, we point out
the confounds in existing methods for comparing the informativeness of segments
early in the word versus later in the word, and present several new measures
that avoid these confounds. When controlling for these confounds, we still find
evidence across hundreds of languages that indeed there is a cross-linguistic
tendency to front-load information in words.
Comment: Accepted at EACL 2021. Code is available at
https://github.com/tpimentelms/frontload-disambiguatio
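The kind of positional measure the abstract discusses can be sketched as follows: a minimal, hypothetical illustration that computes the Shannon entropy of the segment distribution at a given position across a toy lexicon (the paper's actual measures control for confounds that this sketch does not).

```python
import math
from collections import Counter

def positional_entropy(lexicon, position):
    """Shannon entropy (bits) of the segment distribution at a given
    position, over words long enough to have that position."""
    segments = [w[position] for w in lexicon if len(w) > position]
    counts = Counter(segments)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy lexicon (hypothetical): initial segments here vary more than final ones,
# i.e., word-initial positions carry more disambiguating information.
lexicon = ["cat", "dog", "pig", "bat", "rat", "hat"]
h_initial = positional_entropy(lexicon, 0)  # 6 distinct onsets -> log2(6) bits
h_final = positional_entropy(lexicon, 2)    # mostly "t" -> lower entropy
```

In this toy case `h_initial > h_final`, mirroring the cross-linguistic tendency the paper reports; the confound the paper addresses is that naive incremental measures build in such an asymmetry by construction.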
Are All Languages Equally Hard to Language-Model?
How cross-linguistically applicable are NLP models, specifically language models? A fair comparison between languages is tricky: not only do training corpora in different languages differ in size and topic (and some topics may be harder to predict than others), but standard metrics for language modeling depend on a language's orthography. We argue for a fairer metric based on bits per utterance, computed over utterance-aligned multi-text. We conduct a study on 21 languages, training and testing both n-gram and LSTM language models on “the same” set of utterances in each language (modulo translation), and demonstrate that in some languages, especially those with complex inflectional morphology, the textual expression of the information is harder to predict.
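The metric the abstract argues for can be sketched in a few lines, assuming per-token probabilities from some language model (the probabilities below are purely illustrative): the information content of an utterance is the negative log-probability summed over its tokens, which is comparable across orthographies when the utterances are translations of each other.

```python
import math

def bits_per_utterance(token_probs):
    """Total information content of one utterance under a language model:
    negative log2-probability summed over its tokens.  Unlike bits per
    character, this does not depend on how a language is written."""
    return -sum(math.log2(p) for p in token_probs)

# Hypothetical per-token probabilities for aligned translations of one
# utterance in two languages with different token counts.
probs_lang_a = [0.25, 0.5, 0.125]
probs_lang_b = [0.5, 0.5, 0.25, 0.25]
bits_a = bits_per_utterance(probs_lang_a)  # 2 + 1 + 3 = 6 bits
bits_b = bits_per_utterance(probs_lang_b)  # 1 + 1 + 2 + 2 = 6 bits
```

Here the two languages package the same 6 bits into different numbers of tokens, which is exactly why a per-character or per-token metric would rank them unfairly.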
Meaning to Form: Measuring Systematicity as Information
A longstanding debate in semiotics centers on the relationship between
linguistic signs and their corresponding semantics: is there an arbitrary
relationship between a word form and its meaning, or does some systematic
phenomenon pervade? For instance, does the character bigram \textit{gl} have
any systematic relationship to the meaning of words like \textit{glisten},
\textit{gleam} and \textit{glow}? In this work, we offer a holistic
quantification of the systematicity of the sign using mutual information and
recurrent neural networks. We employ these in a data-driven and massively
multilingual approach to the question, examining 106 languages. We find a
statistically significant reduction in entropy when modeling a word form
conditioned on its semantic representation. Encouragingly, we also recover
well-attested English examples of systematic affixes. We conclude with the
meta-point: Our approximate effect size (measured in bits) is quite
small---despite some amount of systematicity between form and meaning, an
arbitrary relationship and its resulting benefits dominate human language.
Comment: Accepted for publication at ACL 201
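The quantity at the heart of this abstract, the entropy reduction from conditioning a word form on its meaning, can be illustrated on toy data (the pairs below are invented; the paper estimates these quantities with recurrent neural networks over real lexicons):

```python
import math
from collections import Counter, defaultdict

def entropy(items):
    """Shannon entropy (bits) of the empirical distribution over items."""
    counts = Counter(items)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy (meaning class, form onset) pairs: "gl" clusters with light-related
# meanings, echoing the glisten/gleam/glow example.
pairs = [("light", "gl"), ("light", "gl"), ("light", "sh"),
         ("animal", "ca"), ("animal", "do"), ("animal", "pi")]

h_form = entropy([f for _, f in pairs])          # unconditional H(form)
by_meaning = defaultdict(list)
for m, f in pairs:
    by_meaning[m].append(f)
# Conditional entropy H(form | meaning): class entropies weighted by class size.
h_form_given_meaning = sum(
    len(fs) / len(pairs) * entropy(fs) for fs in by_meaning.values())
systematicity = h_form - h_form_given_meaning    # mutual information, in bits
```

A positive `systematicity` is the statistically significant (but, per the abstract, small) effect the paper reports; zero would mean a fully arbitrary sign.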
SIGMORPHON 2021 Shared Task on Morphological Reinflection: Generalization Across Languages
This year's iteration of the SIGMORPHON Shared Task on morphological reinflection focuses on typological diversity and cross-lingual variation of morphosyntactic features. In terms of the task, we enrich UniMorph with new data for 32 languages from 13 language families, with most of them being under-resourced: Kunwinjku, Classical Syriac, Arabic (Modern Standard, Egyptian, Gulf), Hebrew, Amharic, Aymara, Magahi, Braj, Kurdish (Central, Northern, Southern), Polish, Karelian, Livvi, Ludic, Veps, Võro, Evenki, Xibe, Tuvan, Sakha, Turkish, Indonesian, Kodi, Seneca, Asháninka, Yanesha, Chukchi, Itelmen, Eibela. We evaluate six systems on the new data and conduct an extensive error analysis of the systems' predictions. Transformer-based models generally demonstrate superior performance on the majority of languages, achieving >90% accuracy on 65% of them. The languages on which systems yielded low accuracy are mainly under-resourced, with a limited amount of data. Most errors made by the systems are due to allomorphy, honorificity, and form variation. In addition, we observe that systems especially struggle to inflect multiword lemmas. The systems also produce misspelled forms or end up in repetitive loops (e.g., RNN-based models). Finally, we report a large drop in systems' performance on previously unseen lemmas.
Peer reviewed
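The accuracy figures quoted above (e.g., >90% on 65% of languages) are exact-match accuracies over predicted inflected forms; a minimal sketch of that evaluation, with hypothetical German past-tense forms as the data:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predicted inflected forms that exactly match the gold
    form; the standard metric for morphological reinflection."""
    assert len(predictions) == len(references)
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# Hypothetical gold past-tense forms and one system's predictions;
# "laufte" is an over-regularization error of the kind systems make.
gold = ["ging", "lief", "sah"]
pred = ["ging", "laufte", "sah"]
accuracy = exact_match_accuracy(pred, gold)  # 2/3
```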
UniMorph 4.0: Universal Morphology
The Universal Morphology (UniMorph) project is a collaborative effort providing broad-coverage instantiated normalized morphological inflection tables for hundreds of diverse world languages. The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation and a type-level resource of annotated data in diverse languages realizing that schema. This paper presents the expansions and improvements made on several fronts over the last couple of years (since McCarthy et al. (2020)). Collaborative efforts by numerous linguists have added 67 new languages, including 30 endangered languages. We have implemented several improvements to the extraction pipeline to tackle some issues, e.g. missing gender and macron information. We have also amended the schema to use a hierarchical structure that is needed for morphological phenomena like multiple-argument agreement and case stacking, while adding some missing morphological features to make the schema more inclusive. In light of the last UniMorph release, we also augmented the database with morpheme segmentation for 16 languages. Lastly, this new release makes a push towards inclusion of derivational morphology in UniMorph by enriching the data and annotation schema with instances representing derivational processes from MorphyNet.
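UniMorph distributes its inflection tables as tab-separated triples of lemma, inflected form, and a semicolon-separated feature bundle from the UniMorph schema; a minimal reader for that format (the German example entry is schematic):

```python
def parse_unimorph_line(line):
    """Split one UniMorph entry into its lemma, inflected form, and the
    list of features from its semicolon-separated feature bundle."""
    lemma, form, features = line.rstrip("\n").split("\t")
    return lemma, form, features.split(";")

# Schematic entry: German "gehen" -> past-tense 3rd-singular "ging".
lemma, form, feats = parse_unimorph_line("gehen\tging\tV;PST;3;SG")
```

The hierarchical schema extensions described above (for multiple-argument agreement and case stacking) enrich the feature-bundle field, so a reader like this sketch would need to be extended to interpret structured bundles rather than a flat feature list.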