8 research outputs found

    Incorporating word embeddings in unsupervised morphological segmentation

    Get PDF
    We investigate the use of semantic information for morphological segmentation, since words that are derived from each other remain semantically related. We use mathematical models such as the maximum likelihood estimate (MLE) and the maximum a posteriori estimate (MAP), incorporating semantic information obtained from dense word vector representations. Our approach does not require any annotated data, which makes it fully unsupervised; it requires only a small amount of raw data together with pretrained word embeddings for training. The results show that using dense vector representations helps morphological segmentation, especially for low-resource languages. We present results for Turkish, English, and German. Our semantic MLE model outperforms other unsupervised models for Turkish. Our proposed models could also be used for any other low-resource language with concatenative morphology. Published by Cambridge University Press in Natural Language Engineering on 10/07/2020, available online: https://doi.org/10.1017/S1351324920000406. This research was supported by TUBITAK (The Scientific and Technological Research Council of Turkey) with grant number 115E464.
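
    A compact way to see the idea: score a candidate stem/suffix split by combining a unigram MLE term over morpheme frequencies with the cosine similarity between the word's pretrained embedding and its stem's, so that splits whose stems stay semantically close to the full word are preferred. The sketch below is a minimal reading of that combination, not the authors' exact model; the scoring form, the alpha weight, and all names are illustrative.

```python
import math

import numpy as np


def cosine(u, v):
    # cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def split_score(word, cut, morph_counts, total, embeddings, alpha=1.0):
    """Score the segmentation word[:cut] + word[cut:] (hypothetical form).

    Combines a smoothed unigram MLE term over morpheme frequencies with
    a semantic term: the stem of a derived word should stay close to
    the full word in embedding space.
    """
    stem, suffix = word[:cut], word[cut:]
    mle = (math.log((morph_counts.get(stem, 0) + 1) / (total + 2))
           + math.log((morph_counts.get(suffix, 0) + 1) / (total + 2)))
    sem = 0.0
    if word in embeddings and stem in embeddings:
        sem = cosine(embeddings[word], embeddings[stem])
    return mle + alpha * sem


def best_split(word, morph_counts, total, embeddings):
    # try every non-trivial split point; fall back to the whole word
    candidates = [(split_score(word, c, morph_counts, total, embeddings), c)
                  for c in range(1, len(word))]
    if not candidates:
        return word
    _, cut = max(candidates)
    return f"{word[:cut]}+{word[cut:]}"
```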

    Building and Evaluating Open-Vocabulary Language Models

    Get PDF
    Language models have always been a fundamental NLP tool and application. This thesis focuses on open-vocabulary language models, i.e., models that can deal with novel and unknown words at runtime. We propose new ways to construct such models and use them in cross-linguistic evaluations to answer questions of difficulty and language-specificity in modern NLP tools. We start by surveying linguistic background as well as past and present NLP approaches to tokenization and open-vocabulary language modeling (Mielke et al., 2021). Thus equipped, we establish desirable principles for such models, from both an engineering and a linguistic mindset, and hypothesize a model based on the marriage of neural language modeling and Bayesian nonparametrics that handles a truly infinite vocabulary, boasting attractive theoretical properties and mathematical soundness but presenting practical implementation difficulties. As a compromise, we introduce a word-based two-level language model that retains many desirable characteristics while being highly feasible to run (Mielke and Eisner, 2019). Unlike the more dominant approaches, which use characters or subword units as a single tokenization layer, it uses words; its key feature is the ability to generate novel words in context and in isolation. Moving on to evaluation, we ask: how do such models deal with the wide variety of languages of the world? Are some languages inherently more difficult to model? Using simple methods, we show that indeed they are, starting with a small pilot study that suggests typological predictors of difficulty (Cotterell et al., 2018). Thus encouraged, we design a far bigger study with more powerful methodology: a principled and highly feasible evaluation and comparison scheme based again on multi-text likelihood (Mielke et al., 2019). This larger study shows that the earlier conclusion of typological predictors is difficult to substantiate, but it also offers a new insight on the complexity of Translationese. Following that theme, we end by extending this scheme to machine translation models to answer questions that traditional evaluation metrics like BLEU cannot (Bugliarello et al., 2020).
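
    The comparison scheme behind the two difficulty studies reduces to a simple recipe: hold content constant by scoring each language's side of a multi-parallel corpus under that language's open-vocabulary model, then compare total surprisal. Below is a toy sketch of that step, assuming any per-sentence scorer behind a hypothetical model.nll interface; the plain summation is an illustration, not the papers' exact aggregation.

```python
def total_nll(model, sentences):
    """Total negative log-likelihood (e.g., in bits) of one language's
    side of a multi-parallel corpus; model.nll(sentence) stands in for
    any per-sentence open-vocabulary scorer."""
    return sum(model.nll(s) for s in sentences)


def rank_difficulty(models, parallel_corpus):
    """parallel_corpus maps language -> aligned sentences. Since the
    content is held constant across languages, differences in total
    NLL can be read as differences in modeling difficulty."""
    scores = {lang: total_nll(models[lang], sents)
              for lang, sents in parallel_corpus.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```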

    Catching words in a stream of speech: computational simulations of segmenting transcribed child-directed speech

    Get PDF
    The segmentation of continuous speech into lexical units is one of the first skills a child must acquire during language acquisition. This thesis investigates segmentation by means of computational modeling and simulation. Segmentation is harder than it may appear at first sight: children must find words in a continuous stream of speech without having any knowledge of words. Fortunately, experimental studies show that children and adults exploit a number of cues in the input, together with simple strategies that use these cues, to segment speech. Even more interestingly, some of these cues are language-independent, allowing a learner to segment continuous input before knowing a single word. The models proposed in this thesis differ from models in the literature in two important respects. First, they use local strategies, as opposed to global optimization, that exploit cues children are known to use, namely predictability statistics, phonotactics, and lexical stress. Second, these cues are combined by means of an explicit cue-combination model that can easily be extended with additional cues.
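
    One of the language-independent cues above, predictability, turns into a local strategy directly: estimate transitional probabilities between adjacent units and posit a word boundary wherever the probability dips below its neighbors. The sketch below implements that single cue over syllable-like units; the thesis combines several cues through an explicit combination model, which this deliberately omits.

```python
from collections import Counter


def transitional_probs(utterances):
    """Estimate P(next | current) over adjacent units (e.g., syllables)."""
    pair_counts, unit_counts = Counter(), Counter()
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            pair_counts[(a, b)] += 1
            unit_counts[a] += 1
    return {pair: n / unit_counts[pair[0]] for pair, n in pair_counts.items()}


def segment(utt, tp):
    """Insert a boundary at each local minimum of transitional probability."""
    probs = [tp.get((a, b), 0.0) for a, b in zip(utt, utt[1:])]
    words, start = [], 0
    for i in range(1, len(probs) - 1):
        if probs[i] < probs[i - 1] and probs[i] < probs[i + 1]:
            words.append(utt[start:i + 1])
            start = i + 1
    words.append(utt[start:])
    return words


# toy usage with made-up syllable transcriptions
corpus = [["pre", "tty", "ba", "by"], ["ba", "by", "pre", "tty"],
          ["pre", "tty", "do", "ggy"], ["do", "ggy", "ba", "by"]]
tp = transitional_probs(corpus)
print(segment(["pre", "tty", "ba", "by"], tp))  # [['pre', 'tty'], ['ba', 'by']]
```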

    Linguistic Structure as Composition and Perturbation

    No full text
    This paper discusses the problem of learning language from unprocessed text and speech signals, concentrating on the problem of learning a lexicon. In particular, it argues for a representation of language in which linguistic parameters like words are built by perturbing a composition of existing parameters. The power of the representation is demonstrated by several examples in text segmentation and compression, acquisition of a lexicon from raw speech, and the acquisition of mappings between text and artificial representations of meaning.
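
    The compression demonstrations can be made concrete with a standard two-part description length: the bits needed to spell out the lexicon plus the bits needed to encode the corpus as a sequence of lexicon entries, where a chunk earns its place when it lowers the total. The toy below illustrates that framing under made-up costs and text; it is not the paper's specific representation.

```python
import math
from collections import Counter


def description_length(corpus_tokens, lexicon):
    """Two-part code: bits to spell out the lexicon entries, plus bits
    to encode the corpus under the tokens' empirical distribution."""
    char_bits = math.log2(27)  # crude per-character cost (a-z + space); illustrative
    lexicon_cost = sum(len(entry) * char_bits for entry in lexicon)
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    corpus_cost = -sum(n * math.log2(n / total) for n in counts.values())
    return lexicon_cost + corpus_cost


text = "thedogthecatthebird"
# character-only encoding vs. one whose lexicon also stores the chunk "the"
flat = list(text)
chunked = ["the", "d", "o", "g", "the", "c", "a", "t", "the", "b", "i", "r", "d"]
print(description_length(flat, set(flat)))        # ~109 bits
print(description_length(chunked, set(chunked)))  # ~98 bits: "the" pays for itself
```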