
    A computer-assisted approach to the comparison of Mainland Southeast Asian languages

    This cumulative thesis comprises three separate projects that apply a computer-assisted language comparison (CALC) framework to common obstacles in studying the history of Mainland Southeast Asian (MSEA) languages, such as sparse and non-standardized lexical data and inadequate methods for cognate judgment, and to provide caveats for scholars who use Bayesian phylogenetic analysis. The first project provides a format that standardizes the sound inventories, regulates language labels, and clarifies lexical items. This standardized format allows us to merge various forms of raw data. The format also summarizes information to assist linguists in researching the relatedness among words and inferring relationships among languages. The second project focuses on increasing the transparency of lexical data and cognate judgments with regard to compound words. The method enables the annotation of each part of a word with semantic meanings and syntactic features. In addition, four different conversion methods were developed to convert morpheme cognates into word cognates for input into the Bayesian phylogenetic analysis. The third project applies the methods from the first project to create a workflow that merges linguistic data sets and infers a language tree using a Bayesian phylogenetic algorithm. Furthermore, the project addresses the importance of integrating cross-disciplinary studies into historical linguistic research. Finally, the methods we proposed for managing lexical data for MSEA languages are discussed and summarized from six perspectives. The work can be seen as a milestone in reconstructing human prehistory in an area of high linguistic and cultural diversity.
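The morpheme-to-word cognate conversion can be illustrated with a minimal sketch. The abstract does not spell out the four conversion methods, so the "strict" strategy below (two words share a word-level cognate class only if their full sequences of morpheme cognate IDs coincide) is an assumption for illustration; all names and data are invented.

```python
# Illustrative "strict" conversion from morpheme-level cognate classes
# to word-level cognate classes. This is one plausible strategy, not
# necessarily one of the thesis's four methods.

def strict_word_cognates(words):
    """words: dict mapping word form -> sequence of morpheme cognate IDs.
    Returns a dict mapping word form -> word-level cognate class ID."""
    class_ids = {}  # morpheme-ID sequence -> word-level class ID
    result = {}
    for form, morph_classes in words.items():
        key = tuple(morph_classes)
        if key not in class_ids:
            class_ids[key] = len(class_ids) + 1
        result[form] = class_ids[key]
    return result

# Invented toy data: compounds analyzed into morpheme cognate classes.
words = {
    "nit-guaq": (17, 4),  # same morpheme classes as the next form
    "nyi-kua":  (17, 4),  # -> same word-level cognate class
    "nit-long": (17, 9),  # shares only the first morpheme -> new class
}
print(strict_word_cognates(words))
# {'nit-guaq': 1, 'nyi-kua': 1, 'nit-long': 2}
```

Under this strict strategy, partial cognacy (sharing only some morphemes) is discarded; looser strategies could, for instance, count words as cognate when any morpheme class is shared.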

    The potential of automatic word comparison for historical linguistics

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can later be enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks, leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method, Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection, although not perfect, could become an important component of future research in historical linguistics. As part of the GlottoBank Project, this work was supported by the Max Planck Institute for the Science of Human History and the Royal Society of New Zealand Marsden Fund grant 13-UOA-121. This paper was further supported by the DFG research fellowship grant 261553824 “Vertical and lateral aspects of Chinese dialect history” (JML), and the Australian Research Council’s Discovery Projects funding scheme (project number DE120101954, SJG).
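The general shape of automatic cognate detection can be sketched as follows. This toy version uses plain normalized edit distance with an arbitrary cutoff of 0.5; the methods evaluated in the paper (including the Infomap-based one) work on sound-class alignments and similarity-network partitioning, so this illustrates the task, not any specific published method.

```python
# Toy threshold-based cognate detection: two words for the same concept
# are judged cognate when their normalized edit distance is at most a
# cutoff (0.5 here, chosen arbitrarily for illustration).

def edit_distance(a, b):
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def are_cognate(a, b, threshold=0.5):
    return edit_distance(a, b) / max(len(a), len(b)) <= threshold

print(are_cognate("water", "wasser"))  # True  (distance 2/6)
print(are_cognate("water", "agua"))    # False (distance 4/5)
```

Surface similarity alone both over-detects (chance resemblances, borrowings) and under-detects (regular but surface-opaque correspondences), which is why the compared methods operate on sound classes rather than raw strings.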

    Computational Approaches to Historical Language Comparison

    The chapter discusses recently developed computational techniques providing concrete help in addressing various tasks in historical language comparison, focusing specifically on those tasks that are typically subsumed under the framework of the comparative method. These include the proof of language relationship, cognate and correspondence detection, phonological reconstruction and sound law induction, and the reconstruction of evolutionary scenarios.

    Traceback and Chunk-Based Learning

    Recent years have seen increased interest in code-mixing from a usage-based perspective. In usage-based approaches to monolingual language acquisition, a number of methods have been developed that allow for detecting patterns from usage data. In this paper, we evaluate two of those methods with regard to their performance when applied to code-mixing data: the traceback method, as well as the chunk-based learner model. Both methods make it possible to automatically detect patterns in speech data. In doing so, however, they place different theoretical emphases: while traceback focuses on frame-and-slot patterns, the chunk-based learner focuses on chunking processes. Both methods are applied to the code-mixing of a German–English bilingual child between the ages of 2;3 and 3;11. Advantages and disadvantages of both methods are discussed, and the results are interpreted against the background of usage-based approaches.
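The core intuition of the traceback method, testing whether a new utterance can be derived from earlier ones via a frame with one open slot, can be sketched as follows. This is a deliberate simplification (the published procedure also handles multi-word fillers and recursive derivations), and the corpus data are invented.

```python
# Simplified traceback check: a target utterance counts as derivable
# when some earlier utterance supplies a frame that matches it after
# substituting at most one slot filler.

def derivable(target, corpus):
    t = target.split()
    for utterance in corpus:
        u = utterance.split()
        if len(u) != len(t):
            continue
        mismatches = sum(a != b for a, b in zip(u, t))
        if mismatches <= 1:  # identical, or exactly one open slot
            return True
    return False

# Invented child-speech corpus.
corpus = ["I want juice", "I want milk", "where is daddy"]
print(derivable("I want teddy", corpus))   # True: frame "I want X"
print(derivable("give me teddy", corpus))  # False: no matching frame
```

Applied to code-mixing, the same check can reveal frames from one language hosting slot fillers from the other (e.g. "I want X" with a German filler).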

    Ontology verbalization in agglutinating Bantu languages: a study of Runyankore and its generalizability

    Natural Language Generation (NLG) systems have been developed to generate text in multiple domains, including personalized patient information. However, their application is limited in Africa because they generate text in English, yet indigenous languages are still predominantly spoken throughout the continent, especially in rural areas. The existing healthcare NLG systems cannot be reused for Bantu languages due to the complex grammatical structure, nor can the generated text be used in machine translation systems for Bantu languages because they are computationally under-resourced. This research aimed to verbalize ontologies in agglutinating Bantu languages. We had four research objectives: (1) noun pluralization and verb conjugation in Runyankore; (2) Runyankore verbalization patterns for the selected description logic constructors; (3) combining the pluralization, conjugation, and verbalization components to form a Runyankore grammar engine; and (4) generalizing the Runyankore and isiZulu approaches to ontology verbalization to other agglutinating Bantu languages. We used an approach that combines morphology with syntax and semantics to develop a noun pluralizer for Runyankore, and used Context-Free Grammars (CFGs) for verb conjugation. We developed verbalization algorithms for eight constructors in a description logic. We then combined these components into a grammar engine developed as a Protégé5X plugin. The investigation into generalizability used the bootstrap approach, and investigated bootstrapping for languages in the same language zone (intra-zone bootstrappability) and languages across language zones (inter-zone bootstrappability). We obtained verbalization patterns for Luganda and isiXhosa, in the same zones as Runyankore and isiZulu respectively, and chiShona, Kikuyu, and Kinyarwanda from different zones, and used the bootstrap metric that we developed to identify the most efficient source-target bootstrap pair. 
By regrouping Meinhof’s noun class system we were able to eliminate non-determinism during computation, and this led to the development of a generic noun pluralizer. We also showed that CFGs can conjugate verbs in the five additional languages. Finally, we proposed the architecture for an API that could be used to generate text in agglutinating Bantu languages. Our research provides a method for surface realization for an under-resourced and grammatically complex family of languages, Bantu languages. We leave the development of a complete NLG system based on the Runyankore grammar engine and of the API as areas for future work.
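The prefix-swapping core of a Bantu noun pluralizer can be sketched as follows, using the well-known Runyankore class 1/2 and 3/4 prefix pairs (omu-/aba- and omu-/emi-). The sketch also shows the source of the non-determinism mentioned above: the surface prefix omu- belongs to two classes with different plurals, so the noun class must be supplied. The table and function are illustrative only, not the thesis's actual grammar engine.

```python
# Illustrative prefix-swapping pluralizer for a Bantu language.
# Ambiguity: "omu" is the singular prefix of both class 1 (plural aba-)
# and class 3 (plural emi-), so the class must be given explicitly.

PREFIX_PAIRS = {
    1: ("omu", "aba"),  # class 1 -> class 2 (humans)
    3: ("omu", "emi"),  # class 3 -> class 4 (trees, some objects)
}

def pluralize(noun, noun_class):
    singular, plural = PREFIX_PAIRS[noun_class]
    if not noun.startswith(singular):
        raise ValueError(f"{noun!r} lacks the class-{noun_class} prefix")
    return plural + noun[len(singular):]

print(pluralize("omuntu", 1))  # abantu 'people'
print(pluralize("omuti", 3))   # emiti 'trees'
```

Regrouping the class system so that each (prefix, regrouped class) pair determines a unique plural is what removes this look-up ambiguity and enables a generic pluralizer.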

    Automated identification of borrowings in multilingual wordlists

    Although lexical borrowing is an important aspect of language evolution, there have been few attempts to automate the identification of borrowings in lexical datasets. Moreover, none of the solutions which have been proposed so far identify borrowings across multiple languages. This study proposes a new method for the task and tests it on a newly compiled large comparative dataset of 48 South-East Asian languages from Southern China. The method yields very promising results, while being conceptually straightforward and easy to apply. This makes the approach a perfect candidate for computer-assisted exploratory studies on lexical borrowing in contact areas.
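One conceptually simple way to flag borrowing candidates, shown here purely as an illustration and not as the paper's actual method, is to look for words that are too similar across languages from *different* families to be plausibly explained by inheritance. Threshold, family labels, and word forms below are invented.

```python
from itertools import combinations

# Toy borrowing-candidate detector: within one concept, flag word pairs
# from different language families whose normalized edit distance is
# unexpectedly small, since cross-family similarity is not expected
# under inheritance alone.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def borrowing_candidates(entries, threshold=0.35):
    """entries: list of (language, family, word) for a single concept."""
    hits = []
    for (l1, f1, w1), (l2, f2, w2) in combinations(entries, 2):
        if f1 != f2 and edit_distance(w1, w2) / max(len(w1), len(w2)) <= threshold:
            hits.append((l1, l2))
    return hits

# Invented forms for one concept across three families.
concept_tea = [
    ("Mandarin", "Sino-Tibetan", "cha"),
    ("Thai", "Tai-Kadai", "chaa"),      # near-identical across families
    ("Hmong", "Hmong-Mien", "ntsuab"),  # dissimilar -> not flagged
]
print(borrowing_candidates(concept_tea))  # [('Mandarin', 'Thai')]
```

A real detector must additionally separate borrowing from chance resemblance and, crucially, decide the direction of transfer, which simple similarity cannot do.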

    Computer-Assisted Language Comparison in Practice. Tutorials on Computational Approaches to the History and Diversity of Languages. Volume I

    This document summarizes all contributions to the blog "Computer-Assisted Language Comparison in Practice" from 2018; the blog is also available online at https://calc.hypotheses.org.

    Induction of the morphology of natural language: unsupervised morpheme segmentation with application to automatic speech recognition

    In order to develop computer applications that successfully process natural language data (text and speech), one needs good models of the vocabulary and grammar of as many languages as possible. According to standard linguistic theory, words consist of morphemes, which are the smallest individually meaningful elements in a language. Since an immense number of word forms can be constructed by combining a limited set of morphemes, the capability of understanding and producing new word forms depends on knowing which morphemes are involved (e.g., "water, water+s, water+y, water+less, water+less+ness, sea+water"). Morpheme boundaries are not normally marked in text unless they coincide with word boundaries. The main objective of this thesis is to devise a method that discovers the likely locations of the morpheme boundaries in words of any language. The method proposed, called Morfessor, learns a simple model of concatenative morphology (word forming) in an unsupervised manner from plain text. Morfessor is formulated as a Bayesian, probabilistic model. That is, it does not rely on predefined grammatical rules of the language, but makes use of statistical properties of the input text. Morfessor situates itself between two types of existing unsupervised methods: morphology learning vs. word segmentation algorithms. In contrast to existing morphology learning algorithms, Morfessor can handle words consisting of a varying and possibly high number of morphemes. This is a requirement for coping with highly-inflecting and compounding languages, such as Finnish. In contrast to existing word segmentation methods, Morfessor learns a simple grammar that takes into account sequential dependencies, which improves the quality of the proposed segmentations. 
Morfessor is evaluated in two complementary ways in this work: directly, by comparing to linguistic reference morpheme segmentations of Finnish and English words, and indirectly, as a component of a large (or virtually unlimited) vocabulary Finnish speech recognition system. In both cases, Morfessor is shown to outperform state-of-the-art solutions. The linguistic reference segmentations were produced as part of the current work, based on existing linguistic resources. This has resulted in a morphological gold standard, called Hutmegs, containing analyses of a large number of Finnish and English word forms.
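The search step of a Morfessor-style model can be sketched with a small dynamic program that picks the segmentation minimizing total code length, i.e. the sum of -log p(morph) over the chosen morphs. The morph counts below are a hand-made stand-in for the learned lexicon; the actual Morfessor also *learns* the lexicon by iterating this search with model updates in a Bayesian/MDL framework.

```python
import math

# Hand-made stand-in for a learned morph lexicon with frequencies.
MORPH_COUNTS = {"water": 50, "s": 200, "less": 30, "ness": 40, "sea": 20}
TOTAL = sum(MORPH_COUNTS.values())

def cost(morph):
    # Code length of one morph: -log p(morph). Unknown strings are
    # forbidden (infinite cost) in this simplified model.
    if morph not in MORPH_COUNTS:
        return float("inf")
    return -math.log(MORPH_COUNTS[morph] / TOTAL)

def segment(word):
    # best[i] holds (total cost, morph list) for the prefix word[:i].
    best = [(0.0, [])]
    for i in range(1, len(word) + 1):
        options = [(best[j][0] + cost(word[j:i]), best[j][1] + [word[j:i]])
                   for j in range(i)]
        best.append(min(options, key=lambda opt: opt[0]))
    return best[-1][1]

print(segment("waterlessness"))  # ['water', 'less', 'ness']
print(segment("seawaters"))      # ['sea', 'water', 's']
```

This handles words with a varying and possibly high number of morphemes naturally, which is the property the thesis highlights for highly inflecting and compounding languages such as Finnish.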

    Functional connectivity of spoken language processing in early-stage Parkinson’s disease: an MEG study

    Parkinson's disease (PD) is a neurodegenerative disorder, well-known for its motor symptoms; however, it also adversely affects cognitive functions, including language, a highly important human ability. PD pathology is associated, even in the early stage of the disease, with alterations in the functional connectivity within cortico-subcortical circuitry of the basal ganglia as well as within cortical networks. Here, we investigated functional cortical connectivity related to spoken language processing in early-stage PD patients. We employed a patient-friendly, passive, attention-free paradigm to probe neurophysiological correlates of language processing in PD patients without confounds related to active attention and overt motor responses. MEG data were recorded from a group of newly diagnosed PD patients and age-matched healthy controls who were passively presented with spoken word stimuli (action and abstract verbs, as well as grammatically correct and incorrect inflectional forms) while focussing on watching a silent movie. For each of the examined linguistic aspects, a logistic regression classifier was used to classify participants as either PD patients or healthy controls based on functional connectivity within the temporo-fronto-parietal cortical language networks. Classification was successful for action verbs (accuracy = 0.781, p-value = 0.003) and, with lower accuracy, for abstract verbs (accuracy = 0.688, p-value = 0.041) and incorrectly inflected forms (accuracy = 0.648, p-value = 0.021), but not for correctly inflected forms (accuracy = 0.523, p-value = 0.384). Our findings point to quantifiable differences in functional connectivity within the cortical systems underpinning language processing in newly diagnosed PD patients compared to healthy controls, which arise early, in the absence of clinical evidence of deficits in cognitive or general language functions. 
The techniques presented here may aid future work on establishing neurolinguistic markers to objectively and noninvasively identify functional changes in the brain's language networks even before clinical symptoms emerge.
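The classification setup can be sketched with a toy logistic regression separating "patients" from "controls" on synthetic connectivity-like features. Everything below (feature values, group sizes, training scheme) is invented for illustration; the study used MEG-derived connectivity measures and reported cross-validated accuracies, not training accuracy.

```python
import math
import random

# Synthetic data: 20 "patients" with slightly lower connectivity on
# feature 0 than 20 "controls"; feature 1 is uninformative.
random.seed(0)
patients = [[random.gauss(0.3, 0.05), random.gauss(0.5, 0.05)] for _ in range(20)]
controls = [[random.gauss(0.5, 0.05), random.gauss(0.5, 0.05)] for _ in range(20)]
X = patients + controls
y = [1] * 20 + [0] * 20  # 1 = patient, 0 = control

# Plain logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = 1 / (1 + math.exp(-z))          # predicted P(patient)
        grad = p - yi                        # gradient of log-loss wrt z
        w = [wj - lr * grad * xj / len(X) for wj, xj in zip(w, xi)]
        b -= lr * grad / len(X)

pred = [1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0 for x in X]
acc = sum(int(p == t) for p, t in zip(pred, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

In the study itself the interesting outcome is which linguistic conditions (action verbs, abstract verbs, incorrect inflections) yield above-chance classification, i.e. which aspects of the language network carry early disease-related signal.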
