19 research outputs found

    Russian Lexicographic Landscape: a Tale of 12 Dictionaries

    The paper reports a quantitative analysis of 12 Russian dictionaries at three levels: 1) headwords: the size and overlap of word lists, coverage of large corpora, and presence of neologisms; 2) synonyms: the overlap of synsets across dictionaries; 3) definitions: the distribution of definition lengths and sense counts, as well as the textual similarity of same-headword definitions in different dictionaries. The study covers 805,900 dictionary entries, 892,900 definitions, and 84,500 synsets in total. It reveals multiple connections and mutual influences between dictionaries, uncovers differences between modern electronic and traditional printed resources, and suggests directions for developing new lexical semantic resources and improving existing ones.
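The level-1 headword comparison reduces to set operations over word lists. A minimal sketch of such an overlap measure, using Jaccard similarity on toy word lists (the dictionary contents below are illustrative, not the paper's data):

```python
# Sketch: pairwise headword overlap between two dictionary word lists,
# in the spirit of the paper's level-1 (headword) analysis.
# The word lists are illustrative toy data.

def jaccard(a, b):
    """Jaccard similarity of two headword sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a or b else 0.0

dict_a = ["дом", "кот", "идти", "синий"]
dict_b = ["дом", "кот", "бежать"]

overlap = jaccard(dict_a, dict_b)  # 2 shared headwords out of 5 total
```

The same measure applies unchanged to synset overlap at level 2, with synsets encoded as frozensets of member words.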

    Quantitative Analysis of English Vocabulary in the Wiktionaries and WordNet

    The paper presents a quantitative analysis of the English lexicon based on three electronic dictionaries: the English Wiktionary, WordNet, and the Russian Wiktionary. The number of English words and their meanings (senses) is calculated. The distribution of words by part of speech, the proportions of monosemous and polysemous words, and the distribution of words by number of meanings were computed and compared across these dictionaries. The analysis shows that the average polysemy and the number and distribution of word senses follow similar patterns in both expert and collaborative resources, with relatively minor differences.
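The statistics compared in the paper (monosemous vs. polysemous counts, average polysemy, distribution of words by sense count) can be sketched over a toy lexicon mapping each word to its number of senses; the lexicon below is illustrative:

```python
from collections import Counter

# Sketch: polysemy statistics of the kind compared across the three
# dictionaries. The toy word→sense-count lexicon is illustrative.

lexicon = {"bank": 3, "run": 5, "cat": 1, "blue": 2, "ox": 1}

# Distribution of words by number of meanings, e.g. {1: 2, 3: 1, ...}
by_sense_count = Counter(lexicon.values())

monosemous = by_sense_count[1]                      # words with one sense
polysemous = len(lexicon) - monosemous              # words with several
avg_polysemy = sum(lexicon.values()) / len(lexicon)  # mean senses per word
```

Computing these per part of speech only requires keying the lexicon by (word, POS) instead of by word.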

    YaMTG: An Open-Source Heavily Multilingual Translation Graph Extracted from Wiktionaries and Parallel Corpora

    This paper describes YaMTG (Yet another Multilingual Translation Graph), a new open-source, heavily multilingual translation database (over 664 languages represented) built from several sources, namely various wiktionaries and the OPUS parallel corpora (Tiedemann, 2009). We detail the translation extraction process for 21 Wiktionary language editions and provide an evaluation of the translations contained in YaMTG.
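A translation graph of this kind can be represented as an undirected adjacency map whose nodes are (language, word) pairs. A minimal sketch in that spirit (the entries below are illustrative, not YaMTG's actual data or format):

```python
from collections import defaultdict

# Sketch: a multilingual translation graph as an undirected adjacency
# map keyed by (language code, word) nodes. Entries are illustrative.

graph = defaultdict(set)

def add_translation(src, tgt):
    """Record a translation edge in both directions."""
    graph[src].add(tgt)
    graph[tgt].add(src)

add_translation(("en", "dog"), ("fr", "chien"))
add_translation(("en", "dog"), ("de", "Hund"))
add_translation(("fr", "chien"), ("de", "Hund"))

neighbors = graph[("en", "dog")]  # all recorded translations of "dog"
```

Merging edges extracted from multiple wiktionary editions and from parallel corpora then amounts to repeated calls to `add_translation` over one shared graph.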

    Quantitative Analysis of the Vocabulary of the Russian WordNet and the Wiktionaries

    The paper presents a quantitative analysis of the Russian lexicon based on the Russian WordNet thesaurus and two electronic dictionaries: the Russian Wiktionary and the English Wiktionary. The number of Russian words and their meanings (senses) is compared by part of speech. The distribution of words by part of speech, the proportions of monosemous and polysemous words, and the distribution of words by number of meanings were computed and compared across these dictionaries. The analysis of the distribution of words by number of meanings revealed a gap in the English Wiktionary: compared with the Russian Wiktionary, ambiguous Russian words with more than four meanings are absent or underrepresented there. Overall, the analysis shows that the average polysemy and the number and distribution of word senses follow similar patterns in both expert and collaborative resources, with relatively minor differences.
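The reported gap (highly polysemous Russian words missing or underdescribed in the English Wiktionary) can be detected mechanically by comparing sense counts between two word→sense-count maps. A sketch with illustrative toy counts:

```python
# Sketch: find words that are highly polysemous (more than `threshold`
# senses) in one dictionary but absent or given fewer senses in another,
# the kind of gap the paper reports. Sense counts are illustrative.

ru_wikt = {"идти": 8, "дом": 5, "кот": 2}   # Russian Wiktionary (toy)
en_wikt = {"идти": 2, "кот": 2}             # English Wiktionary (toy)

def coverage_gaps(reference, other, threshold=4):
    """Words with more than `threshold` senses in `reference` for which
    `other` records fewer senses (or lacks the word entirely)."""
    return sorted(w for w, n in reference.items()
                  if n > threshold and other.get(w, 0) < n)

gaps = coverage_gaps(ru_wikt, en_wikt)
```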

    Constructing a poor man’s wordnet in a resource-rich world

    In this paper we present a language-independent, fully modular, and automatic approach to bootstrapping a wordnet for a new language by recycling different types of existing language resources, such as machine-readable dictionaries, parallel corpora, and Wikipedia. The approach, which we apply here to Slovene, takes into account monosemous and polysemous words, general and specialised vocabulary, as well as simple and multi-word lexemes. The extracted words are then assigned one or several synset ids by a classifier that relies on several features, including distributional similarity. Finally, we identify and remove highly dubious (literal, synset) pairs based on simple distributional information extracted from a large corpus in an unsupervised way. Automatic, manual, and task-based evaluations show that the resulting resource, the latest version of the Slovene wordnet, is already a valuable source of lexico-semantic information.
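The distributional-similarity feature used to score and filter (literal, synset) pairs can be sketched as cosine similarity between bag-of-words context vectors; the vectors and cutoff below are illustrative stand-ins, not the paper's actual features:

```python
import math

# Sketch: cosine similarity of sparse bag-of-words context vectors,
# the kind of distributional evidence used to score a (literal, synset)
# pair. The context counts here are illustrative.

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

literal_ctx = {"eat": 2, "food": 1}              # contexts of the literal
synset_ctx = {"eat": 1, "food": 1, "drink": 1}   # contexts of synset members

score = cosine(literal_ctx, synset_ctx)  # low scores flag dubious pairs
```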

    Automatic Extraction of Context Labels from the Russian Wiktionary

    A methodology for extracting context labels from internet dictionaries was developed. In accordance with this methodology, experts constructed a mapping table that establishes a correspondence between Russian Wiktionary context labels (385 labels) and English Wiktionary context labels (1,001 labels). As a result, a composite system of context labels (1,096 labels) that includes both dictionaries' labels was constructed. A parser extracting context labels from the Russian Wiktionary was developed. The parser can recognize and extract both known and new context labels, abbreviations, and comments placed before the definition in Wiktionary articles; a notable feature is the large number of context labels known in advance (385 for the Russian Wiktionary). The parser generated a database of the machine-readable Russian Wiktionary that includes context labels. An evaluation of the numerical parameters of context labels in the Russian Wiktionary was performed: (1) there are 133,000 definitions with context labels and comments; (2) one and a half thousand definitions are supplied with regional labels; (3) the number of labeled definitions was calculated for each knowledge domain. This paper contributes to computational lexicography by setting out, for the first time, an analysis of the numerical parameters of context labels in a large dictionary (500,000 entries).
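The core of such a parser is peeling known abbreviated labels off the front of a definition line. A simplified sketch (the label inventory and line format below are illustrative; the real parser works on Wiktionary wiki markup and knows 385 labels):

```python
import re

# Sketch: extract context labels (e.g. "разг.", "устар.") from the
# start of a definition, roughly what the described parser does.
# The label set and plain-text line format are simplified illustrations.

KNOWN_LABELS = {"разг.", "устар.", "книжн."}

def split_labels(definition):
    """Return (list of leading labels, remaining definition text)."""
    labels = []
    rest = definition.strip()
    while True:
        m = re.match(r"([а-яё]+\.)\s*", rest)
        if m and m.group(1) in KNOWN_LABELS:
            labels.append(m.group(1))
            rest = rest[m.end():]
        else:
            return labels, rest

labels, text = split_labels("разг. устар. очень давно")
```

Counting labeled definitions per knowledge domain, as in the paper's evaluation, is then a matter of tallying the extracted labels over all entries.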

    Wiktionary: The Metalexicographic and the Natural Language Processing Perspective

    Dictionaries are the main reference works for our understanding of language. They are used by humans and likewise by computational methods. So far, the compilation of dictionaries has almost exclusively been the profession of expert lexicographers. The ease of collaboration on the Web and the rising initiatives for collecting open-licensed knowledge, such as in Wikipedia, have given rise to a new type of dictionary that is voluntarily created by large communities of Web users. This collaborative construction approach presents a new paradigm for lexicography that poses new research questions to dictionary research on the one hand and provides a very valuable knowledge source for natural language processing applications on the other. The subject of our research is Wiktionary, currently the largest collaboratively constructed dictionary project.
    In the first part of this thesis, we study Wiktionary from the metalexicographic perspective. Metalexicography is the scientific study of lexicography, including the analysis and criticism of dictionaries and lexicographic processes. To this end, we discuss three contributions related to this area of research: (i) We first provide a detailed analysis of Wiktionary and its various language editions and dictionary structures. (ii) We then analyze the collaborative construction process of Wiktionary. Our results show that the traditional phases of the lexicographic process do not apply well to Wiktionary, which is why we propose a novel process description based on the frequent and continual revision and discussion of the dictionary articles and the lexicographic instructions. (iii) We perform a large-scale quantitative comparison of Wiktionary and a number of other dictionaries regarding the covered languages, lexical entries, word senses, pragmatic labels, lexical relations, and translations. We conclude the metalexicographic perspective by finding that the collaborative Wiktionary is not an appropriate replacement for expert-built dictionaries due to its inconsistencies, quality flaws, one-size-fits-all approach, and strong dependence on expert-built dictionaries. However, Wiktionary's rapid and continual growth, its high coverage of languages, newly coined words, domain-specific vocabulary, and non-standard language varieties, as well as the kind of evidence based on the authors' intuition, provide promising opportunities for both lexicography and natural language processing. In particular, we find that Wiktionary and expert-built wordnets and thesauri contain largely complementary entries.
    In the second part of the thesis, we study Wiktionary from the natural language processing perspective with the aim of making its linguistic knowledge available for computational applications. Such applications require vast amounts of structured data of high quality. Expert-built resources have been found to suffer from insufficient coverage and high construction and maintenance costs, whereas fully automatic extraction from corpora or the Web often yields resources of limited quality. Collaboratively built encyclopedias present a viable solution, but do not cover well the linguistically oriented knowledge found in dictionaries. That is why we propose extracting linguistic knowledge from Wiktionary, which we achieve through three main contributions: (i) We propose the novel multilingual ontology OntoWiktionary, created by extracting and harmonizing the weakly structured dictionary articles in Wiktionary. A particular challenge in this process is the ambiguity of semantic relations and translations, which we resolve by automatic word sense disambiguation methods. (ii) We automatically align Wiktionary with WordNet 3.0 at the word sense level. The largely complementary information from the two dictionaries yields an aligned resource with higher coverage and an enriched representation of word senses. (iii) We represent Wiktionary according to the ISO standard Lexical Markup Framework, which we adapt to the peculiarities of collaborative dictionaries. This standardized representation is of great importance for fostering the interoperability of resources and hence the dissemination of Wiktionary-based research. To this end, our work presents a foundational step towards the large-scale integrated resource UBY, which facilitates unified access to a number of standardized dictionaries by means of a shared web interface for human users and an application programming interface for natural language processing applications. A user can, in particular, switch between and combine information from Wiktionary and other dictionaries without completely changing the software. Our final resource and the accompanying datasets and software are publicly available and can be employed for multiple natural language processing applications. It particularly fills the gap between the small expert-built wordnets and the large amount of encyclopedic knowledge in Wikipedia. We provide a survey of previous works utilizing Wiktionary, and we exemplify the usefulness of our work in two case studies on measuring verb similarity and detecting cross-lingual marketing blunders, which make use of our Wiktionary-based resource and the results of our metalexicographic study. We conclude the thesis by emphasizing the usefulness of collaborative dictionaries when combined with expert-built resources, which bears much unused potential.
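Word-sense alignment of the kind described in contribution (ii) is commonly approached via gloss similarity: a sense from one dictionary is matched to the candidate sense whose definition text it shares the most with. A toy sketch of that idea (the glosses and sense ids are illustrative, and word-overlap is a simple stand-in for the thesis's actual alignment features):

```python
# Sketch: align a dictionary sense to a WordNet-style sense inventory
# by gloss word overlap, a simple stand-in for sense alignment.
# Glosses and sense ids below are illustrative.

def gloss_overlap(g1, g2):
    """Number of word types shared by two glosses."""
    return len(set(g1.lower().split()) & set(g2.lower().split()))

def best_alignment(sense_gloss, candidate_glosses):
    """Pick the candidate sense id whose gloss shares most words."""
    return max(candidate_glosses,
               key=lambda cid: gloss_overlap(sense_gloss,
                                             candidate_glosses[cid]))

wordnet_glosses = {
    "bank.n.01": "sloping land beside a body of water",
    "bank.n.02": "a financial institution that accepts deposits",
}
aligned = best_alignment("an institution for deposits and loans",
                         wordnet_glosses)
```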

    Computational Etymology: Word Formation and Origins

    While there are over seven thousand languages in the world, substantial language technologies exist for only a small percentage of them. The large majority of world languages do not have enough bilingual or even monolingual data for developing technologies like machine translation with current approaches. The computational study and modeling of word origins and word formation is a key step in developing comprehensive translation dictionaries for low-resource languages. This dissertation presents novel foundational work in computational etymology, a promising field that this work helps to pioneer. The dissertation also includes novel models of core vocabulary, of dictionary information distillation, and of the diverse linguistic processes of word formation and concept realization between languages, including compounding, derivation, sense extension, borrowing, and historical cognate relationships, using statistical and neural models trained at the unprecedented scale of thousands of languages. Collectively, these are important components in tackling the grand challenges of universal translation, endangered language documentation and revitalization, and supporting technologies for speakers of thousands of underserved languages.
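One classic signal for the cognate and borrowing relationships mentioned above is orthographic similarity, often measured with normalized edit distance. A minimal sketch of that baseline (the word pair is illustrative; the dissertation's actual models are statistical and neural, not this heuristic):

```python
# Sketch: normalized Levenshtein similarity as weak evidence for a
# cognate or borrowing relationship between two word forms.
# The German/English pair below is an illustrative example.

def edit_distance(a, b):
    """Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_similarity(a, b):
    """1.0 for identical strings, 0.0 for maximally different ones."""
    return 1 - edit_distance(a, b) / max(len(a), len(b))

sim = normalized_similarity("nacht", "night")  # German vs. English
```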