52 research outputs found

    Exploration of Corpus Augmentation Approach for English-Hindi Bidirectional Statistical Machine Translation System

    Although a considerable amount of Statistical Machine Translation (SMT) research has been carried out for the English-Hindi language pair, no effort has been made to standardize the dataset. Each study uses a different dataset, different parameters, and a different number of sentences in the various phases of translation, resulting in varied translation output. Consequently, comparing these models, understanding their results, gaining insight into corpus behaviour across them, and reproducing their reported results all become tedious. This necessitates standardizing the dataset and identifying common parameters for model development. The main contribution of this paper is an approach to standardize the dataset and to identify the parameter combination that gives the best performance. It also investigates a novel corpus augmentation approach to improve the translation quality of an English-Hindi bidirectional statistical machine translation system. The approach works well under scarce resources without incorporating any external parallel corpus for the languages involved. The experiments are carried out using the open-source phrase-based toolkit Moses, on the Indian Languages Corpora Initiative (ILCI) Hindi-English tourism corpus. Even with this limited dataset, considerable improvement is achieved using the corpus augmentation approach for the English-Hindi bidirectional SMT system.
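    The abstract does not spell out the augmentation procedure, so the following is only a minimal sketch of one self-contained strategy consistent with its description (creating additional aligned pairs from the existing corpus, with no external data): splitting sentence pairs at punctuation when both sides split into the same number of segments. The file names `corpus.en`/`corpus.hi` and the comma heuristic are assumptions, not the authors' method.

```python
# Hypothetical sketch: grow a parallel corpus using only its own data by
# splitting aligned sentence pairs at commas when both sides contain the
# same number of comma-separated chunks. File names are illustrative.

def split_pair(src, tgt, sep=","):
    """Yield shorter aligned segments when both sides split into the same
    number of chunks; the original pair is always kept as well."""
    src_parts = [p.strip() for p in src.split(sep)]
    tgt_parts = [p.strip() for p in tgt.split(sep)]
    if len(src_parts) == len(tgt_parts) > 1:
        yield from zip(src_parts, tgt_parts)
    yield src, tgt

with open("corpus.en") as f_en, open("corpus.hi") as f_hi, \
     open("augmented.en", "w") as out_en, open("augmented.hi", "w") as out_hi:
    for en_line, hi_line in zip(f_en, f_hi):
        for en_seg, hi_seg in split_pair(en_line.strip(), hi_line.strip()):
            if en_seg and hi_seg:
                out_en.write(en_seg + "\n")
                out_hi.write(hi_seg + "\n")
```

    The augmented files would then feed the usual Moses pipeline (tokenization, truecasing, alignment, phrase extraction) in place of the original corpus.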

    Automatic Extraction of Context Labels from the Russian Wiktionary

    A methodology for extracting context labels from internet dictionaries was developed. Following this methodology, experts constructed a mapping table that establishes a one-to-one correspondence between Russian Wiktionary context labels (385 labels) and English Wiktionary context labels (1001 labels). As a result, a composite system of context labels (1096 labels) covering both dictionaries was constructed. A parser extracting context labels from the Russian Wiktionary was developed; it recognizes and extracts both known and new context labels, abbreviations, and comments placed before the definition in Wiktionary articles. One notable feature of this parser is the large number of context labels known to it in advance (385 for the Russian Wiktionary). The parser generated a database of the machine-readable Russian Wiktionary that includes context labels. An evaluation of the numerical parameters of context labels in the Russian Wiktionary was performed: (1) there are 133,000 definitions with context labels and comments, (2) one and a half thousand definitions carry regional labels, and (3) the number of labelled definitions was calculated for each knowledge domain. This paper is an original contribution to computational lexicography, setting out for the first time an analysis of the numerical parameters of context labels in a large dictionary (500,000 entries).
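    As an illustration of the kind of pattern matching such a parser performs, here is a minimal sketch, assuming labels appear as `{{...}}` templates at the start of a numbered definition line in the wikitext; the tiny label set and the example line are illustrative, not the actual parser or inventory.

```python
import re

# A tiny illustrative subset of the 385 known Russian Wiktionary labels.
KNOWN_LABELS = {"разг.", "устар.", "книжн.", "рег."}

# A definition line typically starts with "#" followed by zero or more
# {{label}} templates before the gloss itself.
LABEL_RE = re.compile(r"^\s*#\s*((?:\{\{[^{}]+\}\}\s*)+)(.*)$")

def extract_labels(line):
    """Return (labels, gloss). Templates not in KNOWN_LABELS are still
    returned, so the parser can also surface new labels, as described above."""
    m = LABEL_RE.match(line)
    if not m:
        return [], line.lstrip("# ").strip()
    labels = [t.strip() for t in re.findall(r"\{\{([^{}|]+)", m.group(1))]
    return labels, m.group(2).strip()

labels, gloss = extract_labels("# {{разг.}} {{устар.}} пример значения")
print(labels, "->", gloss)   # ['разг.', 'устар.'] -> пример значения
```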

    Code Mixed Cross Script Factoid Question Classification - A Deep Learning Approach

    Before the advent of the Internet era, code-mixing was used mainly in spoken form. However, with the rise of popular informal networking platforms such as Facebook, Twitter, and Instagram, code-mixing is increasingly used in written form on social media. User-generated social media content is becoming an increasingly important resource in applied linguistics, and recent trends in social media usage have led to a proliferation of studies on such content. Multilingual social media users often write native-language content in a non-native script (cross-script). Recently, Banerjee et al. [9] introduced the code-mixed cross-script question answering research problem and reported that the ever-increasing volume of social media content could serve as a potential digital resource for building question answering systems for less-computerized languages. Question classification is a core task in question answering in which each question is assigned one or more classes denoting the expected answer type(s). In this work, we address question classification as part of the code-mixed cross-script question answering problem. We combine a deep learning framework with feature engineering and improve the state-of-the-art question classification accuracy for code-mixed cross-script questions by over 4%. The work of the third author was partially supported by the SomEMBED TIN2015-71147-C2-1-P MINECO research project. Banerjee, S.; Kumar Naskar, S.; Rosso, P.; Bandyopadhyay, S. (2018). Code Mixed Cross Script Factoid Question Classification - A Deep Learning Approach. Journal of Intelligent & Fuzzy Systems, 34(5), 2959-2969. https://doi.org/10.3233/JIFS-169481
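    The abstract names the ingredients (a deep learning framework plus engineered features) but not the exact architecture, so the following is a hedged sketch of one common way to combine the two: an LSTM over word embeddings concatenated with a hand-crafted feature vector before a linear classifier over answer-type classes. All sizes and the feature set are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: combine a learned sequence representation with
# engineered features for question classification.

class QuestionClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=64,
                 n_features=10, n_classes=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden + n_features, n_classes)

    def forward(self, token_ids, features):
        embedded = self.emb(token_ids)        # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(embedded)     # h_n: (1, batch, hidden)
        combined = torch.cat([h_n[-1], features], dim=1)
        return self.out(combined)             # raw logits over classes

model = QuestionClassifier()
tokens = torch.randint(1, 5000, (2, 12))      # two toy questions, 12 tokens
feats = torch.rand(2, 10)                     # two engineered feature vectors
print(model(tokens, feats).shape)             # torch.Size([2, 9])
```

    The engineered features might encode, for example, question-word identity or script/language statistics of the code-mixed tokens; the paper's actual feature set may differ.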

    SenseDefs: a multilingual corpus of semantically annotated textual definitions

    Get PDF
    Definitional knowledge has proved essential in various Natural Language Processing tasks and applications, especially when information at the level of word senses is exploited. However, the few sense-annotated corpora of textual definitions available to date are of limited size, mainly because annotating a wide variety of word senses and entity mentions at scale is expensive and time-consuming. In this paper we present SenseDefs, a large-scale, high-quality corpus of disambiguated definitions (or glosses) in multiple languages, comprising sense annotations of both concepts and named entities from a wide-coverage unified sense inventory. Our approach to constructing and disambiguating this corpus builds upon the structure of a large multilingual semantic network and a state-of-the-art disambiguation system: first, we gather complementary information from equivalent definitions across different languages to provide context for disambiguation; then we refine the disambiguation output with a distributional approach based on semantic similarity. As a result, we obtain a multilingual corpus of textual definitions featuring over 38 million definitions in 263 languages, which we publicly release to the research community. We assess the quality of SenseDefs' sense annotations both intrinsically and extrinsically, on Open Information Extraction and Sense Clustering tasks.
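    The distributional refinement step can be pictured as follows: among the candidate senses proposed by the disambiguation system, prefer the one whose embedding is most similar to the embedding of the definition's context. Below is a minimal sketch of that idea with toy vectors standing in for real sense and context embeddings; it is not the SenseDefs implementation.

```python
import numpy as np

# Minimal sketch: rank candidate senses by cosine similarity between each
# sense embedding and the embedding of the definition's context. Vectors
# here are toy stand-ins for real distributional embeddings.

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def refine(context_vec, candidates):
    """candidates: dict mapping sense id -> embedding vector."""
    return max(candidates, key=lambda s: cosine(context_vec, candidates[s]))

context = np.array([0.9, 0.1, 0.0])
candidates = {
    "bank#finance": np.array([0.8, 0.2, 0.1]),
    "bank#river":   np.array([0.1, 0.0, 0.9]),
}
print(refine(context, candidates))  # bank#finance
```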