
    Creating a Korean Engineering Academic Vocabulary List (KEAVL): Computational Approach

    With a growing number of international students in South Korea, the need for materials to study Korean for academic purposes is becoming increasingly pressing. Engineering colleges in Korea attract the largest number of international students (Korean National Institute for International Education, 2018). However, although technical vocabulary lists exist for some engineering sub-fields, no list of vocabulary common to the majority of engineering sub-fields has yet been built. This study therefore aimed to create a list of Korean academic engineering vocabulary for non-native Korean speakers that may help future or first-year engineering students and engineers working in Korea. To compile this list, a corpus of Korean textbooks and research articles from 12 major engineering sub-fields, the Corpus of Korean Engineering Academic Texts (CKEAT), was assembled. To analyze the corpus and compile the preliminary list, I designed a Python-based tool called KWordList. KWordList lemmatizes all words in the corpus, excluding the general Korean vocabulary included in the Korean Learner’s List (Jo, 2003). For the remaining words, KWordList calculates the range, frequency, and dispersion (in this study, Gries’s (2008) deviation of proportions, or DP) and excludes words that do not pass the study’s criteria (range ≥ 6, frequency ≥ 100, DP ≤ 0.5). The final version of the list, the Korean Engineering Academic Vocabulary List (KEAVL), includes 830 lemmas (318 intermediate and 512 advanced). For each word, the collocations that occur more than 30 times in the corpus are provided. A comparison of the coverage of the Korean Academic Vocabulary List (Shin, 2004) and KEAVL on CKEAT showed that KEAVL covers more lemmas in the corpus.
Moreover, only 313 lemmas from the Korean Academic Vocabulary List (Shin, 2004) passed the study’s criteria. KEAVL may therefore be more efficient for engineering students’ vocabulary training than the Korean Academic Vocabulary List and may be used to develop engineering Korean teaching materials and curricula. The KWordList program written for the study is openly available (https://github.com/HelgaKr/KWordList) and can be used by other researchers, teachers, and even students.
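As an illustration of the filtering described above (not the actual KWordList implementation, which should be consulted at the repository), Gries’s (2008) DP measure and the study’s inclusion criteria can be sketched in Python; the function names and the per-sub-field counts are hypothetical:

```python
def deviation_of_proportions(counts, part_sizes):
    """Gries's (2008) DP: 0 = perfectly even dispersion, ~1 = maximally clumped."""
    total = sum(counts)
    corpus_size = sum(part_sizes)
    # v_i: observed proportion of the word's tokens falling in corpus part i
    # s_i: expected proportion, i.e. the relative size of corpus part i
    return 0.5 * sum(abs(c / total - s / corpus_size)
                     for c, s in zip(counts, part_sizes))

def passes_criteria(counts, part_sizes, min_range=6, min_freq=100, max_dp=0.5):
    """Apply the study's criteria to one lemma's counts across the 12 sub-fields."""
    word_range = sum(1 for c in counts if c > 0)  # sub-fields the lemma occurs in
    frequency = sum(counts)
    dp = deviation_of_proportions(counts, part_sizes)
    return word_range >= min_range and frequency >= min_freq and dp <= max_dp
```

A lemma spread evenly across all 12 sub-fields yields DP near 0 and passes; one concentrated in a single sub-field yields DP near 1 and is excluded even if frequent.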

    BlogForever D3.3: Development of the Digital Rights Management Policy

    This report presents a set of recommended practices and approaches that a future BlogForever repository can use to develop a digital rights management policy. The report outlines core legal aspects of digital rights that might need consideration in developing policies, and the particular challenges that arise in relation to web archives and blog archives. These issues are discussed in the context of the digital information life cycle and the steps that might be taken within the workflow of the BlogForever platform to facilitate the gathering and management of digital rights information. Further, reports on interviews with experts in the field highlight current perspectives on rights management and provide empirical support for the recommendations that have been put forward.

    The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection

    The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines. Comment: Presented at SIGMORPHON 2019.
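The abstract mentions Levenshtein distance alongside exact-match accuracy as an evaluation metric for inflection predictions. As a sketch of what that metric computes (a standard dynamic-programming implementation, not the shared task’s official scoring script):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert/delete/substitute)
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to the empty string
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # delete ca
                            curr[j - 1] + 1,           # insert cb
                            prev[j - 1] + (ca != cb))) # substitute ca -> cb
        prev = curr
    return prev[-1]
```

A predicted inflected form scores 0 when it exactly matches the gold form; small distances indicate near-misses that accuracy alone would count as total failures.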

    A review of NLIDB with deep learning: findings, challenges and open issues

    Relational databases store massive amounts of data, and knowledge of a structured query language (SQL) is a prerequisite for accessing it. This is not feasible for all non-technical personnel, creating the need for systems that translate text to SQL queries automatically rather than leaving the task to the user. The text-to-SQL task is also important because of its economic and industrial value. A Natural Language Interface to Database (NLIDB) is a system that supports the text-to-SQL task. Developing NLIDB systems is a long-standing problem; previously they were built on domain-specific ontologies via pipelined methods. Recently, a growing variety of deep learning ideas and techniques has brought the area back to attention, and end-to-end deep learning models are now being proposed for the task. Publicly available datasets are used to evaluate contributions, making comparison convenient. In this paper, we review current work, summarize research trends, and highlight challenging issues of NLIDB with deep learning models. We discuss the importance of datasets, prediction-model approaches, and open challenges. In addition, methods and techniques are summarized, along with their influence on the overall structure and performance of NLIDB systems. This paper can help future researchers start with prior knowledge of findings and challenges in NLIDB with deep learning approaches.
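To make the text-to-SQL task concrete, here is a toy rule-based mapper over a single hypothetical table `employees(name, salary, dept)`. It stands in for the old pipelined, domain-specific approach the abstract contrasts with; the deep learning systems surveyed learn this mapping end to end rather than from hand-written patterns:

```python
import re

def text_to_sql(question: str) -> str:
    """Toy rule-based NLIDB over a hypothetical employees(name, salary, dept) table."""
    q = question.lower()
    # Pattern 1: numeric comparison on salary
    m = re.search(r"salary (?:greater|higher|more) than (\d+)", q)
    if m:
        return f"SELECT name FROM employees WHERE salary > {m.group(1)};"
    # Pattern 2: filter by department
    m = re.search(r"work(?:s|ing)? in (?:the )?(\w+)", q)
    if m:
        return f"SELECT name FROM employees WHERE dept = '{m.group(1)}';"
    # Fallback when no pattern matches
    return "SELECT * FROM employees;"
```

Every new question shape requires a new hand-written rule, which is exactly the brittleness that motivates the end-to-end neural models reviewed in the paper.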

    Text-image synergy for multimodal retrieval and annotation

    Text and images are the two most common data modalities found on the Internet. Understanding the synergy between text and images, that is, seamlessly analyzing information from these modalities, may be trivial for humans but is challenging for software systems. In this dissertation we study problems where deciphering text-image synergy is crucial for finding solutions. We propose methods and ideas that establish semantic connections between text and images in multimodal contents, and empirically show their effectiveness in four interconnected problems:
    • Image Retrieval. Whether images are retrieved by text-based queries depends strongly on whether the text near an image matches the query. Images without textual context, or even with thematically fitting context but no direct keyword match with the query, often cannot be found. As a remedy, we propose combining three kinds of information: visual information (in the form of automatically generated image descriptions), textual information (keywords from previous search queries), and commonsense knowledge.
    • Image Tag Refinement. Object recognition by computer vision frequently produces false detections and incoherences, yet correct identification of image content is an important prerequisite for retrieving images via textual queries. To reduce the error rate of object recognition, we propose incorporating commonsense knowledge: additional image annotations that common sense deems thematically plausible can eliminate many erroneous and incoherent detections.
    • Image-Text Alignment. On web pages combining text and images (such as news sites, blog posts, and social media articles), images are usually placed at semantically meaningful positions in the text flow. We exploit this to propose a framework that selects relevant images and associates them with the appropriate passages of a text.
    • Image Captioning. Images that accompany multimodal content to improve the readability of texts typically have captions that fit the context of the surrounding text. We propose incorporating this context when automatically generating captions, whereas conventionally only the image itself is analyzed; we introduce context-aware image captioning.
    Our promising results and observations open up interesting avenues for future research involving text-image data understanding.

    Development and Design of Deep Learning-based Parts-of-Speech Tagging System for Azerbaijani language

    Part-of-speech (POS) tagging, also referred to as word-class disambiguation, is one of the prerequisite techniques used in the pre-processing stage of most natural language processing (NLP) pipelines. Using it as a preliminary step, NLP software such as chatbots, translation engines, and speech recognition systems assigns a part of speech to each word in the given data in order to identify its grammatical category and thus more easily decipher its meaning. This thesis presents a novel approach to clarifying word context for the Azerbaijani language by training a deep learning-based automatic POS tagger on a clean (manually annotated) dataset. Azerbaijani is a member of the Turkic family and an agglutinative language. In contrast to work on other languages, recent studies of POS taggers for Azerbaijani have been unable to deliver state-of-the-art accuracy. This thesis therefore investigates how deep learning architectures such as simple recurrent neural networks (RNN), long short-term memory (LSTM), bi-directional long short-term memory (Bi-LSTM), and gated recurrent units (GRU) might enhance POS tagging for Azerbaijani.
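For context on what the neural taggers above must improve upon, a classic non-neural baseline is a most-frequent-tag tagger with a suffix back-off, which is relevant for an agglutinative language where suffixes carry strong POS cues. This is a generic sketch with hypothetical function names and toy data, not the thesis’s system or an Azerbaijani-accurate lexicon:

```python
from collections import Counter, defaultdict

def train_baseline(tagged_sentences):
    """Count tag frequencies per word and per 2-character suffix."""
    word_tags = defaultdict(Counter)
    suffix_tags = defaultdict(Counter)  # back-off for unseen words
    for sentence in tagged_sentences:
        for word, tag in sentence:
            word_tags[word][tag] += 1
            suffix_tags[word[-2:]][tag] += 1
    return word_tags, suffix_tags

def tag(sentence, word_tags, suffix_tags, default="NOUN"):
    """Assign each word its most frequent tag, backing off to suffix, then default."""
    out = []
    for word in sentence:
        if word in word_tags:
            out.append(word_tags[word].most_common(1)[0][0])
        elif word[-2:] in suffix_tags:
            out.append(suffix_tags[word[-2:]].most_common(1)[0][0])
        else:
            out.append(default)
    return out
```

Unlike the RNN/LSTM/Bi-LSTM/GRU models studied in the thesis, this baseline ignores sentence context entirely, which is precisely the limitation recurrent architectures address.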

    The Future of Information Sciences : INFuture2009 : Digital Resources and Knowledge Sharing


    Neural Machine Translation for Code Generation

    Neural machine translation (NMT) methods developed for natural language processing have been shown to be highly successful in automating translation from one natural language to another. Recently, these NMT methods have been adapted to the generation of program code. In NMT for code generation, the task is to generate output source code that satisfies constraints expressed in the input. In the literature, a variety of different input scenarios have been explored, including generating code based on natural language description, lower-level representations such as binary or assembly (neural decompilation), partial representations of source code (code completion and repair), and source code in another language (code translation). In this paper we survey the NMT for code generation literature, cataloging the variety of methods that have been explored according to input and output representations, model architectures, optimization techniques used, data sets, and evaluation methods. We discuss the limitations of existing methods and future research directions. Comment: 33 pages, 1 figure.