
    Modeling Target-Side Inflection in Neural Machine Translation

    NMT systems have problems with large vocabulary sizes. Byte-pair encoding (BPE) is a popular approach to this problem, but while BPE allows the system to generate any target-side word, it does not enable effective generalization over the rich vocabularies of morphologically rich languages with strong inflectional phenomena. We introduce a simple approach to overcome this problem: we train the system to produce the lemma of a word and its morphologically rich POS tag, followed by a deterministic generation step. We apply this strategy to English-Czech and English-German translation, obtaining improvements in both settings. We furthermore show that the improvement is not due only to adding explicit morphological information.
    Comment: Accepted as a research paper at WMT17. (Updated version with corrected references.)
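    The two-step scheme described above can be sketched in a few lines. This is not the paper's code: the lexicon, tag names, and backoff below are hypothetical illustrations of predicting a (lemma, tag) pair and then inflecting it deterministically.

```python
# Hypothetical inflection lexicon mapping (lemma, tag) -> surface form.
# Tag strings are invented for illustration only.
LEXICON = {
    ("Haus", "NN.Neut.Sg.Nom"): "Haus",
    ("Haus", "NN.Neut.Pl.Nom"): "Häuser",
    ("gehen", "VV.1.Sg.Pres"): "gehe",
}

def generate(lemma_tag_pairs):
    """Deterministic generation step: look up each (lemma, tag) pair.

    Falls back to the bare lemma when the pair is unseen, mirroring the
    need for some backoff strategy in a real system.
    """
    return [LEXICON.get(pair, pair[0]) for pair in lemma_tag_pairs]

# In a real system the NMT decoder would emit the lemmas and tags;
# here they are hard-coded.
predicted = [("Haus", "NN.Neut.Pl.Nom"), ("gehen", "VV.1.Sg.Pres")]
print(generate(predicted))  # ['Häuser', 'gehe']
```

    Because the final step is a deterministic lookup, the open-ended inflected vocabulary is reduced to the (much smaller) lemma and tag vocabularies that the neural model must learn.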

    End-to-end named entity recognition for spoken Finnish

    Named entity recognition is a natural language processing task in which the system tries to find named entities and classify them into predefined categories. The categories can vary depending on the domain in which they are used, but some of the most common include person, location, organization, date, and product. Named entity recognition is an integral part of larger natural language processing tasks, such as information retrieval, text summarization, machine translation, and question answering. Named entity recognition is difficult due to the lack of annotated data for certain languages or domains. Named entity ambiguity is another challenging aspect: oftentimes, a word can represent a person, organization, product, or any other category, depending on the context it appears in. Spoken data, such as the output of a speech recognition system, poses additional challenges for a named entity recognition system. Named entities are often capitalized, and systems learn to rely on capitalization to detect them; however, capitalization is absent from speech recognition output. The standard way of doing named entity recognition from speech is a pipeline of two systems: first, a speech recognition system transcribes the speech and generates the transcripts, after which a named entity recognition system annotates the transcripts with named entities. Since the speech recognition system is not perfect and makes errors, those errors are propagated to the named entity recognition system, which has difficulty recovering from them. In this thesis, we present two approaches to named entity recognition from Finnish speech in an end-to-end manner, in which a single system generates both the transcripts and the annotations. We explore the strengths and weaknesses of both approaches and see how they compare to the standard pipeline approach.
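    The capitalization problem described above can be made concrete with a toy contrast. This is not the thesis code: the capitalization-based tagger below is a deliberately naive stand-in for a text NER model trained on written (capitalized) data, and the inline-tag format is one plausible end-to-end output representation.

```python
def naive_ner(tokens):
    """Tag capitalized tokens as entities, everything else as 'O'.

    A stand-in for a written-text NER model that has learned to rely
    on capitalization cues.
    """
    return [(t, "ENT" if t[:1].isupper() else "O") for t in tokens]

written = "Sanna Marin visited Helsinki".split()
asr_out = "sanna marin visited helsinki".split()  # ASR drops casing

print(naive_ner(written))  # entities detected via capitalization
print(naive_ner(asr_out))  # every token tagged 'O': the cue is gone

# An end-to-end system instead emits the transcript with inline entity
# tags directly, so it never depends on capitalization in an
# intermediate transcript (tag names here are illustrative):
e2e_output = "<PER> sanna marin </PER> visited <LOC> helsinki </LOC>"
```

    The same contrast explains the error-propagation issue: in the pipeline, a misrecognized entity word reaches the tagger already corrupted, whereas an end-to-end model can condition its tags on the audio itself.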

    Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification

    We present a novel language-driven ordering alignment method for ordinal classification. The labels in ordinal classification carry additional ordering relations, making them prone to overfitting when relying solely on training data. Recent developments in pre-trained vision-language models inspire us to leverage the rich ordinal priors in human language by converting the original task into a vision-language alignment task. Consequently, we propose L2RCLIP, which fully utilizes the language priors from two perspectives. First, we introduce a complementary prompt tuning technique called RankFormer, designed to enhance the ordering relation of the original rank prompts; it employs token-level attention with residual-style prompt blending in the word embedding space. Second, to further incorporate language priors, we revisit the approximate bound optimization of the vanilla cross-entropy loss and restructure it within the cross-modal embedding space. We then propose a cross-modal ordinal pairwise loss to refine the CLIP feature space, where texts and images maintain both semantic alignment and ordering alignment. Extensive experiments on three ordinal classification tasks, including facial age estimation, historical color image (HCI) classification, and aesthetic assessment, demonstrate its promising performance. The code is available at https://github.com/raywang335/L2RCLIP.
    Comment: Accepted by NeurIPS 202
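    The idea of an "ordering alignment" in embedding space can be illustrated with a generic pairwise ordinal margin loss. This is emphatically NOT the L2RCLIP loss: the distance measure, margin, and triple enumeration below are simplified assumptions, sketching only the general principle that embedding distances should grow with ordinal rank gaps.

```python
import numpy as np

def pairwise_ordinal_loss(embeddings, ranks, margin=0.1):
    """Penalize triples whose embedding distances contradict rank order.

    embeddings: (n, d) array of features (e.g. image embeddings).
    ranks: length-n sequence of integer ordinal labels.

    For an anchor i, if j is ordinally closer to i than k is, then j's
    embedding should also be closer to i's, by at least `margin`.
    """
    loss, count = 0.0, 0
    n = len(ranks)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if abs(ranks[i] - ranks[j]) < abs(ranks[i] - ranks[k]):
                    d_ij = np.linalg.norm(embeddings[i] - embeddings[j])
                    d_ik = np.linalg.norm(embeddings[i] - embeddings[k])
                    loss += max(0.0, margin + d_ij - d_ik)
                    count += 1
    return loss / max(count, 1)

# Embeddings laid out in rank order incur zero loss; a misordered
# layout is penalized.
ordered = np.array([[0.0], [1.0], [2.0]])
shuffled = np.array([[0.0], [2.0], [1.0]])
print(pairwise_ordinal_loss(ordered, [0, 1, 2]))   # 0.0
print(pairwise_ordinal_loss(shuffled, [0, 1, 2]))  # > 0
```

    The cross-modal version described in the abstract applies this kind of ordering constraint jointly to text and image features in the shared CLIP space rather than to a single modality.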

    Morphological awareness in readers of IsiXhosa

    This study focuses on the development of four Morphological Awareness reading tests in isiXhosa and on the relationship of Morphological Awareness to reading success among 74 Grade 3 isiXhosa-speaking foundation-phase learners from three peri-urban schools. It explores in depth why previously established Morphological Awareness tests for other languages do not all suit the morphology of isiXhosa, and how these tests were revised so that they do. Conventionally, the Morphological Awareness literature has focused on derivational morphology and reading comprehension. This study did not find significant correlations with comprehension, but rather with the children's ability to decode. Fluency and Morphological Awareness have received less attention in the literature, but Morphological Awareness could be important for processing the agglutinating structure of the language in reading. This study also argues that it is not a specific awareness of derivational over inflectional morphology, but rather a general awareness of one's language structure, that matters more at this stage of literacy development; specifically, a general awareness of prefixes and suffixes. In addition, it was found that an explicit awareness of the morphological structure of the language related more to fluency, while tests that accessed an innate and implicit Morphological Awareness had the strongest correlations overall with comprehension. The findings of this report have implications for how future curriculum development for morphologically rich languages like isiXhosa should be approached. The positive and practical implications of including different types of Morphological Awareness tutoring in curricula are argued for, especially when teaching younger readers how to approach morphologically complex words in texts.