
    Quantifying the Dialect Gap and its Correlates Across Languages

    Historically, researchers and consumers have noticed a decrease in quality when applying NLP tools to minority variants of languages (e.g., Puerto Rican Spanish or Swiss German), but studies exploring this have been limited to a select few languages. Additionally, past studies have mainly been conducted in a monolingual context, so cross-linguistic trends have not been identified and tied to external factors. In this work, we conduct a comprehensive evaluation of the most influential, state-of-the-art large language models (LLMs) across two high-use applications, machine translation and automatic speech recognition, to assess their functionality on the regional dialects of several high- and low-resource languages. Additionally, we analyze how the regional dialect gap is correlated with economic, social, and linguistic factors. The impact of training data, including related factors such as dataset size and construction procedure, is shown to be significant but not consistent across models or languages, meaning a one-size-fits-all approach cannot be taken to close the dialect gap. This work will lay the foundation for furthering the field of dialectal NLP by setting out evident disparities and identifying possible pathways for addressing them through mindful data collection.
    Comment: Accepted to EMNLP Findings 2023
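
    As a rough illustration of the correlational analysis this abstract describes (not the authors' code), the sketch below computes a Spearman correlation between a hypothetical per-language dialect gap (standard-variety score minus dialect score) and an invented external covariate; every number is a placeholder.

```python
# A rough sketch (not the authors' code) of correlating a per-language
# dialect gap with an external factor. All scores and covariate values
# below are invented placeholders, not data from the paper.
from scipy.stats import spearmanr

# Hypothetical (standard-variety score, dialect score) per language.
scores = {
    "es": (42.0, 35.5),  # e.g., Castilian vs. Puerto Rican Spanish
    "de": (40.0, 28.0),  # e.g., Standard vs. Swiss German
    "ar": (30.0, 21.0),
    "it": (38.0, 33.0),
}
# Hypothetical external covariate (e.g., an economic index) per language.
covariate = {"es": 0.8, "de": 0.9, "ar": 0.5, "it": 0.7}

langs = sorted(scores)
gaps = [scores[l][0] - scores[l][1] for l in langs]  # dialect gap per language
xs = [covariate[l] for l in langs]

rho, p = spearmanr(xs, gaps)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```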

    Quality Evaluation of C-E Translation of Legal Texts by Mainstream Machine Translation Systems—An Example of DeepL and Metasota

    Despite significant progress in machine translation technology and ongoing efforts toward the practical and commercial application of neural machine translation systems, their performance in vertical fields remains unsatisfactory. To avoid misunderstandings of, and excessive expectations for, any specific machine translation system, this research selected legal texts as its real-data research object. The translation tasks were carried out with two neural machine translation systems that are popular internationally and domestically, DeepL and Metasota, and evaluated with the internationally recognized BLEU algorithm to reflect their Chinese-to-English translation performance in the legal field. Based on the resulting BLEU scores, the study adopted a manual analysis method to examine grammatical aspects of the machine translation output, including the accuracy of terminology usage, word order, subject-verb agreement, sentence structure, tense, and voice, so that readers can form a rational understanding of the gap between machine and human translation of legal texts and objectively assess the application and future development prospects of machine translation in the legal domain. The experimental results indicate that machine translation systems still face challenges in producing high-quality legal translations that meet practical needs, and that further research on post-editing is needed to improve the accuracy of legal text translation.
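
    For readers unfamiliar with the metric used above, the sketch below shows corpus-level BLEU scoring with the sacreBLEU library; the legal-style sentences are invented examples, not the study's corpus, and the study's exact evaluation setup may differ.

```python
# A minimal sketch of corpus-level BLEU scoring, assuming the sacreBLEU
# library (pip install sacrebleu). The sentences are invented legal-style
# examples, not the study's corpus.
import sacrebleu

# System outputs to be scored.
hypotheses = [
    "The parties shall resolve disputes through friendly negotiation.",
    "This contract takes effect on the date of signature.",
]
# One stream of human reference translations, aligned with the hypotheses.
references = [[
    "The parties shall settle any dispute through amicable negotiation.",
    "This contract shall take effect upon the date of signing.",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```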

    Arabic goal-oriented conversational agents using semantic similarity techniques

    Conversational agents (CAs) are computer programs used to interact with humans in conversation. Goal-Oriented Conversational Agents (GO-CAs) are programs that interact with humans to serve a specific domain of interest; their importance has increased recently, covering fields from technology and science to marketing. Several types of CAs are used in industry; some are simple with limited usage, others sophisticated. Most CAs have been built to serve English speakers, and only a few for Arabic, owing to the complexity of the Arabic language and the lack of researchers spanning both linguistics and computing. This thesis covers two types of GO-CA. The first is the traditional pattern-matching goal-oriented CA (PMGO-CA); the other is the semantic goal-oriented CA (SGO-CA). Pattern-matching techniques are widely used in industry due to their flexibility and high performance. However, they are labour intensive, difficult to maintain or update, and need continuous housekeeping to manage users' utterances (especially when instructions or knowledge change); in addition, they lack any machine intelligence. Semantic techniques utilise humanly constructed knowledge bases such as WordNet to measure word and sentence similarity. Such measures have been researched extensively for English but very little for Arabic. In this thesis, the researcher developed a novel methodology for Arabic conversational agents (covering both pattern-matching and semantic CAs), spanning scripting, knowledge engineering, architecture, implementation, and evaluation. New tools to measure word and sentence similarity were also constructed. To test the performance of these CAs, a domain representing Iraqi passport services was built. Both CAs were evaluated and tested by domain experts using dedicated evaluation metrics. The evaluation showed very promising results and the viability of the system for real-life use.
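
    A minimal sketch of WordNet-based word and sentence similarity, in the spirit of a semantic CA's utterance matcher, follows. It uses the English WordNet via NLTK purely for illustration (the thesis constructs analogous tools for Arabic), and the symmetrized-average scheme is an assumption, not the thesis's exact measure.

```python
# A minimal sketch of WordNet-based word/sentence similarity for matching a
# user utterance against scripted prompts. Uses English WordNet via NLTK for
# illustration only; the thesis builds analogous measures for Arabic, and
# this particular averaging scheme is an assumption, not the thesis's method.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def word_sim(w1: str, w2: str) -> float:
    """Best Wu-Palmer similarity over all synset pairs of the two words."""
    best = 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            best = max(best, s1.wup_similarity(s2) or 0.0)
    return best

def sentence_sim(a: str, b: str) -> float:
    """Symmetrized average of each word's best match in the other sentence."""
    def directed(xs, ys):
        sims = [max((word_sim(x, y) for y in ys), default=0.0) for x in xs]
        return sum(sims) / len(sims) if sims else 0.0
    xs, ys = a.lower().split(), b.lower().split()
    return (directed(xs, ys) + directed(ys, xs)) / 2

print(f"{sentence_sim('renew my passport', 'passport renewal request'):.2f}")
```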

    Resource Generation from Structured Documents for Low-density Languages

    The availability and use of electronic resources for both manual and automated language-related processing has increased tremendously in recent years. Nevertheless, many resources still exist only in printed form, restricting their availability and use. This especially holds true for low-density languages, i.e., languages with limited electronic resources. For these documents, automated conversion into electronic resources is highly desirable. This thesis focuses on the semi-automated conversion of printed structured documents (dictionaries in particular) into usable electronic representations. In the first part we present an entry-tagging system that recognizes, parses, and tags the entries of a printed dictionary to reproduce its representation. The system uses the consistent layout and structure of dictionaries, and the features that impose this structure, to capture and recover lexicographic information. We accomplish this by adapting two methods: rule-based and HMM-based. The system is designed to produce results quickly with minimal human assistance and reasonable accuracy. The use of adaptive transformation-based learning as a post-processor at two points in the system yields significant improvements, even with an extremely small amount of user-provided training data. The second part of this thesis presents Morphology Induction from Noisy Data (MIND), a natural language morphology discovery framework that operates on the limited, noisy data obtained from the conversion process. To use the resulting resources effectively, users must be able to search for entries using the root form of a morphologically deformed variant found in the text; stemming and data-driven methods are not suitable when data are sparse, so the approach is based on a novel application of string-searching algorithms. The evaluations show that MIND can segment words into roots and affixes from the noisy, limited data contained in a dictionary, and it can extract prefixes, suffixes, circumfixes, and infixes. MIND can also identify morphophonemic changes, i.e., phonemic variations between allomorphs of a morpheme, specifically point-of-affixation stem changes. This, in turn, allows non-native speakers to perform multilingual tasks in applications where response must be rapid and their knowledge is limited. In addition, this analysis can feed other natural language processing tools requiring lexicons.
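
    As a toy illustration of affix discovery by string matching, the sketch below segments an inflected form against its headword via a longest-common-substring alignment. MIND itself applies string-searching algorithms to noisy dictionary data and also handles circumfixes, infixes, and stem changes; this sketch only splits off a prefix and suffix, and the word pairs are invented.

```python
# A toy illustration of affix discovery by string alignment. MIND itself
# applies string-searching algorithms to noisy dictionary data and also
# handles circumfixes, infixes, and stem changes; this sketch only splits
# off a prefix and suffix. The word pairs are invented examples.
from difflib import SequenceMatcher

def segment(headword: str, variant: str) -> tuple[str, str, str]:
    """Split `variant` into (prefix, shared core, suffix) w.r.t. `headword`."""
    m = SequenceMatcher(None, headword, variant).find_longest_match(
        0, len(headword), 0, len(variant)
    )
    return variant[: m.b], variant[m.b : m.b + m.size], variant[m.b + m.size :]

# (dictionary headword, morphologically deformed variant found in text)
for head, form in [("walk", "walking"), ("happy", "unhappiness")]:
    prefix, core, suffix = segment(head, form)
    print(f"{form}: prefix={prefix!r} core={core!r} suffix={suffix!r}")
```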

    A linguistically motivated taxonomy for Machine Translation error analysis

    Get PDF
    Funding: UID/LIN/03213/2013, SFRH/BD/85737/2012, SFRH/BD/51157/2010, SFRH/BD/51156/2010
    A detailed error analysis is a fundamental step in every natural language processing task, as being able to diagnose what went wrong provides cues for deciding which research directions to follow. In this paper we focus on error analysis in Machine Translation. We substantially extend previous error taxonomies so that translation errors associated with the specificities of Romance languages can be accommodated. Also, based on the proposed taxonomy, we carry out an extensive analysis of the errors generated by four different systems: two mainstream online translation systems, Google Translate (statistical) and Systran (hybrid machine translation), and two in-house Machine Translation systems, in three scenarios representing different challenges in translation from English to European Portuguese. Additionally, we comment on how distinct error types impact translation quality differently.
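
    A minimal sketch of the kind of bookkeeping such a taxonomy enables, tallying annotated error categories per system, is shown below; the category labels and counts are invented placeholders, not the paper's results, and a real taxonomy is far finer-grained.

```python
# A minimal sketch of the bookkeeping an error taxonomy enables: tallying
# manually annotated errors per system and category. Labels and counts are
# invented placeholders, not results from the paper, and a real taxonomy
# is far finer-grained.
from collections import Counter

# (system, error category) pairs as an annotator might record them.
annotations = [
    ("Google Translate", "word order"),
    ("Google Translate", "agreement"),
    ("Systran", "lexical choice"),
    ("Systran", "word order"),
    ("in-house A", "agreement"),
    ("in-house A", "tense/voice"),
]

by_system: dict[str, Counter] = {}
for system, category in annotations:
    by_system.setdefault(system, Counter())[category] += 1

for system, counts in sorted(by_system.items()):
    total = sum(counts.values())
    dist = ", ".join(f"{c} {n/total:.0%}" for c, n in counts.most_common())
    print(f"{system}: {dist}")
```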
