Quantifying the Dialect Gap and its Correlates Across Languages
Historically, researchers and consumers have noticed a decrease in quality
when applying NLP tools to minority variants of languages (e.g., Puerto Rican
Spanish or Swiss German), but studies exploring this have been limited to a
select few languages. Additionally, past studies have mainly been conducted in
a monolingual context, so cross-linguistic trends have not been identified and
tied to external factors. In this work, we conduct a comprehensive evaluation
of the most influential, state-of-the-art large language models (LLMs) across
two high-use applications, machine translation and automatic speech
recognition, to assess their functionality on the regional dialects of several
high- and low-resource languages. Additionally, we analyze how the regional
dialect gap is correlated with economic, social, and linguistic factors. The
impact of training data, including related factors like dataset size and its
construction procedure, is shown to be significant but not consistent across
models or languages, meaning a one-size-fits-all approach cannot be taken in
solving the dialect gap. This work lays the foundation for furthering the
field of dialectal NLP by documenting evident disparities and identifying
possible pathways for addressing them through mindful data collection.
Comment: Accepted to EMNLP Findings 202
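The correlation analysis the abstract describes, relating a model's dialect gap to external covariates, can be sketched with a plain Pearson correlation. The variable names and sample values below are hypothetical illustrations, not the paper's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: dialect gap (standard-variety BLEU minus dialect
# BLEU) against log dataset size for a handful of language pairs.
gap = [12.0, 9.5, 7.1, 4.0, 2.2]
log_data_size = [4.1, 4.8, 5.5, 6.2, 7.0]
```

A strongly negative `r` here would suggest the gap shrinks as dialectal training data grows, though the paper finds such effects are not consistent across models or languages.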
Quality Evaluation of C-E Translation of Legal Texts by Mainstream Machine Translation Systems: An Example of DeepL and Metasota
Despite significant progress in machine translation technology and ongoing efforts toward the practical and commercial application of neural machine translation systems, their performance in vertical fields remains unsatisfactory. To avoid misunderstandings of and excessive expectations for any specific machine translation system, this research selected legal texts as its real-data research object. The translation tasks were carried out with two popular neural machine translation systems, DeepL and Metasota, one international and one domestic, and evaluated with the internationally recognized BLEU algorithm to reflect their Chinese-to-English translation performance in the legal field. Based on the resulting BLEU scores, the study used manual analysis to examine grammatical aspects of the machine translation output, including the accuracy of terminology usage, word order, subject-verb agreement, sentence structure, tense, and voice, so that readers can form a rational understanding of the gap between machine and human translation of legal texts and objectively assess the application and future development prospects of machine translation in the legal field. The experimental results indicate that machine translation systems still face challenges in producing high-quality legal translations that meet practical needs, and that further research on post-editing is needed to improve the accuracy of legal text translation.
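The BLEU evaluation the study relies on can be illustrated with a minimal sentence-level implementation. This is a sketch: real evaluations would use a reference implementation such as sacreBLEU, and the add-one smoothing here is just one of several common variants.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with add-one smoothing on n-gram precisions."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped matches: a candidate n-gram counts at most as often
        # as it occurs in the reference.
        match = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_prec += math.log((match + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)
```

BLEU rewards surface n-gram overlap only, which is exactly why the study pairs it with a manual grammatical analysis: a legally wrong but fluent rendering can still score well.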
Arabic goal-oriented conversational agents using semantic similarity techniques
Conversational agents (CAs) are computer programs used to interact with humans in conversation. Goal-Oriented Conversational Agents (GO-CAs) are programs that interact with humans to serve a specific domain of interest; their importance has increased recently, covering the fields of technology, science, and marketing. Several types of CAs are used in industry; some are simple with limited usage, others are sophisticated. Most CAs were built to serve English speakers, and only a few for Arabic, owing to the complexity of the Arabic language and the lack of researchers in both linguistics and computing. This thesis covers two types of GO-CAs: the traditional pattern-matching goal-oriented CA (PMGO-CA) and the semantic goal-oriented CA (SGO-CA).
Pattern-matching conversational agent (PMGO-CA) techniques are widely used in industry because of their flexibility and high performance. However, they are labour intensive, difficult to maintain or update, and need continuous housekeeping to manage users' utterances (especially when instructions or knowledge change). In addition, they lack any machine intelligence.
Semantic conversational agent (SGO-CA) techniques utilise humanly constructed knowledge bases such as WordNet to measure word and sentence similarity. Such measures have been the subject of much research for English and very little for Arabic.
In this thesis, the researcher developed a novel methodology for Arabic conversational agents (covering both pattern-matching and semantic CAs), spanning scripting, knowledge engineering, architecture, implementation, and evaluation. New tools to measure word and sentence similarity were also constructed. To test the performance of these CAs, a domain representing the Iraqi passport services was built. Both CAs were evaluated and tested by domain experts using dedicated evaluation metrics. The evaluation showed very promising results and the viability of the system for real-life use.
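A minimal sketch of the utterance-routing idea behind such an agent: plain bag-of-words cosine similarity stands in here for the thesis's WordNet-style semantic measures, and the passport-service patterns and threshold are made-up illustrations.

```python
from collections import Counter
import math

def cosine_similarity(s1, s2):
    """Bag-of-words cosine similarity; a semantic measure would replace
    exact token matching with a WordNet-style relatedness score."""
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

def best_response(user_utterance, scripted_patterns, threshold=0.3):
    """Route the utterance to its most similar scripted pattern,
    or return None when nothing is similar enough."""
    score, pattern = max(
        (cosine_similarity(user_utterance, p), p) for p in scripted_patterns
    )
    return pattern if score >= threshold else None
```

Exact-match scoring is also what makes the pattern-matching variant brittle for Arabic, where rich morphology means surface forms of the same word rarely coincide.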
Resource Generation from Structured Documents for Low-density Languages
The availability and use of electronic resources for both manual and automated language-related processing have increased tremendously in recent years. Nevertheless, many resources still exist only in printed form, restricting their availability and use. This holds especially true for low-density languages, i.e., languages with limited electronic resources. For these documents, automated conversion into electronic resources is highly desirable.
This thesis focuses on the semi-automated conversion of printed structured documents (dictionaries in particular) into usable electronic representations. In the first part we present an entry tagging system that recognizes, parses, and tags the entries of a printed dictionary to reproduce their representation electronically. The system uses the consistent layout and structure of the dictionaries, and the features that impose this structure, to capture and recover lexicographic information. We accomplish this by adapting two methods: rule-based and HMM-based. The system is designed to produce results quickly, with minimal human assistance and reasonable accuracy. The use of adaptive transformation-based learning as a post-processor at two points in the system yields significant improvements, even with an extremely small amount of user-provided training data.
The second part of this thesis presents Morphology Induction from Noisy Data (MIND), a natural-language morphology discovery framework that operates on the limited, noisy data obtained from the conversion process. To use the resulting resources effectively, however, users must be able to search for the root form of a morphologically deformed variant found in the text. Stemming and data-driven methods are not suitable when data are sparse. Our approach is based on a novel application of string-searching algorithms. The evaluations show that MIND can segment words into roots and affixes from the noisy, limited data contained in a dictionary, and that it can extract prefixes, suffixes, circumfixes, and infixes. MIND can also identify morphophonemic changes, i.e., phonemic variations between allomorphs of a morpheme, specifically point-of-affixation stem changes. This, in turn, allows non-native speakers to perform multilingual tasks in applications where responses must be rapid and their language knowledge is limited. In addition, this analysis can feed other natural language processing tools requiring lexicons.
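The core segmentation idea, minus MIND's handling of morphophonemic stem changes, can be sketched as substring search over root/variant pairs. The example words are illustrative, not drawn from the thesis data.

```python
def extract_affixes(root, variant):
    """Locate the root inside an inflected variant and split off the
    prefix and suffix. Returns None when the stem itself has changed,
    i.e. the morphophonemic cases MIND handles but this sketch does not."""
    i = variant.find(root)
    if i == -1:
        return None
    return variant[:i], root, variant[i + len(root):]
```

A pair yielding both a non-empty prefix and suffix (as in German `gespielt` from `spiel`) is evidence for a circumfix, which is why dictionary entries pairing roots with example forms are such useful induction data.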
Machine Translation of Arabic Dialects
This thesis discusses different approaches to machine translation (MT) from Dialectal Arabic (DA) to English. These approaches handle the varying stages of Arabic dialects in terms of types of available resources and amounts of training data. The overall theme of this work revolves around building dialectal resources and MT systems or enriching existing ones using the currently available resources (dialectal or standard) in order to quickly and cheaply scale to more dialects without the need to spend years and millions of dollars to create such resources for every dialect.
Unlike for Modern Standard Arabic (MSA), DA-English parallel corpora are scarce and available for only a few dialects. Dialects differ from each other and from MSA in orthography, morphology, phonology, and, to a lesser degree, syntax. This means that combining all available parallel data, from dialects and MSA, to train DA-to-English statistical machine translation (SMT) systems might not provide the desired results. Similarly, translating dialectal sentences with an SMT system trained on that dialect only is also challenging because of the factors that affect a sentence's word choices relative to the SMT training data. Such factors include the level of dialectness (e.g., code switching to MSA versus dialectal training data), topic (sports versus politics), genre (tweets versus newspaper), script (Arabizi versus Arabic), and the timespan of the test data relative to the training data. The work we present utilizes any available Arabic resource, such as a preprocessing tool or a parallel corpus, whether MSA or DA, to improve DA-to-English translation and expand to more dialects and sub-dialects.
The majority of Arabic dialects have no parallel data to English or to any other foreign language. They also have no preprocessing tools such as normalizers, morphological analyzers, or tokenizers. For such dialects, we present an MSA-pivoting approach where DA sentences are translated to MSA first, then the MSA output is translated to English using the wealth of MSA-English parallel data. Since there is virtually no DA-MSA parallel data to train an SMT system, we build a rule-based DA-to-MSA MT system, ELISSA, that uses morpho-syntactic translation rules along with dialect identification and language modeling components. We also present a rule-based approach to quickly and cheaply build a dialectal morphological analyzer, ADAM, which provides ELISSA with dialectal word analyses.
Other Arabic dialects have relatively small DA-English parallel data, amounting to a few million words on the DA side. Some of these dialects have dialect-dependent preprocessing tools that can be used to prepare the DA data for SMT systems. We present techniques to generate synthetic parallel data from the available DA-English and MSA-English data. We use this synthetic data to build statistical and hybrid versions of ELISSA as well as improve our rule-based ELISSA-based MSA-pivoting approach. We evaluate our best MSA-pivoting MT pipeline against three direct SMT baselines trained on these three parallel corpora: DA-English data only, MSA-English data only, and the combination of DA-English and MSA-English data. Furthermore, we leverage the use of these four MT systems (the three baselines along with our MSA-pivoting system) in two system combination approaches that benefit from their strengths while avoiding their weaknesses.
Finally, we propose an approach to model dialects from monolingual data and limited DA-English parallel data without the need for any language-dependent preprocessing tools. We learn DA preprocessing rules using word embeddings and expectation maximization. We test this approach by building a morphological segmentation system, and we evaluate its performance on MT against a state-of-the-art dialectal tokenization tool.
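At its simplest, the MSA-pivoting approach described above is a composition of two translation steps. The word-level rewrite dictionary and the stub MSA-to-English function below are toy stand-ins (with transliterated words) for ELISSA's morpho-syntactic rules and a full SMT system.

```python
def pivot_translate(da_sentence, da_to_msa, msa_to_en):
    """Toy MSA-pivoting: rewrite DA words into MSA, then hand the
    normalized sentence to an MSA-to-English translator."""
    msa = " ".join(da_to_msa.get(w, w) for w in da_sentence.split())
    return msa_to_en(msa)

# Toy stand-ins; not real ELISSA rules or SMT output.
da_to_msa = {"mish": "laysa", "3ayiz": "urid"}
msa_to_en = {"laysa urid": "I do not want"}.get
```

The payoff of the pivot is visible even in this toy: only the small DA-to-MSA step needs dialect-specific resources, while the expensive MSA-to-English step is shared across all dialects.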
A linguistically motivated taxonomy for Machine Translation error analysis
UID/LIN/03213/2013
SFRH/BD/85737/2012
SFRH/BD/51157/2010
SFRH/BD/51156/2010
A detailed error analysis is a fundamental step in every natural language processing task, as being able to diagnose what went wrong provides cues for deciding which research directions to follow. In this paper we focus on error analysis in Machine Translation. We substantially extend previous error taxonomies so that translation errors associated with the specificities of Romance languages can be accommodated. Also, based on the proposed taxonomy, we carry out an extensive analysis of the errors generated by four different systems: two mainstream online translation systems, Google Translate (statistical) and Systran (hybrid machine translation), and two in-house Machine Translation systems, in three scenarios representing different challenges in translation from English to European Portuguese. Additionally, we comment on how distinct error types impact translation quality differently.