Survey on Publicly Available Sinhala Natural Language Processing Tools and Research
Sinhala is the native language of the Sinhalese people who make up the
largest ethnic group of Sri Lanka. The language belongs to the globe-spanning
Indo-European language family. However, due to poverty in both linguistic and
economic capital, Sinhala, from the perspective of Natural Language Processing
tools and research, remains a resource-poor language which has neither the
economic drive its cousin English has nor the sheer push of the law of numbers
a language such as Chinese has. A number of research groups from Sri Lanka have
noticed this dearth and the resultant dire need for proper tools and research
for Sinhala natural language processing. However, due to various reasons, these
attempts seem to lack coordination and awareness of each other. The objective
of this paper is to fill that gap with a comprehensive literature survey of the
publicly available Sinhala natural language tools and research so that the
researchers working in this field can better utilize contributions of their
peers. As such, we shall upload this paper to arXiv and update it
periodically to reflect the advances made in the field.
Cross-Platform Text Mining and Natural Language Processing Interoperability - Proceedings of the LREC2016 conference
No abstract available
Translationese indicators for human translation quality estimation (based on English-to-Russian translation of mass-media texts)
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
Human translation quality estimation is a relatively new and challenging area of research,
because human translation quality is notoriously more subtle and subjective than machine
translation quality, which attracts much more attention and effort from the research community. At
the same time, human translation is routinely assessed by education and certification institutions,
as well as at translation competitions. Do the quality labels and scores generated
from real-life quality judgments align well with objective properties of translations? This
thesis puts this question to a test using machine learning methods.
Conceptually, this research is built around a hypothesis that linguistic properties characteristic
of translations, as a specific form of communication, can correlate with translation
quality. This assumption is often made in translation studies but has never been put to
a rigorous empirical test. Exploring translationese features in a quality estimation task
can help identify quality-related trends in translational behaviour and provide data-driven
insights into professionalism to improve training. Using translationese for quality estimation
fits well with the concept of quality in translation studies, because it is essentially a
document-level property. Linguistically-motivated translationese features are also more interpretable
than popular distributed representations and can explain linguistic differences
between quality categories in human translation.
We investigated (i) an extended set of Universal Dependencies-based morphosyntactic
features as well as two lexical feature sets capturing (ii) collocational properties of translations,
and (iii) ratios of vocabulary items in various frequency bands along with entropy
scores from n-gram models. To compare the performance of our feature sets in translationese
classifications and in quality estimation tasks against other representations, the
experiments were also run on tf-idf features, QuEst++ features and on contextualised
embeddings from a range of pre-trained language models, including the state-of-the-art
multilingual solution for machine translation quality estimation. Our major focus was on
document-level prediction; however, where the labels and features allowed, the experiments
were extended to the sentence level.
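Two of the lexical signals described above, vocabulary frequency-band ratios and entropy scores from n-gram models, can be sketched in plain Python. The band boundaries and the plug-in bigram estimate below are illustrative assumptions, not the thesis's exact configuration:

```python
import math
from collections import Counter

def band_ratios(tokens, freq_list, bands=(100, 1000, 10000)):
    """Share of tokens falling into each frequency band of a reference
    frequency list (rank-ordered, most frequent first); the final bucket
    collects tokens outside the list."""
    rank = {w: i for i, w in enumerate(freq_list)}
    counts = [0] * (len(bands) + 1)
    for t in tokens:
        r = rank.get(t)
        if r is None:
            counts[-1] += 1
            continue
        for i, b in enumerate(bands):
            if r < b:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    total = len(tokens) or 1
    return [c / total for c in counts]

def bigram_entropy(tokens):
    """Entropy (bits) of the empirical bigram distribution of a text;
    real experiments would use a smoothed n-gram language model."""
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:
        return 0.0
    n = len(bigrams)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(bigrams).values())
```

In this framing, a translation whose tokens cluster in the highest-frequency band and whose n-gram entropy is low would be flagged as more repetitive, one of the traits the thesis associates with lower-ranking translations.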
The corpus used in this research includes English-to-Russian parallel subcorpora of student
and professional translations of mass-media texts, and a register-comparable corpus of
non-translations in the target language. Quality labels for various subsets of student translations
come from a number of real-life settings: translation competitions, graded student
translations, error annotations and direct assessment. We overview approaches to benchmarking
quality in translation and provide a detailed description of our own annotation
experiments.
Of the three proposed translationese feature sets, morphosyntactic features returned
the best results on all tasks. In many settings they were secondary only to contextualised
embeddings. At the same time, performance on various representations was contingent
on the type of quality captured by quality labels/scores. Using the outcomes of machine
learning experiments and feature analysis, we established that translationese properties of
translations were not equally reflected by various labels and scores. For example, professionalism
was much less related to translationese than expected. Labels from document-level
holistic assessment demonstrated maximum support for our hypothesis: lower-ranking
translations clearly exhibited more translationese. They bore more traces of mechanical
translational behaviours associated with following source language patterns whenever possible,
which led to the inflated frequencies of analytical passives, modal predicates, verbal
forms, especially copula verbs and verbs in the finite form. As expected, lower-ranking
translations were more repetitive and had longer, more complex sentences. Higher-ranking
translations were indicative of greater skill in recognising and counteracting translationese
tendencies. For document-level holistic labels as an approach to capture quality, translationese
indicators might provide a valuable contribution to an effective quality estimation
pipeline.
However, error-based scores, and especially scores from sentence-level direct assessment,
proved to be much less correlated with translationese and fluency issues in general. This was
confirmed by relatively low regression results across all representations that had access only
to the target language side of the dataset, by feature analysis and by correlation between
error-based scores and scores from direct assessment.
Comprehensive Part-Of-Speech Tag Set and SVM Based POS Tagger for Sinhala
This paper presents a new comprehensive multi-level Part-Of-Speech tag set and a Support Vector Machine based Part-Of-Speech tagger for the Sinhala language. The currently available tag set for Sinhala has two limitations: the unavailability of tags to represent some word classes and the lack of tags to capture inflection based grammatical variations of words. The new tag set presented in this paper overcomes both of these limitations. The accuracy of available Sinhala Part-Of-Speech taggers, which are based on Hidden Markov Models, still falls far behind the state of the art. Our Support Vector Machine based tagger achieved an overall accuracy of 84.68% with 59.86% accuracy for unknown words and 87.12% for known words, when the test set contains 10% of unknown words.
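The abstract does not spell out the tagger's feature template, but SVM taggers of this kind typically classify each token from a window of lexical and affix features, which is why they can generalize to unknown words at all. A minimal, hypothetical extractor (the feature names and window size are assumptions, not the paper's actual template), whose dicts could be fed to a linear SVM such as scikit-learn's DictVectorizer + LinearSVC:

```python
def token_features(sentence, i):
    """Feature dict for the i-th token of a tokenized sentence.
    Affix features let the classifier guess tags for unseen words,
    which is important for a morphologically rich language like Sinhala."""
    w = sentence[i]
    return {
        "word": w.lower(),
        "suffix3": w[-3:],   # inflectional endings
        "prefix2": w[:2],
        "prev": sentence[i - 1].lower() if i > 0 else "<s>",
        "next": sentence[i + 1].lower() if i < len(sentence) - 1 else "</s>",
        "is_digit": w.isdigit(),
    }
```

For unknown words only the affix, shape, and context features fire, which is consistent with the large gap the paper reports between known-word (87.12%) and unknown-word (59.86%) accuracy.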
Natural Language Processing: Emerging Neural Approaches and Applications
This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, interactively or autonomously from data, in cognitive and neural systems, and on their potential or real applications in different domains.
Low-Resource Unsupervised NMT: Diagnosing the Problem and Providing a Linguistically Motivated Solution
Unsupervised Machine Translation has been advancing our ability to translate without parallel data, but state-of-the-art methods assume an abundance of monolingual data. This paper investigates the scenario where monolingual data is limited as well, finding that current unsupervised methods suffer in performance under this stricter setting. We find that the performance loss originates from the poor quality of the pretrained monolingual embeddings, and we propose using linguistic information in the embedding training scheme. To support this, we look at two linguistic features that may help improve alignment quality: dependency information and sub-word information. Using dependency-based embeddings results in a complementary word representation which offers a boost in performance of around 1.5 BLEU points compared to standard WORD2VEC when monolingual data is limited to 1 million sentences per language. We also find that the inclusion of sub-word information is crucial to improving the quality of the embeddings.
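The sub-word information credited here with improving embedding quality is, in fastText-style models, the set of character n-grams of each word, whose vectors are summed into the word vector. A stdlib sketch of just the decomposition step (the boundary markers and the 3-to-6 n-gram range are the conventional fastText defaults, not necessarily this paper's settings):

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, with < and > marking word
    boundaries as in fastText; the full bracketed word is kept
    as one extra unit so frequent words still get a dedicated vector."""
    w = f"<{word}>"
    grams = {w}
    for n in range(n_min, n_max + 1):
        grams.update(w[i:i + n] for i in range(len(w) - n + 1))
    return grams
```

Because morphologically related words share many n-grams, rare or unseen inflected forms inherit reasonable vectors, which plausibly matters most in exactly the low-monolingual-data regime the paper studies.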