A Novel and Robust Approach for Pro-Drop Language Translation
A significant challenge for machine translation (MT) is the phenomenon of dropped pronouns (DPs), where certain classes of pronouns are frequently dropped in the source language but must be retained in the target language. In response to this common problem, we propose a semi-supervised approach with a universal framework to recall missing pronouns in translation. Firstly, we build training data for DP generation in which the DPs are automatically labelled according to the alignment information from a parallel corpus. Secondly, we build a deep learning-based DP generator for input sentences in decoding when no corresponding references exist. More specifically, generation has two phases: (1) DP position detection, which is modelled as a sequence labelling task with recurrent neural networks; and (2) DP prediction, which employs a multilayer perceptron with rich features. Finally, we integrate the above outputs into our statistical MT (SMT) system to recall missing pronouns, both by extracting rules from the DP-labelled training data and by translating the DP-generated input sentences. To validate the robustness of our approach, we evaluate it on both Chinese–English and Japanese–English corpora extracted from movie subtitles. Compared with an SMT baseline system, experimental results show that our approach achieves a significant improvement of +1.58 BLEU points in translation performance, with a 66% F-score for DP generation, for Chinese–English, and nearly +1 BLEU point, with a 58% F-score, for Japanese–English. We believe that this work can help both MT researchers and industry practitioners boost the performance of MT systems between pro-drop and non-pro-drop languages.
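The two-phase pipeline described in this abstract can be sketched as follows. This is a minimal, hypothetical illustration of the interface only: the paper uses an RNN sequence labeller for phase 1 and a feature-rich MLP for phase 2, whereas the stand-ins below (`CLAUSE_INITIAL_VERBS`, `detect_dp_positions`, `predict_dp`, `recall_dropped_pronouns`) are toy heuristics invented here to show how the phases compose.

```python
# Toy lexicon standing in for the RNN tagger of phase 1 (assumption).
CLAUSE_INITIAL_VERBS = {"like", "saw"}

def detect_dp_positions(tokens):
    """Phase 1 stand-in: flag positions where a subject pronoun was
    dropped -- here, simply a clause-initial verb."""
    return [i for i, tok in enumerate(tokens)
            if tok in CLAUSE_INITIAL_VERBS and (i == 0 or tokens[i - 1] == ",")]

def predict_dp(position):
    """Phase 2 stand-in: choose which pronoun to insert (the paper uses
    an MLP over rich features; we always return a first-person subject)."""
    return "I"

def recall_dropped_pronouns(tokens):
    """Run both phases and insert the predicted pronouns."""
    out = list(tokens)
    for pos in sorted(detect_dp_positions(tokens), reverse=True):
        out.insert(pos, predict_dp(pos))
    return out

print(recall_dropped_pronouns(["like", "this", "movie"]))
# → ['I', 'like', 'this', 'movie']
```

In the full system, the pronoun-annotated output would then be passed to the SMT decoder rather than emitted directly.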
Discourse Structure in Machine Translation Evaluation
In this article, we explore the potential of using sentence-level discourse
structure for machine translation evaluation. We first design discourse-aware
similarity measures, which use all-subtree kernels to compare discourse parse
trees in accordance with the Rhetorical Structure Theory (RST). Then, we show
that a simple linear combination with these measures can help improve various
existing machine translation evaluation metrics regarding correlation with
human judgments both at the segment- and at the system-level. This suggests
that discourse information is complementary to the information used by many of
the existing evaluation metrics, and thus it could be taken into account when
developing richer evaluation metrics, such as the WMT-14 winning combined
metric DiscoTK-party. We also provide a detailed analysis of the relevance of
various discourse elements and relations from the RST parse trees for machine
translation evaluation. In particular, we show that: (i) all aspects of the RST
tree are relevant, (ii) nuclearity is more useful than relation type, and (iii)
the similarity of the translation RST tree to the reference tree is positively
correlated with translation quality.
Comment: machine translation, machine translation evaluation, discourse analysis. Computational Linguistics, 201
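The core mechanism in this abstract, an all-subtree kernel over discourse trees plus a linear combination with an existing metric, can be sketched in a few lines. This is a hypothetical illustration on toy trees, not the paper's implementation: `subtree_score` follows the standard Collins–Duffy recursion for counting common subtrees, and `combined_metric` and the tree encodings are assumptions made here.

```python
# Trees are (label, (child, ...)) tuples; leaves have an empty child tuple.

def production(node):
    label, children = node
    return (label, tuple(child[0] for child in children))

def subtree_score(n1, n2):
    """Collins-Duffy style count of common subtrees rooted at n1 and n2."""
    if not n1[1] or not n2[1] or production(n1) != production(n2):
        return 0
    score = 1
    for a, b in zip(n1[1], n2[1]):
        score *= 1 + subtree_score(a, b)
    return score

def nodes(tree):
    yield tree
    for child in tree[1]:
        yield from nodes(child)

def tree_kernel(t1, t2):
    """All-subtree kernel: sum common subtrees over all node pairs."""
    return sum(subtree_score(a, b) for a in nodes(t1) for b in nodes(t2))

def combined_metric(base_score, t_ref, t_hyp, alpha=0.5):
    """Linear combination of a base MT metric with normalized tree similarity."""
    denom = (tree_kernel(t_ref, t_ref) * tree_kernel(t_hyp, t_hyp)) ** 0.5
    sim = tree_kernel(t_ref, t_hyp) / denom if denom else 0.0
    return alpha * base_score + (1 - alpha) * sim

# Toy RST-style trees with nuclearity folded into the node labels.
ref = ("Elaboration", (("Nucleus", ()), ("Satellite", ())))
hyp_same = ("Elaboration", (("Nucleus", ()), ("Satellite", ())))
hyp_diff = ("Contrast", (("Nucleus", ()), ("Satellite", ())))

print(tree_kernel(ref, hyp_same), tree_kernel(ref, hyp_diff))  # → 1 0
```

A hypothesis whose discourse tree matches the reference raises the combined score, reflecting the finding that RST-tree similarity correlates positively with translation quality.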
Type B Reflexivization as an Unambiguous Testbed for Multilingual Multi-Task Gender Bias
The one-sided focus on English in previous studies of gender bias in NLP
misses out on opportunities in other languages: English challenge datasets such
as GAP and WinoGender highlight model preferences that are "hallucinatory",
e.g., disambiguating gender-ambiguous occurrences of 'doctor' as male doctors.
We show that for languages with type B reflexivization, e.g., Swedish and
Russian, we can construct multi-task challenge datasets for detecting gender
bias that lead to unambiguously wrong model predictions: In these languages,
the direct translation of 'the doctor removed his mask' is not ambiguous
between a coreferential reading and a disjoint reading. Instead, the
coreferential reading requires a non-gendered pronoun, and the gendered,
possessive pronouns are anti-reflexive. We present a multilingual, multi-task
challenge dataset, which spans four languages and four NLP tasks and focuses
only on this phenomenon. We find evidence for gender bias across all
task-language combinations and correlate model bias with national labor market
statistics.
Comment: To appear in EMNLP 202
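The reason type B reflexivization yields unambiguous test items can be made concrete with a small sketch. This is a hypothetical illustration using Swedish, one of the languages the abstract names: the reflexive possessive 'sin' is non-gendered and forces the coreferential reading, while 'hans'/'hennes' are gendered and anti-reflexive, so a gendered possessive in a coreferential context is simply ungrammatical, never merely dispreferred. The helper names below are assumptions made for this sketch.

```python
# Swedish possessives (type B reflexivization).
GENDERED_POSSESSIVES = {"hans", "hennes"}  # anti-reflexive: disjoint reading only
REFLEXIVE_POSSESSIVE = "sin"               # non-gendered: coreferential reading only

def sentence(possessive):
    # "The doctor removed ___ mask."
    return f"Läkaren tog av {possessive} mask"

def is_unambiguous_error(predicted_possessive, coreferential):
    """A gendered possessive under the coreferential reading (or the
    reflexive under the disjoint reading) is ungrammatical, so any such
    model prediction is an unambiguous error rather than a preference."""
    if coreferential:
        return predicted_possessive in GENDERED_POSSESSIVES
    return predicted_possessive == REFLEXIVE_POSSESSIVE

# A model that renders "the doctor removed his own mask" with 'hans'
# makes exactly the unambiguously wrong, gender-biased prediction the
# challenge dataset is designed to detect.
print(is_unambiguous_error("hans", coreferential=True))  # → True
print(is_unambiguous_error("sin", coreferential=True))   # → False
```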