95 research outputs found
CCG contextual labels in hierarchical phrase-based SMT
In this paper, we present a method to employ target-side syntactic contextual information in a Hierarchical Phrase-Based system. Our method uses Combinatory Categorial Grammar (CCG) to annotate training data with labels that represent the left and right syntactic context of target-side phrases. These labels are then used to assign labels to nonterminals in hierarchical rules. CCG-based contextual labels help to produce more grammatical translations by forcing phrases which replace nonterminals during translation to comply with the contextual constraints imposed by the labels. We present experiments which examine the performance of CCG contextual labels on Chinese–English and Arabic–English translation in the news and speech-expressions domains using different data sizes and CCG-labeling settings. Our experiments show that our system with CCG contextual labels achieved a 2.42% relative BLEU improvement over a Phrase-Based baseline on Arabic–English translation and a 1% relative BLEU improvement over a Hierarchical Phrase-Based baseline on Chinese–English translation.
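As a hedged illustration of the labelling idea (the function and label format below are invented for this sketch, not taken from the paper): a target-side phrase span can be labelled with the CCG supertags of the words immediately to its left and right, and that label is then attached to the nonterminal that replaces the span in a hierarchical rule.

```python
# Hypothetical sketch of CCG contextual labelling: the label for a target
# phrase span is built from the supertags of its left and right neighbours.

def contextual_label(supertags, start, end, boundary="<s>"):
    """Build a left/right contextual label for the span [start, end)."""
    left = supertags[start - 1] if start > 0 else boundary
    right = supertags[end] if end < len(supertags) else boundary
    return f"{left}+{right}"

# Toy CCG supertag sequence for a four-word target sentence
tags = ["NP/N", "N", "(S\\NP)/PP", "PP"]
print(contextual_label(tags, 1, 3))  # label for the middle two-word span
```

A nonterminal carrying such a label can then only be rewritten by phrases whose context matches it, which is the constraint the abstract describes.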
Supertagged phrase-based statistical machine translation
Until quite recently, extending Phrase-based Statistical Machine Translation (PBSMT) with syntactic structure caused system performance to deteriorate. In this work we show that incorporating lexical syntactic descriptions in the form of supertags can yield significantly better PBSMT systems. We describe a novel PBSMT model that integrates
supertags into the target language model and the target side of the translation model. Two kinds of supertags are employed: those from Lexicalized Tree-Adjoining Grammar
and Combinatory Categorial Grammar. Despite the differences between these two approaches, the supertaggers give similar improvements. In addition to supertagging, we also explore the utility of a surface global grammaticality measure based on combinatory operators. We perform various experiments on the Arabic-to-English NIST 2005 test set addressing issues such as sparseness, scalability and the utility of system subcomponents. Our best result (0.4688 BLEU) improves by 6.1% relative to a state-of-the-art PBSMT model, which compares very favourably with the leading systems on the NIST 2005 task.
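To make the "supertags in the target language model" idea concrete, here is an illustrative sketch, not the paper's implementation: a bigram language model over supertag sequences, of the kind that can be added as an extra feature in a PBSMT log-linear model.

```python
# Illustrative sketch (names invented): a bigram LM over supertag sequences.
import math
from collections import Counter

def train_supertag_bigram_lm(tagged_corpus):
    """tagged_corpus: list of supertag sequences. Returns a log-prob scorer."""
    bigrams, unigrams = Counter(), Counter()
    for seq in tagged_corpus:
        seq = ["<s>"] + seq
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))

    def score(seq):
        seq = ["<s>"] + seq
        v = len(unigrams)  # add-one smoothing over the observed tag vocabulary
        return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + v))
                   for a, b in zip(seq, seq[1:]))

    return score

corpus = [["NP", "(S\\NP)/NP", "NP"], ["NP", "S\\NP"]]
score = train_supertag_bigram_lm(corpus)
# A supertag sequence seen in training scores higher than an unseen one:
assert score(["NP", "(S\\NP)/NP", "NP"]) > score(["NP", "NP", "NP"])
```

In a real system the score would be one feature among many, weighted during tuning alongside the standard translation and language model features.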
A syntactified direct translation model with linear-time decoding
Recent syntactic extensions of statistical translation models work with a synchronous context-free or tree-substitution grammar extracted from an automatically parsed parallel corpus. The decoders accompanying these extensions typically exceed quadratic time complexity. This paper extends the Direct Translation Model 2 (DTM2) with syntax while maintaining linear-time decoding. We employ a linear-time parsing algorithm based on an eager, incremental interpretation of Combinatory Categorial Grammar
(CCG). As every input word is processed, the local parsing decisions resolve ambiguity eagerly, by selecting a single
supertag–operator pair for extending the dependency parse incrementally. Alongside translation features extracted from
the derived parse tree, we explore syntactic features extracted from the incremental derivation process. Our empirical experiments show that our model significantly
outperforms the state-of-the-art DTM2 system.
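A toy illustration of the eager, incremental selection described above (the candidate scores and operator names here are invented, not from the DTM2 extension): each input word commits immediately to its single best supertag–operator pair, so the parse grows in one left-to-right pass, giving linear time overall.

```python
# Toy eager incremental parser: one greedy decision per word, never revised.

def eager_parse(words, candidates):
    """candidates maps a word to scored (supertag, operator, score) tuples."""
    derivation = []
    for w in words:
        supertag, operator, _ = max(candidates[w], key=lambda c: c[2])
        derivation.append((w, supertag, operator))  # decision is final
    return derivation

cands = {
    "John":   [("NP", "shift", 0.9)],
    "sleeps": [("S\\NP", "reduce-left", 0.8), ("N", "shift", 0.1)],
}
print(eager_parse(["John", "sleeps"], cands))
```

Because ambiguity is resolved at each word rather than kept in a chart, the cost is one decision per input word instead of the quadratic-or-worse search of chart-based decoders.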
Modelling source- and target-language syntactic information as conditional context in interactive neural machine translation
In interactive machine translation (MT), human translators correct errors in automatic translations in collaboration with the MT systems, which is seen as an effective way to improve productivity in translation. In this study, we model source-language syntactic constituency parses and target-language syntactic descriptions in the form of supertags as conditional context for interactive prediction in neural MT (NMT). We found that the supertags significantly improve productivity gain in translation in interactive-predictive NMT (INMT), while syntactic parsing was found to be somewhat effective in reducing human effort in translation. Furthermore, when we model this source- and target-language syntactic information together as the conditional context, both types complement each other, and our fully syntax-informed INMT model shows a statistically significant reduction in human effort for a French-to-English translation task in a reference-simulated setting, achieving a 4.30-point absolute (9.18% relative) improvement in word prediction accuracy (WPA) and a 4.84-point absolute (9.01% relative) reduction in word stroke ratio (WSR) over the baseline.
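The reference-simulated setting can be sketched as follows; this is a hedged, simplified view of how WPA and WSR are commonly computed, and `predict_next` is a placeholder for the real INMT predictor, not the paper's code.

```python
# Simulated user: accepts each correctly predicted word, otherwise "types"
# a correction (one word stroke). WPA and WSR are then simple ratios.

def simulate(reference, predict_next):
    correct = strokes = 0
    prefix = []
    for gold in reference:
        guess = predict_next(prefix)
        if guess == gold:
            correct += 1
        else:
            strokes += 1  # user corrects the word with one word stroke
        prefix.append(gold)  # validated prefix always matches the reference
    n = len(reference)
    return correct / n, strokes / n  # (WPA, WSR)

ref = "the cat sat on the mat".split()
# Dummy predictor that alternates two guesses, purely for illustration:
wpa, wsr = simulate(ref, lambda prefix: "the" if len(prefix) % 2 == 0 else "cat")
print(wpa, wsr)  # 0.5 0.5
```

Under this simulation, a higher WPA and a lower WSR both correspond to less human effort, which is why the abstract reports an improvement in one and a reduction in the other.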
Modeling Target-Side Inflection in Neural Machine Translation
NMT systems have problems with large vocabulary sizes. Byte-pair encoding
(BPE) is a popular approach to solving this problem, but while BPE allows the
system to generate any target-side word, it does not enable effective
generalization over the rich vocabulary in morphologically rich languages with
strong inflectional phenomena. We introduce a simple approach to overcome this
problem by training a system to produce the lemma of a word and its
morphologically rich POS tag, which is then followed by a deterministic
generation step. We apply this strategy for English-Czech and English-German
translation scenarios, obtaining improvements in both settings. We furthermore
show that the improvement is not due only to adding explicit morphological
information.
Comment: Accepted as a research paper at WMT17. (Updated version with corrected references.)
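The two-step target-side scheme can be sketched as follows; the tag format and the lookup entries are hypothetical, standing in for the trained system's output and for a real morphological generator.

```python
# Minimal sketch: the NMT system emits (lemma, rich POS tag) pairs, and a
# deterministic generation step produces the inflected surface form.

MORPH_TABLE = {  # hypothetical (lemma, tag) -> surface-form entries
    ("Haus", "NOUN|Neut|Plur|Dat"): "Häusern",
    ("gehen", "VERB|3|Sing|Pres"): "geht",
}

def generate_surface(lemma, tag):
    """Deterministic generation step; falls back to the lemma if unseen."""
    return MORPH_TABLE.get((lemma, tag), lemma)

predicted = [("gehen", "VERB|3|Sing|Pres"), ("Haus", "NOUN|Neut|Plur|Dat")]
print(" ".join(generate_surface(l, t) for l, t in predicted))  # geht Häusern
```

The point of the split is generalization: the system can combine a lemma and a tag it never saw together in training, which raw BPE units do not directly support.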
MATREX: the DCU MT system for WMT 2010
This paper describes the DCU machine translation system in the evaluation campaign of the Joint Fifth Workshop on Statistical Machine Translation and Metrics in ACL-2010. We describe the modular design of our multi-engine machine translation (MT) system with particular focus on the components used in this participation.
We participated in the English–Spanish and English–Czech translation tasks, in which we employed our multi-engine architecture to translate. We also participated in the system combination task, which was carried out using the MBR decoder and the confusion network decoder.
Lexicalization and Grammar Development
In this paper we present a fully lexicalized grammar formalism as a
particularly attractive framework for the specification of natural language
grammars. We discuss in detail Feature-based, Lexicalized Tree Adjoining
Grammars (FB-LTAGs), a representative of the class of lexicalized grammars. We
illustrate the advantages of lexicalized grammars in various contexts of
natural language processing, ranging from wide-coverage grammar development to
parsing and machine translation. We also present a method for compact and
efficient representation of lexicalized trees.
Comment: ps file. English with German abstract. 10 pages.
Predicting Target Language CCG Supertags Improves Neural Machine Translation
Neural machine translation (NMT) models are able to partially learn syntactic
information from sequential lexical information. Still, some complex syntactic
phenomena such as prepositional phrase attachment are poorly modeled. This work
aims to answer two questions: 1) Does explicitly modeling target language
syntax help NMT? 2) Is tight integration of words and syntax better than
multitask training? We introduce syntactic information in the form of CCG
supertags in the decoder, by interleaving the target supertags with the word
sequence. Our results on WMT data show that explicitly modeling target-syntax
improves machine translation quality for German->English, a high-resource pair,
and for Romanian->English, a low-resource pair and also several syntactic
phenomena including prepositional phrase attachment. Furthermore, a tight
coupling of words and syntax improves translation quality more than multitask
training. By combining target-syntax with adding source-side dependency labels
in the embedding layer, we obtain a total improvement of 0.9 BLEU for
German->English and 1.2 BLEU for Romanian->English.
Comment: Accepted at the Second Conference on Machine Translation (WMT17). This version includes more results regarding target syntax for Romanian->English and reports fewer results regarding source syntax.
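The interleaved target representation described above can be sketched directly; the toy sentence and supertags below are illustrative, not from the paper's data.

```python
# Sketch of the interleaved target: each word in the training target is
# preceded by its CCG supertag, so the decoder predicts supertags and
# words in a single output stream.

def interleave(words, supertags):
    assert len(words) == len(supertags)
    out = []
    for tag, word in zip(supertags, words):
        out.extend([tag, word])
    return out

words = ["Mary", "likes", "music"]
tags = ["NP", "(S\\NP)/NP", "NP"]
print(interleave(words, tags))
```

At inference time the supertag positions are stripped from the output, leaving the word sequence; predicting them nonetheless forces the decoder to commit to a syntactic analysis as it translates.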
Supertagging with Factorial Hidden Markov Models
PACLIC 23 / City University of Hong Kong / 3-5 December 2009