Non-linear Learning for Statistical Machine Translation
Modern statistical machine translation (SMT) systems usually use a linear
combination of features to model the quality of each translation hypothesis.
The linear combination assumes that the features are linearly related and
constrains each feature to interact with the others in a linear manner, which
may limit the expressive power of the model and lead to an under-fitted model
on the current data. In this paper, we propose a
non-linear model of the quality of translation hypotheses based on neural
networks, which allows more complex interaction between features. A learning
framework is presented for training the non-linear models. We also discuss
possible heuristics in designing the network structure which may improve the
non-linear learning performance. Experimental results show that with the basic
features of a hierarchical phrase-based machine translation system, our method
produces translations that are better than those of a linear model.
Comment: submitted to a conference
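The contrast between a linear feature combination and a small non-linear scorer can be sketched in a few lines. All feature values and network parameters below are invented for illustration, not taken from the paper:

```python
import math

# Hypothetical feature values for one translation hypothesis
# (e.g. log translation-model score, log language-model score, word penalty).
features = [-2.3, -4.1, -0.5]

def linear_score(feats, weights):
    """Standard SMT model score: a weighted linear combination of features."""
    return sum(w * f for w, f in zip(weights, feats))

def mlp_score(feats, w_hidden, b_hidden, w_out):
    """A one-hidden-layer scorer: tanh units let features interact non-linearly."""
    hidden = [math.tanh(sum(w * f for w, f in zip(ws, feats)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden))

weights = [1.0, 0.5, -0.2]  # in practice tuned, e.g. by MERT
print(linear_score(features, weights))

# Hand-set parameters for a two-unit hidden layer, purely illustrative.
w_hidden = [[0.3, -0.1, 0.2], [-0.4, 0.6, 0.1]]
b_hidden = [0.0, 0.1]
w_out = [1.2, -0.7]
print(mlp_score(features, w_hidden, b_hidden, w_out))
```

The linear scorer cannot express, say, "a good language-model score matters more when the translation-model score is poor"; the hidden layer can.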
Refinements in hierarchical phrase-based translation systems
The recently proposed hierarchical phrase-based translation model
for statistical machine translation (SMT) has achieved state-of-the-art performance
in numerous recent translation evaluations. Hierarchical phrase-based
systems comprise a pipeline of modules with complex interactions. In
this thesis, we propose refinements to the hierarchical phrase-based model
as well as improvements and analyses in various modules for hierarchical
phrase-based systems.
We take advantage of the increasing amounts of training data available for
machine translation, together with existing frameworks for distributed computing,
to build better infrastructure for extraction, estimation and
retrieval of hierarchical phrase-based grammars. We design and implement
grammar extraction as a series of Hadoop MapReduce jobs. We store the resulting
grammar using the HFile format, which offers competitive trade-offs
in terms of efficiency and simplicity. We demonstrate improvements over two
alternative solutions used in machine translation.
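The extraction-and-estimation pipeline described above can be caricatured in a few lines. The functions below are toy stand-ins for actual Hadoop MapReduce jobs, and whole sentence pairs stand in for real hierarchical rules:

```python
from collections import defaultdict

def map_extract(sentence_pairs):
    """Toy map phase: emit (rule, 1) per extracted rule. Real extraction
    walks word alignments; here each full pair stands in for one rule."""
    for src, tgt in sentence_pairs:
        yield ((src, tgt), 1)

def reduce_counts(mapped):
    """Toy reduce phase: sum counts per rule and turn them into relative
    frequencies, as a grammar estimator would."""
    counts = defaultdict(int)
    for rule, c in mapped:
        counts[rule] += c
    total = sum(counts.values())
    return {rule: c / total for rule, c in counts.items()}

corpus = [("la maison", "the house"),
          ("la maison", "the house"),
          ("le chat", "the cat")]
probs = reduce_counts(map_extract(corpus))
print(probs[("la maison", "the house")])
```

The appeal of the MapReduce framing is that both phases are embarrassingly parallel; the HFile storage mentioned above then serves the reduced counts as a sorted, indexed key-value store.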
The modular nature of the SMT pipeline, while allowing individual improvements,
has the disadvantage that errors committed by one module are
propagated to the next. This thesis alleviates this issue between the word
alignment module and the grammar extraction and estimation module by
considering richer statistics from word alignment models in extraction. We
use alignment link and alignment phrase pair posterior probabilities for grammar
extraction and estimation and demonstrate translation improvements in
Chinese to English translation.
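One way to read "richer statistics from word alignment models" is to keep every alignment link whose posterior probability clears a threshold, rather than committing to a single Viterbi alignment. The posterior values below are invented:

```python
# link_posteriors[(i, j)] = model posterior that source word i aligns to
# target word j. Hypothetical values; a real system obtains these from
# the alignment model.
link_posteriors = {(0, 0): 0.95, (0, 1): 0.10, (1, 1): 0.80, (2, 2): 0.35}

def extract_links(posteriors, threshold=0.5):
    """Keep only alignment links whose posterior clears the threshold,
    instead of trusting one hard 1-best alignment."""
    return sorted(link for link, p in posteriors.items() if p >= threshold)

print(extract_links(link_posteriors))  # [(0, 0), (1, 1)]
```

Lowering the threshold admits more (noisier) links and hence more extracted rules; the trade-off is tuned empirically.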
This thesis also proposes refinements in grammar and language modelling
both in the context of domain adaptation and in the context of the interaction
between first-pass decoding and lattice rescoring. We analyse alternative
strategies for grammar and language model cross-domain adaptation. We
also study interactions between the first-pass and second-pass language models in terms of size and n-gram order. Finally, we analyse two smoothing methods
for large 5-gram language model rescoring.
The last two chapters are devoted to the application of phrase-based
grammars to the string regeneration task, which we consider as a means to
study the fluency of machine translation output. We design and implement a
monolingual phrase-based decoder for string regeneration and achieve state-of-the-art
performance on this task. By applying our decoder to the output
of a hierarchical phrase-based translation system, we are able to recover the
same level of translation quality as the translation system.
Hierarchical Phrase-Based Translation with Suffix Arrays
A major engineering challenge in statistical machine translation systems is the efficient representation of extremely large translation rulesets. In phrase-based models, this problem can be addressed by storing the training data in memory and using a suffix array as an efficient index to quickly look up and extract rules on the fly. Hierarchical phrase-based translation introduces the added wrinkle of source phrases with gaps. Lookup algorithms used for contiguous phrases no longer apply and the best approximate pattern matching algorithms are much too slow, taking several minutes per sentence. We describe new lookup algorithms for hierarchical phrase-based translation that reduce the empirical computation time by nearly two orders of magnitude, making on-the-fly lookup feasible for source phrases with gaps.
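For contiguous phrases, the suffix-array lookup the abstract refers to amounts to a binary search over sorted corpus suffixes; the gap-handling algorithms that are the paper's actual contribution are not shown in this sketch:

```python
def build_suffix_array(tokens):
    """Sort the start positions of every corpus suffix; binary search over
    this order finds any contiguous phrase."""
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])

def find_phrase(tokens, sa, phrase):
    """Return the corpus positions where a contiguous phrase occurs, via
    lower/upper binary search on the suffix array."""
    n = len(phrase)
    lo, hi = 0, len(sa)
    while lo < hi:                                # lower bound
        mid = (lo + hi) // 2
        if tokens[sa[mid]:sa[mid] + n] < phrase:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                                # upper bound
        mid = (lo + hi) // 2
        if tokens[sa[mid]:sa[mid] + n] <= phrase:
            lo = mid + 1
        else:
            hi = mid
    return [sa[i] for i in range(start, lo)]

corpus = "the cat sat on the mat".split()
sa = build_suffix_array(corpus)
print(find_phrase(corpus, sa, ["the"]))  # [0, 4]
```

A source phrase with a gap, such as "the X mat", cannot be matched this way because its occurrences are not contiguous in the suffix order, which is exactly the problem the paper addresses.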
Resourcing machine translation with parallel treebanks
The benefits of syntax-based approaches to data-driven machine translation (MT) are clear: given the right model, a combination of hierarchical structure, constituent labels and morphological information can be exploited to produce more fluent, grammatical translation output. This has been demonstrated by the recent shift in research focus towards such linguistically motivated approaches. However, one issue facing developers of such models that is not encountered in the development of state-of-the-art string-based statistical MT (SMT) systems is the lack of available syntactically annotated training data for many languages.
In this thesis, we propose a solution to the problem of limited resources for syntax-based MT by introducing a novel sub-sentential alignment algorithm for the induction of translational equivalence links between pairs of phrase structure trees. This algorithm, which operates on a language pair-independent basis, allows for the automatic generation of large-scale parallel treebanks which are useful not only for machine translation, but also across a variety of natural language processing tasks. We demonstrate the viability of our automatically generated parallel treebanks by means of a thorough evaluation process during which they are compared to a manually annotated gold standard parallel treebank both intrinsically and in an MT task.
Following this, we hypothesise that these parallel treebanks are not only useful in syntax-based MT, but also have the potential to be exploited in other paradigms of MT. To this end, we carry out a large number of experiments across a variety of data sets and language pairs, in which we exploit the information encoded within the parallel treebanks in various components of phrase-based statistical MT systems. We demonstrate that improvements in translation accuracy can be achieved by enhancing SMT phrase tables with linguistically motivated phrase pairs extracted from a parallel treebank, while showing that a number of other features in SMT can also be supplemented with varying degrees of effectiveness. Finally, we examine ways in which synchronous grammars extracted from parallel treebanks can improve the quality of translation output, focussing on real translation examples from a syntax-based MT system.
Hybrid System Combination for Machine Translation: An Integration of Phrase-level and Sentence-level Combination Approaches
Given the wide range of successful statistical MT approaches that have emerged recently, it would be beneficial to take advantage of their individual strengths and avoid their individual weaknesses. Multi-Engine Machine Translation (MEMT) attempts to do so by either fusing the output of multiple translation engines or selecting the best translation among them, aiming to improve the overall translation quality. In this thesis, we propose to use the phrase or the sentence as our combination unit instead of the word; three new phrase-level models and one sentence-level model with novel features are proposed. This contrasts with the most popular system combination technique to date which relies on word-level confusion network decoding.
Among the three new phrase-level models, the first one utilizes source sentences and target translation hypotheses to learn hierarchical phrases -- phrases that contain subphrases (Chiang 2007). It then re-decodes the source sentences using the hierarchical phrases to combine the results of multiple MT systems. The other two models we propose view combination as a paraphrasing process and use paraphrasing rules. The paraphrasing rules are composed of either string-to-string paraphrases or hierarchical paraphrases, learned from monolingual word alignments between a selected best translation hypothesis and other hypotheses. Our experimental results show that all of the three phrase-level models give superior performance in BLEU compared with the best single translation engine. The two paraphrasing models outperform the re-decoding model and the confusion network baseline model.
The sentence-level model exploits more complex syntactic and semantic information than the phrase-level models. It uses consensus, argument alignment, a supertag-based structural language model and a syntactic error detector. We use our sentence-level model in two ways: the first selects a translated sentence from multiple MT systems as the best translation to serve as a backbone for the paraphrasing process; the second makes the final decision among all fused translations generated by the phrase-level models and all translated sentences of multiple MT systems. We propose two novel hybrid combination structures for the integration of phrase-level and sentence-level combination frameworks in order to utilize the advantages of both frameworks and provide a more diverse set of plausible fused translations to consider.
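The consensus component of sentence-level selection can be approximated, very roughly, by picking the hypothesis with the highest average n-gram overlap with its competitors. This MBR-style sketch ignores the supertag language model, argument alignment, and error detection the thesis actually uses:

```python
def ngrams(tokens, n):
    """Set of n-grams in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap(a, b, n=2):
    """Jaccard overlap of n-gram sets: a crude consensus signal."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def select_backbone(hypotheses, n=2):
    """Pick the hypothesis most similar, on average, to its competitors."""
    return max(hypotheses,
               key=lambda h: sum(overlap(h, o, n) for o in hypotheses if o is not h))

hyps = [["the", "cat", "sat", "on", "the", "mat"],
        ["the", "cat", "sits", "on", "the", "mat"],
        ["a", "cat", "sat", "on", "a", "rug"]]
print(" ".join(select_backbone(hyps)))  # the cat sat on the mat
```

The intuition is that errors made by individual engines tend to disagree with each other, so the hypothesis closest to the consensus is a safe backbone.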
Pushdown automata in statistical machine translation
This article describes the use of pushdown automata (PDA) in the context of statistical machine translation and alignment under a synchronous context-free grammar. We use PDAs to compactly represent the space of candidate translations generated by the grammar when applied to an input sentence. General-purpose PDA algorithms for replacement, composition, shortest path, and expansion are presented. We describe HiPDT, a hierarchical phrase-based decoder using the PDA representation and these algorithms. We contrast the complexity of this decoder with a decoder based on a finite state automata representation, showing that PDAs provide a more suitable framework to achieve exact decoding for larger synchronous context-free grammars and smaller language models. We assess this experimentally on a large-scale Chinese-to-English alignment and translation task. In translation, we propose a two-pass decoding strategy involving a weaker language model in the first pass to address the results of the PDA complexity analysis. We study in depth the experimental conditions and tradeoffs in which HiPDT can achieve state-of-the-art performance for large-scale SMT.
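The shortest-path operation is easiest to picture on a plain finite-state lattice rather than a PDA. In the toy lattice below (states, words, and costs all invented), a Dijkstra-style search recovers the cheapest, i.e. best-scoring, translation:

```python
import heapq

# Toy acyclic translation lattice: edges[state] = [(next_state, word, cost), ...],
# with costs acting as negative log probabilities. All values are invented.
edges = {
    0: [(1, "the", 0.1), (1, "a", 0.7)],
    1: [(2, "house", 0.2), (2, "home", 0.5)],
    2: [],
}

def shortest_path(edges, start=0, final=2):
    """Dijkstra-style search: the cheapest start-to-final path is the
    best-scoring translation in the lattice."""
    heap = [(0.0, start, [])]
    settled = {}
    while heap:
        cost, state, words = heapq.heappop(heap)
        if state in settled:
            continue
        settled[state] = (cost, words)
        for nxt, word, c in edges[state]:
            heapq.heappush(heap, (cost + c, nxt, words + [word]))
    return settled[final]

cost, words = shortest_path(edges)
print(" ".join(words))  # the house
```

The point of the PDA representation in the article is that the same shortest-path idea still applies once the stack operations needed by the synchronous grammar are made explicit, without expanding the lattice to a prohibitively large FSA.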