An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities
We describe an extension of Earley's parser for stochastic context-free
grammars that computes the following quantities given a stochastic context-free
grammar and an input string: a) probabilities of successive prefixes being
generated by the grammar; b) probabilities of substrings being generated by the
nonterminals, including the entire string being generated by the grammar; c)
most likely (Viterbi) parse of the string; d) posterior expected number of
applications of each grammar production, as required for reestimating rule
probabilities. (a) and (b) are computed incrementally in a single left-to-right
pass over the input. Our algorithm compares favorably to standard bottom-up
parsing methods for SCFGs in that it works efficiently on sparse grammars by
making use of Earley's top-down control structure. It can process any
context-free rule format without conversion to some normal form, and combines
computations for (a) through (d) in a single algorithm. Finally, the algorithm
has simple extensions for processing partially bracketed inputs, and for
finding partial parses and their likelihoods on ungrammatical inputs.
Comment: 45 pages. Slightly shortened version to appear in Computational Linguistics.
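Quantity (b), the probability that a nonterminal generates a given substring, can be illustrated with a standard bottom-up inside computation of the kind the authors compare their Earley-based method against. The toy grammar below is an invented example in Chomsky normal form, not one from the paper:

```python
from collections import defaultdict

# Toy SCFG in Chomsky normal form (invented for illustration).
binary = {  # LHS -> [((B, C), prob)]
    "S": [(("NP", "VP"), 1.0)],
    "VP": [(("V", "NP"), 1.0)],
}
lexical = {  # LHS -> [(terminal, prob)]
    "NP": [("she", 0.6), ("fish", 0.4)],
    "V": [("eats", 1.0)],
}

def inside_probs(words):
    """CKY-style inside algorithm: chart[(i, j)][A] = P(A =>* words[i:j])."""
    n = len(words)
    chart = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(words):
        for lhs, rules in lexical.items():
            for term, p in rules:
                if term == w:
                    chart[(i, i + 1)][lhs] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for lhs, rules in binary.items():
                    for (b, c), p in rules:
                        chart[(i, j)][lhs] += p * chart[(i, k)][b] * chart[(k, j)][c]
    return chart

chart = inside_probs(["she", "eats", "fish"])
print(chart[(0, 3)]["S"])  # P(S =>* "she eats fish") ~ 0.24
```

Stolcke's algorithm computes the same quantities with Earley's top-down control, which avoids both the CNF conversion and wasted work on sparse grammars.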
Edinburgh's Statistical Machine Translation Systems for WMT16
This paper describes the University of Edinburgh’s
phrase-based and syntax-based
submissions to the shared translation tasks
of the ACL 2016 First Conference on Machine
Translation (WMT16). We submitted
five phrase-based and five syntaxbased
systems for the news task, plus one
phrase-based system for the biomedical
task
Reordering in statistical machine translation: A function word, syntax-based approach
Doctor of Philosophy (Ph.D.) thesis
Syntax-based machine translation using dependency grammars and discriminative machine learning
Machine translation underwent huge improvements since the groundbreaking
introduction of statistical methods in the early 2000s, going from very
domain-specific systems that still performed relatively poorly despite the
painstaking crafting of thousands of ad-hoc rules, to general-purpose
systems automatically trained on large collections of bilingual texts which
manage to deliver understandable translations that convey the general
meaning of the original input.
These approaches, however, still perform well below the level of human
translators, typically failing to convey detailed meaning and register, and
producing translations that, while readable, are often ungrammatical and
unidiomatic.
This quality gap, which is considerably large compared to most other natural language processing tasks, has been the focus of research in recent years, with the development of increasingly sophisticated models that attempt to exploit the syntactic structure of human languages, leveraging the technology of statistical parsers as well as advanced machine learning methods such as margin-based structured prediction algorithms and neural networks.
The translation software itself became more complex in order to accommodate the sophistication of these advanced models: the main translation engine (the decoder) is now often combined with a pre-processor which reorders the words of the source sentences into a target-language word order, or with a post-processor that ranks and selects a translation according to a fine-grained model from a list of candidate translations generated by a coarse model.
In this thesis we investigate the statistical machine translation problem
from various angles, focusing on translation from non-analytic languages
whose syntax is best described by fluid non-projective dependency grammars
rather than the relatively strict phrase-structure grammars or projective dependency
grammars which are most commonly used in the literature.
We propose a framework for modeling word reordering phenomena
between language pairs as transitions on non-projective source dependency
parse graphs. We quantitatively characterize reordering phenomena for the
German-to-English language pair as captured by this framework, specifically
investigating the incidence and effects of the non-projectivity of source
syntax and the non-locality of word movement w.r.t. the graph structure.
We evaluate several variants of hand-coded pre-ordering rules in order to
assess the impact of these phenomena on translation quality.
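A hand-coded pre-ordering rule of this kind can be sketched as an in-order walk over a source dependency tree, where a rank table decides whether each dependent is emitted before or after its head. The rank values and the German example below are purely illustrative, not rules from the thesis:

```python
class Node:
    def __init__(self, word, deprel, children=()):
        self.word = word
        self.deprel = deprel
        self.children = list(children)

# Hypothetical rank table: negative => dependent precedes its head,
# non-negative => follows it (larger values go further to the right).
RANK = {"nsubj": -1, "det": -1, "obj": 1, "obl": 2}

def linearize(node):
    """Emit the subtree in the target-side order implied by RANK."""
    before = [c for c in node.children if RANK.get(c.deprel, 1) < 0]
    after = sorted((c for c in node.children if RANK.get(c.deprel, 1) >= 0),
                   key=lambda c: RANK.get(c.deprel, 1))
    words = []
    for c in before:
        words += linearize(c)
    words.append(node.word)
    for c in after:
        words += linearize(c)
    return words

# Verb-final German clause "(weil) er den Apfel isst" as a dependency tree;
# linearizing it pulls the verb into an English-like SVO position.
tree = Node("isst", "root",
            [Node("er", "nsubj"),
             Node("Apfel", "obj", [Node("den", "det")])])
print(" ".join(linearize(tree)))  # -> "er isst den Apfel"
```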
We propose a class of dependency-based source pre-ordering approaches that reorder sentences based on flexible models trained with SVMs and several recurrent neural network architectures.
We also propose a class of translation reranking models, both syntax-free and source dependency-based, which make use of a type of neural network known as the graph echo state network, which is highly flexible and requires very few training resources, overcoming one of the main limitations of neural network models for natural language processing tasks.
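As a rough sketch of how a graph echo state network encodes a graph: a fixed random reservoir (never trained) is iterated over the graph structure until node states settle, and only a lightweight linear readout, omitted here, is ever trained. Reservoir size and weight scales below are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)
RES = 8   # reservoir size (tiny, for illustration)
DIM = 3   # node feature dimension

# Fixed random weights; a real GraphESN scales these to keep the map contractive.
W_in = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(RES)]
W = [[random.uniform(-0.2, 0.2) for _ in range(RES)] for _ in range(RES)]

def graph_esn_states(features, edges, iters=20):
    """Iterate x_v = tanh(W_in u_v + W * mean of neighbour states)."""
    n = len(features)
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    x = [[0.0] * RES for _ in range(n)]
    for _ in range(iters):
        new = []
        for v in range(n):
            agg = [sum(x[w][k] for w in nbrs[v]) / max(len(nbrs[v]), 1)
                   for k in range(RES)]
            new.append([math.tanh(
                sum(W_in[i][j] * features[v][j] for j in range(DIM)) +
                sum(W[i][k] * agg[k] for k in range(RES)))
                for i in range(RES)])
        x = new
    return x

# A 3-node path graph with one-hot node features.
states = graph_esn_states([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
                          [(0, 1), (1, 2)])
# A readout (e.g. ridge regression over pooled states) would be trained on top.
```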
Preference Learning for Machine Translation
Automatic translation of natural language is still (as of 2017) a long-standing but unmet promise. While advancing at a fast rate, the underlying methods are still far from being able to reliably capture the syntax or semantics of arbitrary utterances of natural language, let alone transport the encoded meaning into a second language. However, it is possible to build useful translating machines when the target domain is well known and the machine is able to learn and adapt efficiently and promptly from new inputs. This is possible thanks to efficient and effective machine learning methods which can be applied to automatic translation.
In this work we present and evaluate methods for three distinct scenarios:
a) We develop algorithms that can learn from very large amounts of data by exploiting pairwise preferences defined over competing translations, which can be used to make a machine translation system robust to arbitrary texts from varied sources, but also enable it to learn effectively to adapt to new domains of data;
b) We describe a method that is able to efficiently learn external models which adhere to fine-grained preferences extracted from a restricted selection of translated material, e.g. for adapting to users or groups of users in a computer-aided translation scenario;
c) We develop methods for two machine translation paradigms, neural- and traditional statistical machine translation, to directly adapt to user-defined preferences in an interactive post-editing scenario, learning precisely adapted machine translation systems.
In all of these settings, we show that machine translation can be made significantly more useful by careful optimization via preference learning.
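Scenario (a) can be illustrated with a minimal margin-based pairwise update over competing translations, a sketch in the spirit of the preference learning described rather than the thesis' actual algorithms; the feature names are invented:

```python
# Perceptron-style learning from a pairwise preference over two translations.
def score(weights, feats):
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def rank_update(weights, better, worse, lr=0.1, margin=1.0):
    """If the preferred candidate does not beat the other by the margin,
    move the weights toward its features and away from the other's."""
    if score(weights, better) - score(weights, worse) < margin:
        for f, v in better.items():
            weights[f] = weights.get(f, 0.0) + lr * v
        for f, v in worse.items():
            weights[f] = weights.get(f, 0.0) - lr * v

weights = {}
better = {"lm": 0.9, "tm": 0.7}   # translation judged better
worse = {"lm": 0.4, "tm": 0.8}    # competing translation
for _ in range(50):
    rank_update(weights, better, worse)
print(score(weights, better) > score(weights, worse))  # -> True
```

Because each update only fires while the margin is violated, repeated passes over the same pair converge rather than diverge.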
Refinements in hierarchical phrase-based translation systems
The relatively recently proposed hierarchical phrase-based translation model for statistical machine translation (SMT) has achieved state-of-the-art performance in numerous translation evaluations. Hierarchical phrase-based
systems comprise a pipeline of modules with complex interactions. In
this thesis, we propose refinements to the hierarchical phrase-based model
as well as improvements and analyses in various modules for hierarchical
phrase-based systems.
We take advantage of the increasing amounts of available training data for machine translation, as well as existing frameworks for distributed computing, to build better infrastructure for the extraction, estimation and retrieval of hierarchical phrase-based grammars. We design and implement
grammar extraction as a series of Hadoop MapReduce jobs. We store the resulting
grammar using the HFile format, which offers competitive trade-offs
in terms of efficiency and simplicity. We demonstrate improvements over two
alternative solutions used in machine translation.
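The extraction-and-estimation pipeline can be sketched in miniature with plain-Python stand-ins for the Hadoop mapper and reducer; the corpus below is invented, and a real mapper would emit many alignment-licensed phrase pairs per sentence rather than one:

```python
from collections import defaultdict
from itertools import chain

def mapper(sentence_pair):
    """Emit (source phrase, target phrase) keys with count 1.
    A stand-in: real extraction emits every alignment-consistent pair."""
    src, tgt = sentence_pair
    yield ((src, tgt), 1)

def reducer(keyed_counts):
    """Aggregate counts, then estimate P(target | source) by relative frequency."""
    counts = defaultdict(int)
    for key, c in keyed_counts:
        counts[key] += c
    src_totals = defaultdict(int)
    for (s, _t), c in counts.items():
        src_totals[s] += c
    return {(s, t): c / src_totals[s] for (s, t), c in counts.items()}

corpus = [("das Haus", "the house"), ("das Haus", "the house"),
          ("das Haus", "the building"), ("das Auto", "the car")]
table = reducer(chain.from_iterable(mapper(p) for p in corpus))
print(table[("das Haus", "the house")])  # ~ 0.667
```

In the actual pipeline the shuffle between map and reduce is done by Hadoop, and the resulting grammar is stored in HFiles for efficient retrieval.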
The modular nature of the SMT pipeline, while allowing individual improvements,
has the disadvantage that errors committed by one module are
propagated to the next. This thesis alleviates this issue between the word
alignment module and the grammar extraction and estimation module by
considering richer statistics from word alignment models in extraction. We
use alignment link and alignment phrase pair posterior probabilities for grammar
extraction and estimation and demonstrate translation improvements in
Chinese to English translation.
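The intuition behind using link posteriors rather than a single best alignment can be sketched as follows (the 2x2 posterior matrix is made up): lowering the threshold admits weaker links, which in turn licenses additional phrase pairs, weighted by their posteriors, for grammar extraction.

```python
# Made-up posterior link probabilities for a 2-word source and 2-word
# target sentence (rows: source words, columns: target words).
posterior = [[0.9, 0.2],
             [0.1, 0.7]]

def links_above(threshold):
    """All alignment links whose posterior exceeds the threshold."""
    return {(i, j) for i, row in enumerate(posterior)
            for j, p in enumerate(row) if p > threshold}

print(sorted(links_above(0.5)))   # -> [(0, 0), (1, 1)]
print(sorted(links_above(0.15)))  # adds the weaker link (0, 1)
```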
This thesis also proposes refinements in grammar and language modelling
both in the context of domain adaptation and in the context of the interaction
between first-pass decoding and lattice rescoring. We analyse alternative
strategies for grammar and language model cross-domain adaptation. We
also study interactions between the first-pass and second-pass language models in terms of size and n-gram order. Finally, we analyse two smoothing methods
for large 5-gram language model rescoring.
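One standard strategy in cross-domain language model adaptation is linear interpolation of an in-domain and a general-domain model. A unigram-level sketch with invented probabilities; real systems interpolate full n-gram models and tune the mixing weight on held-out perplexity:

```python
def interpolate(p_in, p_out, lam):
    """Mix two language models: lam weights the in-domain model."""
    vocab = set(p_in) | set(p_out)
    return {w: lam * p_in.get(w, 0.0) + (1 - lam) * p_out.get(w, 0.0)
            for w in vocab}

# Hypothetical unigram models: "gene" occurs only in-domain.
p_bio = {"gene": 0.3, "the": 0.7}
p_news = {"market": 0.4, "the": 0.6}
mixed = interpolate(p_bio, p_news, lam=0.8)
print(mixed["gene"])  # 0.8 * 0.3 = 0.24
```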
The last two chapters are devoted to the application of phrase-based
grammars to the string regeneration task, which we consider as a means to
study the fluency of machine translation output. We design and implement a
monolingual phrase-based decoder for string regeneration and achieve state-of-the-art
performance on this task. By applying our decoder to the output
of a hierarchical phrase-based translation system, we are able to recover the
same level of translation quality as the translation system.
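The string regeneration task itself is easy to state: recover a fluent order for a bag of words under a language model. A brute-force sketch with a tiny invented bigram model; the monolingual phrase-based decoder described here searches the same space with dynamic programming and pruning instead of enumeration:

```python
from itertools import permutations

# Tiny invented bigram model; unseen bigrams get a small floor probability.
BIGRAM = {("<s>", "the"): 0.5, ("the", "cat"): 0.4,
          ("cat", "sleeps"): 0.6, ("sleeps", "</s>"): 0.9}

def lm_score(words, floor=1e-4):
    seq = ["<s>", *words, "</s>"]
    score = 1.0
    for a, b in zip(seq, seq[1:]):
        score *= BIGRAM.get((a, b), floor)
    return score

def regenerate(bag):
    """Brute-force search over all orders of the bag (fine for tiny bags)."""
    return max(permutations(bag), key=lm_score)

print(regenerate(["cat", "sleeps", "the"]))  # -> ('the', 'cat', 'sleeps')
```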