3,276 research outputs found
Spoken Language Translation Graphs Re-decoding using Automatic Quality Assessment
This paper investigates how automatic quality assessment of spoken language translation (SLT) can help re-decode SLT output graphs and improve overall speech translation performance. Using robust word confidence measures (from both ASR and MT) to re-decode the SLT graph leads to a significant BLEU improvement (more than 2 points) compared to our SLT baseline (French-English task).
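The re-decoding idea can be sketched as rescoring paths in a translation word lattice, where each edge carries both a decoder score and a word confidence. The lattice structure, scores, and the interpolation weight below are illustrative assumptions, not the paper's actual system.

```python
# Sketch of confidence-aware lattice re-decoding (illustrative only; the
# paper's graphs, confidence features, and decoder are more elaborate).
# Each edge carries a decoder score and a word confidence in [0, 1];
# re-decoding picks the path maximising a weighted combination of both.

def best_path(lattice, start, end, alpha=0.5):
    """lattice: {node: [(next_node, word, score, confidence), ...]}"""
    # Longest-path DP over a DAG, assuming nodes are topologically
    # ordered integers.
    best = {start: (0.0, [])}
    for node in sorted(lattice):
        if node not in best:
            continue
        base, words = best[node]
        for nxt, word, score, conf in lattice[node]:
            total = base + (1 - alpha) * score + alpha * conf
            if nxt not in best or total > best[nxt][0]:
                best[nxt] = (total, words + [word])
    return best[end]

# Toy two-arc lattice: the confidence term flips the choice
# toward "the" even though "a" has the higher decoder score.
lattice = {
    0: [(1, "the", 0.2, 0.9), (1, "a", 0.3, 0.4)],
    1: [(2, "cat", 0.5, 0.8)],
    2: [],
}
score, words = best_path(lattice, 0, 2)
```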
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
Leveraging study of robustness and portability of spoken language understanding systems across languages and domains: the PORTMEDIA corpora
The PORTMEDIA project is intended to develop new corpora for the evaluation of spoken language understanding systems. The newly collected data are in the field of human-machine dialogue systems for tourist information in French, in line with the MEDIA corpus. Transcriptions and semantic annotations, obtained by low-cost procedures, are provided to allow a thorough evaluation of the systems' capabilities in terms of robustness and portability across languages and domains. A new test set with some adaptation data is prepared for each case: in Italian as an example of a new language, and for ticket reservation as an example of a new domain. Finally, the work is complemented by the proposition of a new high-level semantic annotation scheme well suited to dialogue data.
Reordering in statistical machine translation
PhD thesis. Machine translation is a challenging task whose difficulties arise from several characteristics
of natural language. The main focus of this work is on reordering, one of
the major problems in machine translation and, in particular, in statistical machine translation (SMT),
the approach investigated in this research. The reordering problem in SMT originates from the fact that not all the words
in a sentence can be translated consecutively. This means words must be skipped and
translated out of their order in the source sentence to produce a fluent and grammatically
correct sentence in the target language. Reordering is needed mainly because of
fundamental word order differences between languages; consequently, the more
structurally different the source and target languages are, the more dominant
the reordering issue becomes.
The aim of this thesis is to study the reordering phenomenon by proposing new methods
of dealing with reordering in SMT decoders and evaluating the effectiveness of
the methods and the importance of reordering in the context of natural language processing
tasks. In other words, we propose novel ways of performing the decoding to
improve the reordering capabilities of the SMT decoder and in addition we explore
the effect of improving the reordering on the quality of specific NLP tasks, namely
named entity recognition and cross-lingual text association. Meanwhile, we go beyond
reordering in text association and present a method to perform cross-lingual text fragment
alignment, based on models of divergence from randomness.
The main contribution of this thesis is a novel method named dynamic distortion,
which is designed to improve the ability of the phrase-based decoder to perform
reordering by adjusting the distortion parameter based on the translation context. The
model employs a discriminative reordering model, which combines several features,
including lexical and syntactic ones, to predict the necessary distortion limit for each
sentence and each hypothesis expansion. The discriminative reordering model is also
integrated into the decoder as an extra feature. The method achieves substantial improvements
over the baseline without increasing decoding time, by avoiding reordering
in unnecessary positions.
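A minimal sketch of the dynamic-distortion idea: predict a per-sentence distortion limit from cheap features, then have the decoder check each reordering jump against it. The thesis trains a discriminative model over lexical and syntactic features; the feature set and weights below are made up for illustration.

```python
# Illustrative sketch of a dynamically predicted distortion limit.
# The thesis uses a trained discriminative reordering model; the toy
# features (length, crude verb-suffix count) and weights here are
# hypothetical placeholders.

def predict_distortion_limit(sentence, weights=(0.3, 2.0), base=2):
    w_len, w_verb = weights
    length = len(sentence)
    # Crude syntactic cue: count words with common verb suffixes.
    verbs = sum(1 for w in sentence if w.endswith("ed") or w.endswith("ing"))
    return base + int(w_len * length + w_verb * verbs)

def jump_allowed(last_covered, next_start, limit):
    # Phrase-based distortion check: distance between the end of the last
    # translated source phrase and the start of the next one.
    return abs(next_start - last_covered - 1) <= limit

sent = "the quickly running dog jumped over the fence".split()
limit = predict_distortion_limit(sent)
```

In a real decoder this check would run at every hypothesis expansion, so a tight predicted limit prunes unnecessary reorderings while a loose one permits long-distance jumps.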
Another novel method is also presented to extend the phrase-based decoder to dynamically
chunk, reorder, and apply phrase translations in tandem. Words inside the chunks
are moved together to enable the decoder to make long-distance reorderings to capture
the word order differences between languages with different sentence structures.
Another aspect of this work is the task-based evaluation of the reordering methods and
other translation algorithms used in the phrase-based SMT systems. With more successful
SMT systems, performing multi-lingual and cross-lingual tasks through translating
becomes more feasible. We have devised a method to evaluate the performance
of state-of-the-art named entity recognisers on text translated by an SMT decoder.
Specifically, we investigated the effect of word reordering and incorporating reordering
models in improving the quality of named entity extraction.
In addition to empirically investigating the effect of translation in the context of cross-lingual
document association, we have described a text fragment alignment algorithm
that finds content-wise related sections of two documents in different languages.
The algorithm uses similarity measures based on divergence from randomness
and word-based translation models to perform text fragment alignment on a collection
of documents in two different languages.
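A divergence-from-randomness-flavoured similarity can be sketched as weighting shared terms by how surprising they are under a background model, so rare shared terms count more than common ones. The thesis combines such measures with word-based translation models across languages; for simplicity this sketch assumes both fragments are in the same language, and the background counts are invented.

```python
import math

# Hedged sketch of an informativeness-weighted fragment similarity:
# each term shared by the two fragments contributes -log p(term), where
# p is estimated from hypothetical background counts. Rare shared terms
# therefore dominate the score.

def dfr_similarity(frag_a, frag_b, background):
    total = sum(background.values())
    shared = set(frag_a) & set(frag_b)
    score = 0.0
    for term in shared:
        p = background.get(term, 1) / total
        score += -math.log(p)  # rarer shared terms weigh more
    return score

bg = {"the": 100, "translation": 5, "model": 8, "cat": 2}
a = "the translation model".split()
b = "a translation model works".split()
sim = dfr_similarity(a, b, bg)
```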
All the methods proposed in this thesis are extensively empirically examined. We have
tested all the algorithms on common translation collections used in different evaluation
campaigns. Well-known automatic evaluation metrics are used to compare the
suggested methods to a state-of-the-art baseline, and the results are analysed and discussed.
On the effective deployment of current machine translation technology
Machine translation is a fundamental technology that is gaining more importance
each day in our multilingual society. Companies and individuals are
turning their attention to machine translation since it dramatically cuts down
their expenses on translation and interpreting. However, the output of current
machine translation systems is still far from the quality of translations generated
by human experts. The overall goal of this thesis is to narrow down
this quality gap by developing new methodologies and tools that enable a
broader and more efficient deployment of machine translation technology.
We start by proposing a new technique to improve the quality of the
translations generated by fully-automatic machine translation systems. The
key insight of our approach is that different translation systems, implementing
different approaches and technologies, can exhibit different strengths and
limitations. Therefore, a proper combination of the outputs of such different
systems has the potential to produce translations of improved quality.
We present minimum Bayes risk system combination, an automatic approach
that detects the best parts of the candidate translations and combines them
to generate a consensus translation that is optimal with respect to a particular
performance metric. We thoroughly describe the formalization of our
approach as a weighted ensemble of probability distributions and provide efficient
algorithms to obtain the optimal consensus translation according to the
widespread BLEU score. Empirical results show that the proposed approach
is indeed able to generate statistically better translations than the provided
candidates. Compared to other state-of-the-art system combination methods,
our approach achieves similar performance while requiring no data other
than the candidate translations.
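The selection step of minimum Bayes risk combination can be sketched as picking the candidate with the highest expected similarity to all candidates under their weights. A simple clipped-unigram-overlap gain stands in for BLEU here; the thesis optimises the actual BLEU score over a weighted ensemble of probability distributions and also recombines partial hypotheses, which this sketch does not attempt.

```python
from collections import Counter

# Minimal MBR selection over candidate translations: choose the
# hypothesis maximising the expected gain against the weighted
# candidate set. gain() is a toy unigram-overlap proxy for BLEU.

def gain(hyp, ref):
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum(min(h[w], r[w]) for w in h)
    return overlap / max(len(hyp.split()), 1)

def mbr_select(candidates, weights):
    best, best_score = None, float("-inf")
    for hyp in candidates:
        expected = sum(w * gain(hyp, ref)
                       for ref, w in zip(candidates, weights))
        if expected > best_score:
            best, best_score = hyp, expected
    return best

cands = ["the cat sat", "a cat sat", "the cat sits"]
weights = [0.5, 0.3, 0.2]  # hypothetical system posteriors
consensus = mbr_select(cands, weights)
```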
Then, we focus our attention on how to improve the utility of automatic
translations for the end-user of the system. Since automatic translations are
not perfect, a desirable feature of machine translation systems is the ability
to predict at run-time the quality of the generated translations. Quality estimation
is usually addressed as a regression problem where a quality score
is predicted from a set of features that represent the translation. However, although the concept of translation quality is intuitively clear, there is no
consensus on which features actually account for it. As a consequence,
quality estimation systems for machine translation have to utilize
a large number of weak features to predict translation quality. This raises
several learning problems related to feature collinearity, ambiguity, and
the 'curse' of dimensionality. We address these challenges by adopting
a two-step training methodology. First, a dimensionality reduction method
computes, from the original features, a reduced set of features that best
explains translation quality. Then, a prediction model is built from this
reduced set to predict the quality score. We study various reduction
methods previously used in the literature and propose two new ones based on
statistical multivariate analysis techniques. More specifically, the proposed dimensionality
reduction methods are based on partial least squares regression.
The results of a thorough experimentation show that quality estimation
systems trained following the proposed two-step methodology obtain better
prediction accuracy than systems trained using all the original features.
Moreover, one of the proposed dimensionality reduction methods obtained the
best prediction accuracy with only a fraction of the original features. This
feature reduction ratio is important because it implies a dramatic reduction
of the operating times of the quality estimation system.
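The two-step methodology can be illustrated with a stripped-down, one-component partial least squares regression: project the features onto the single direction most covariant with the quality score, then fit the predictor on that latent component. The thesis uses full multi-component PLS over many weak features; the toy data below, where quality depends only on the first of three features, is an assumption for illustration.

```python
import math

# Toy one-component PLS regression: reduce many features to one latent
# score t = Xw (w proportional to the feature/target covariance), then
# fit a least-squares slope of y on t.

def pls_fit(X, y):
    n_feat = len(X[0])
    # Weight vector proportional to X^T y.
    w = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(n_feat)]
    norm = math.sqrt(sum(v * v for v in w)) or 1.0
    w = [v / norm for v in w]
    # Latent scores and regression slope.
    t = [sum(x[j] * w[j] for j in range(n_feat)) for x in X]
    b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return w, b

def pls_predict(x, w, b):
    return b * sum(xj * wj for xj, wj in zip(x, w))

# Hypothetical data: the quality score is 2x the first feature; the
# other two features are uninformative.
X = [[1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]]
y = [2.0, 4.0, 6.0, 8.0]
w, b = pls_fit(X, y)
pred = pls_predict([5, 0, 0], w, b)
```

Because prediction only needs the reduced component(s), the model evaluates far fewer quantities at run time, which is the operating-time advantage the paragraph above describes.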
An alternative use of current machine translation systems is to embed them
within an interactive editing environment where the system and a human expert
collaborate to generate error-free translations. This interactive machine
translation approach has been shown to reduce the supervision effort of the user in
comparison to the conventional decoupled post-editing approach. However,
interactive machine translation considers the translation system as a passive
agent in the interaction process. In other words, the system only suggests translations
to the user, who then makes the necessary supervision decisions. As
a result, the user is bound to exhaustively supervise every suggested translation.
This passive approach ensures error-free translations but it also demands
a large amount of supervision effort from the user.
Finally, we study different techniques to improve the productivity of current
interactive machine translation systems. Specifically, we focus on the development
of alternative approaches where the system becomes an active agent
in the interaction process. We propose two different active approaches. On the
one hand, we describe an active interaction approach where the system informs
the user about the reliability of the suggested translations. The hope is that
this information may help the user locate translation errors, thus improving
the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence
of such information in the productivity of an interactive machine translation
system. Empirical results show that the proposed active interaction protocol
is able to achieve a large reduction in supervision effort while still generating
translations of very high quality. On the other hand, we study an active learning
framework for interactive machine translation. In this case, the system is
not only able to inform the user of which suggested translations should be
supervised, but it is also able to learn from the user-supervised translations to
improve its future suggestions. We develop a value-of-information criterion to
select which automatic translations undergo user supervision. However, given
its high computational complexity, in practice we study different selection
strategies that approximate this optimal criterion. Results of a large-scale experimentation
show that the proposed active learning framework is able to
obtain better compromises between the quality of the generated translations
and the human effort required to obtain them. Moreover, in comparison to
a conventional interactive machine translation system, our proposal obtained
translations of twice the quality with the same supervision effort.
González Rubio, J. (2014). On the effective deployment of current machine translation technology [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888
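The selection step of the active learning framework can be sketched as routing to the human, under a fixed supervision budget, the automatic translations whose estimated quality is lowest; this is a cheap stand-in for the value-of-information criterion described above. The hypothesis strings and quality scores below are invented for illustration.

```python
# Hedged sketch of budget-constrained selection for user supervision:
# rank hypotheses by an (assumed) quality estimate and send the worst
# ones to the human, letting the rest pass through automatically.

def select_for_supervision(hypotheses, quality_scores, budget):
    ranked = sorted(range(len(hypotheses)), key=lambda i: quality_scores[i])
    chosen = set(ranked[:budget])
    supervised = [hypotheses[i] for i in sorted(chosen)]
    automatic = [hypotheses[i] for i in range(len(hypotheses))
                 if i not in chosen]
    return supervised, automatic

hyps = ["trans A", "trans B", "trans C", "trans D"]
scores = [0.9, 0.2, 0.6, 0.4]  # hypothetical quality estimates
supervised, automatic = select_for_supervision(hyps, scores, budget=2)
```

In the full framework, the supervised pairs would also be fed back to the models so that future suggestions improve, which this sketch omits.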
A Survey of Paraphrasing and Textual Entailment Methods
Paraphrasing methods recognize, generate, or extract phrases, sentences, or
longer natural language expressions that convey almost the same information.
Textual entailment methods, on the other hand, recognize, generate, or extract
pairs of natural language expressions, such that a human who reads (and trusts)
the first element of a pair would most likely infer that the other element is
also true. Paraphrasing can be seen as bidirectional textual entailment and
methods from the two areas are often similar. Both kinds of methods are useful,
at least in principle, in a wide range of natural language processing
applications, including question answering, summarization, text generation, and
machine translation. We summarize key ideas from the two areas by considering
in turn recognition, generation, and extraction methods, also pointing to
prominent articles and resources.
Comment: Technical Report, Natural Language Processing Group, Department of Informatics, Athens University of Economics and Business, Greece, 201
Domain adaptation for statistical machine translation of corporate and user-generated content
The growing popularity of Statistical Machine Translation (SMT) techniques in recent years has led to the development of multiple domain-specific resources and adaptation scenarios. In this thesis we address two important and industrially relevant adaptation scenarios, each suited to different kinds of content.
Initially focussing on professionally edited 'enterprise-quality' corporate content, we address a specific scenario of data translation from a mixture of different domains, for each of which domain-specific data is available. We utilise an automatic classifier to combine multiple domain-specific models and empirically show that such a configuration results in better translation quality compared to both traditional and state-of-the-art techniques for handling mixed-domain translation.
In the second phase of our research we shift our focus to the translation of possibly 'noisy' user-generated content in web forums created around the products and services of a multinational company. Using professionally edited translation memory (TM) data for training, we use different normalisation and data selection techniques to adapt SMT models to noisy forum content. In this scenario, we also study the effect of mixture adaptation using a combination of in-domain and out-of-domain data at different component levels of an SMT system. Finally, we focus on the task of optimal supplementary training data selection from out-of-domain corpora, using a novel incremental model merging mechanism to adapt TM-based models and improve forum-content translation quality.
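The mixed-domain setup can be sketched as an automatic classifier routing each input to a domain-specific model. The thesis trains a real classifier on domain data; the keyword profiles and stub models below are hypothetical placeholders.

```python
# Illustrative sketch of classifier-driven model selection for
# mixed-domain translation. Domain profiles and the stub "models"
# are invented; a real system would use a trained classifier and
# full domain-adapted SMT models.

DOMAIN_PROFILES = {
    "legal":   {"contract", "clause", "liability"},
    "medical": {"patient", "dosage", "diagnosis"},
    "forum":   {"thanks", "anyone", "help"},
}

def classify_domain(sentence):
    words = set(sentence.lower().split())
    # Pick the domain whose keyword profile overlaps the input most.
    return max(DOMAIN_PROFILES,
               key=lambda d: len(words & DOMAIN_PROFILES[d]))

def translate(sentence, models):
    domain = classify_domain(sentence)
    return models[domain](sentence)

# Stub models that just tag their domain.
models = {d: (lambda s, d=d: f"[{d}] {s}") for d in DOMAIN_PROFILES}
out = translate("can anyone help with this error", models)
```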