909 research outputs found
A Formal Model of Ambiguity and its Applications in Machine Translation
Systems that process natural language must cope with and resolve ambiguity. In this dissertation, a model of language processing is advocated in which multiple inputs and multiple analyses of inputs are considered concurrently and a single analysis is only a last resort. Compared to conventional models, this approach can be understood as replacing single-element inputs and outputs with weighted sets of inputs and outputs. Although processing components must deal with sets (rather than individual elements), constraints are imposed on the elements of these sets, and the representations from existing models may be reused. However, to deal efficiently with large (or infinite) sets, compact representations of sets that share structure between elements, such as weighted finite-state transducers and synchronous context-free grammars, are necessary. These representations and algorithms for manipulating them are discussed in depth.
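At toy scale, the set-based processing model described above can be illustrated by propagating a weighted set of hypotheses through a pipeline stage. The names and data below are hypothetical stand-ins, and transducer composition is reduced to explicit enumeration rather than the compact WFST representations the dissertation employs:

```python
# Illustrative sketch (hypothetical data): a weighted set maps each
# hypothesis to a weight; applying a processing stage multiplies weights
# along every compatible path, mirroring at toy scale what weighted
# finite-state transducer composition does compactly.

def compose(weighted_inputs, stage):
    """Apply a stage that maps each input to a weighted set of outputs."""
    result = {}
    for x, wx in weighted_inputs.items():
        for y, wy in stage(x).items():
            result[y] = result.get(y, 0.0) + wx * wy
    return result

# Toy speech-recognition lattice: two transcription hypotheses.
transcriptions = {"recognize speech": 0.7, "wreck a nice beach": 0.3}

# Toy "translation" stage: each transcription yields weighted outputs.
def translate(text):
    table = {
        "recognize speech": {"reconnaitre la parole": 0.9,
                             "voir la parole": 0.1},
        "wreck a nice beach": {"detruire une belle plage": 1.0},
    }
    return table[text]

translations = compose(transcriptions, translate)
best = max(translations, key=translations.get)  # 0.7 * 0.9 = 0.63
```

Note that the single-best analysis is committed to only at the very end, after weights from both stages have been combined.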
To establish the effectiveness and tractability of the proposed processing model, it is applied to several problems in machine translation. Starting with spoken language translation, it is shown that translating a set of transcription hypotheses yields better translations compared to a baseline in which a single (1-best) transcription hypothesis is selected and then translated, independent of the translation model formalism used. More subtle forms of ambiguity that arise even in text-only translation (such as decisions conventionally made during system development about how to preprocess text) are then discussed, and it is shown that the ambiguity-preserving paradigm can be employed in these cases as well, again leading to improved translation quality. A model for supervised learning is also introduced that learns from training data in which sets (rather than single elements) of correct labels are provided for each training instance; it is used to learn a model of compound word segmentation, which serves as a preprocessing step in machine translation.
Automated Testing of Speech-to-Speech Machine Translation in Telecom Networks
In the globalizing world, the ability to communicate over language barriers is increasingly important. Learning languages is laborious, which is why there is a strong desire to develop automatic machine translation applications. Ericsson has developed a speech-to-speech translation prototype called the Real-Time Interpretation System (RTIS). The service runs in a mobile network and translates travel phrases between two languages in speech format.
State-of-the-art machine translation systems still suffer from relatively poor performance, and therefore evaluation plays a big role in machine translation development. The purpose of evaluation is to ensure that the system preserves translational equivalence and, in the case of a speech-to-speech system, the speech quality. The evaluation is most reliably done by human judges. However, human-conducted evaluation is costly and subjective.
In this thesis, a test environment for the Ericsson Real-Time Interpretation System prototype is designed and analyzed. The goals are to investigate whether the RTIS verification can be conducted automatically, and whether the test environment can truthfully measure the end-to-end performance of the system.
The results show that the methods used for end-to-end speech-quality verification in mobile networks cannot be optimally adapted for machine translation evaluation. With current knowledge, human-conducted evaluation is the only method that can truthfully measure translational equivalence and speech intelligibility. Automating machine translation evaluation requires further research; until then, human-conducted evaluation should remain the preferred method in RTIS verification.
Continuous spaces in statistical machine Translation
Classically, statistical machine translation relied on representations of words in a
discrete space. Words and phrases were atomically represented as indices in a
vector. In the last years, techniques for representing words and phrases in a
continuous space have arisen. In this scenario, a word is represented in the
continuous space as a real-valued, dense and low-dimensional vector. Statistical
models can profit from this richer representation, since it is able to naturally take
into account concepts such as semantic or syntactic relationships between words
and phrases. This approach is encouraging, but it also entails new challenges.
In this work, a language model which relies on continuous representations of
words is developed. This model makes use of a bidirectional recurrent neural
network, which is able to take into account both the past and the future context
of words. Since the model is costly to train, the training dataset is reduced by
using bilingual sentence selection techniques. Two selection methods are used
and compared. The language model is then used to rerank translation
hypotheses. Results show improvements on the translation quality.
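As a rough sketch of the reranking step described above, hypotheses from an n-best list can be reordered by interpolating the decoder score with an external language-model score. The field names, interpolation weight, and toy lookup-table LM below are illustrative assumptions; the lookup table stands in for the bidirectional recurrent neural network language model:

```python
# Illustrative sketch (hypothetical names): rerank an n-best list by
# adding a weighted external LM log-score to each decoder score.

def rerank(nbest, lm_score, lam=0.5):
    """Return hypotheses sorted by interpolated score, best first."""
    return sorted(nbest,
                  key=lambda h: h["score"] + lam * lm_score(h["text"]),
                  reverse=True)

# Toy n-best list: decoder slightly prefers the disfluent hypothesis.
nbest = [{"text": "the house is blue", "score": -2.0},
         {"text": "the house blue is", "score": -1.8}]

# Toy LM log-probabilities, standing in for the bidirectional RNN LM.
toy_lm = {"the house is blue": -1.0, "the house blue is": -3.0}

best = rerank(nbest, lambda s: toy_lm[s])[0]["text"]
# -2.0 + 0.5*(-1.0) = -2.5  beats  -1.8 + 0.5*(-3.0) = -3.3
```

The LM context on both sides of each word is what motivates the bidirectional architecture; here it is abstracted away into the score function.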
Moreover, a new approach to machine translation has recently been proposed:
so-called neural machine translation. It consists of using a single large
neural network to carry out the entire translation process. In this work, this
novel model is compared to existing phrase-based approaches to statistical
machine translation.
Finally, the neural translation models are combined with diverse machine
translation systems in order to provide a consensus translation, which aims to
improve the translation given by each single system.
Peris Abril, Á. (2015). Continuous spaces in statistical machine Translation. http://hdl.handle.net/10251/68448
On the effective deployment of current machine translation technology
Machine translation is a fundamental technology that is gaining more importance
each day in our multilingual society. Companies and individuals are
turning their attention to machine translation since it dramatically cuts down
their expenses on translation and interpreting. However, the output of current
machine translation systems is still far from the quality of translations generated
by human experts. The overall goal of this thesis is to narrow down
this quality gap by developing new methodologies and tools that enable a
broader and more efficient deployment of machine translation technology.
We start by proposing a new technique to improve the quality of the
translations generated by fully-automatic machine translation systems. The
key insight of our approach is that different translation systems, implementing
different approaches and technologies, can exhibit different strengths and
limitations. Therefore, a proper combination of the outputs of such different
systems has the potential to produce translations of improved quality.
We present minimum Bayes' risk system combination, an automatic approach
that detects the best parts of the candidate translations and combines them
to generate a consensus translation that is optimal with respect to a particular
performance metric. We thoroughly describe the formalization of our
approach as a weighted ensemble of probability distributions and provide efficient
algorithms to obtain the optimal consensus translation according to the
widespread BLEU score. Empirical results show that the proposed approach
is indeed able to generate statistically better translations than the provided
candidates. Compared to other state-of-the-art system combination methods,
our approach achieves similar performance while requiring no additional data
beyond the candidate translations.
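The selection rule behind minimum Bayes' risk combination can be sketched as picking the candidate with the highest expected similarity to the other weighted candidates. Unigram F1 below is a crude stand-in for the BLEU-based gain actually used, and all names and data are illustrative:

```python
# Illustrative sketch: minimum Bayes' risk selection over candidate
# translations, using unigram F1 as a stand-in for sentence-level BLEU.
from collections import Counter

def overlap_f1(hyp, ref):
    """Unigram F1 between two token sequences (crude BLEU stand-in)."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    common = sum((h & r).values())
    if common == 0:
        return 0.0
    prec, rec = common / sum(h.values()), common / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def mbr_select(candidates, weights):
    """Pick the candidate with maximal expected gain (minimal risk)."""
    def expected_gain(c):
        return sum(w * overlap_f1(c, other)
                   for other, w in zip(candidates, weights))
    return max(candidates, key=expected_gain)

# Toy usage: the candidate closest "on average" to the others wins.
consensus = mbr_select(["the cat sat", "a cat sat", "the dog ran"],
                       [1 / 3, 1 / 3, 1 / 3])
```

The weights would come from the candidate systems' posterior distribution; the "consensus" translation is the one that minimizes expected loss against that distribution.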
Then, we focus our attention on how to improve the utility of automatic
translations for the end-user of the system. Since automatic translations are
not perfect, a desirable feature of machine translation systems is the ability
to predict at run-time the quality of the generated translations. Quality estimation
is usually addressed as a regression problem where a quality score
is predicted from a set of features that represent the translation. However,
although the concept of translation quality is intuitively clear, there is no
consensus on which features actually account for it. As a consequence,
quality estimation systems for machine translation have to utilize
a large number of weak features to predict translation quality. This gives rise
to several learning problems related to feature collinearity, ambiguity, and
the 'curse' of dimensionality. We address these challenges by adopting
a two-step training methodology. First, a dimensionality reduction method
computes, from the original features, the reduced set of features that better
explains translation quality. Then, a prediction model is built from this
reduced set to finally predict the quality score. We study various reduction
methods previously used in the literature and propose two new ones based on
statistical multivariate analysis techniques. More specifically, the proposed dimensionality
reduction methods are based on partial least squares regression.
The results of a thorough experimentation show that the quality estimation
systems estimated following the proposed two-step methodology obtain better
prediction accuracy than systems estimated using all the original features.
Moreover, one of the proposed dimensionality reduction methods obtained the
best prediction accuracy with only a fraction of the original features. This
feature reduction ratio is important because it implies a dramatic reduction
of the operating times of the quality estimation system.
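A minimal sketch of the two-step methodology above, assuming a univariate PLS1 (NIPALS) reduction implemented from scratch in NumPy. This illustrates partial least squares regression in general, not the thesis's exact models or feature sets:

```python
# Illustrative sketch: univariate partial least squares regression
# (PLS1, NIPALS algorithm). Step 1 extracts a few latent components
# that explain the quality score; step 2 yields a linear predictor
# over those components, expressed as coefficients on the originals.
import numpy as np

def pls1_fit(X, y, n_components):
    """Fit PLS1; returns coefficient vector B and intercept."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                 # direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                    # latent component scores
        tt = t @ t
        p_load = Xc.T @ t / tt        # X loadings
        q = (yc @ t) / tt             # y loading
        Xc = Xc - np.outer(t, p_load) # deflate
        yc = yc - q * t
        W.append(w); P.append(p_load); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, y.mean() - X.mean(axis=0) @ B

def pls1_predict(X, B, intercept):
    """Predict quality scores from the (many, weak) original features."""
    return X @ B + intercept
```

With far fewer components than original features, prediction reduces to a short dot product, which is what drives the reduction in operating time mentioned above.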
An alternative use of current machine translation systems is to embed them
within an interactive editing environment where the system and a human expert
collaborate to generate error-free translations. This interactive machine
translation approach has been shown to reduce the supervision effort of the user
in comparison to the conventional decoupled post-editing approach. However,
interactive machine translation considers the translation system as a passive
agent in the interaction process. In other words, the system only suggests translations
to the user, who then makes the necessary supervision decisions. As
a result, the user is bound to exhaustively supervise every suggested translation.
This passive approach ensures error-free translations but it also demands
a large amount of supervision effort from the user.
Finally, we study different techniques to improve the productivity of current
interactive machine translation systems. Specifically, we focus on the development
of alternative approaches where the system becomes an active agent
in the interaction process. We propose two different active approaches. On the
one hand, we describe an active interaction approach where the system informs
the user about the reliability of the suggested translations. The hope is that
this information may help the user to locate translation errors thus improving
the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence
of such information in the productivity of an interactive machine translation
system. Empirical results show that the proposed active interaction protocol
is able to achieve a large reduction in supervision effort while still generating
translations of very high quality. On the other hand, we study an active learning
framework for interactive machine translation. In this case, the system is
not only able to inform the user of which suggested translations should be
supervised, but it is also able to learn from the user-supervised translations to
improve its future suggestions. We develop a value-of-information criterion to
select which automatic translations undergo user supervision. However, given
its high computational complexity, in practice we study different selection
strategies that approximate this optimal criterion. Results of a large scale experimentation
show that the proposed active learning framework is able to
obtain better compromises between the quality of the generated translations
and the human effort required to obtain them. Moreover, in comparison to
a conventional interactive machine translation system, our proposal obtained
translations of twice the quality with the same supervision effort.González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888TESI
Hybrid machine translation using binary classification models trained on joint, binarised feature vectors
We describe the design and implementation of a system combination method for machine translation output. It is based on sentence selection using binary classification models estimated on joint, binarised feature vectors. In contrast to existing system combination methods, which work by dividing candidate translations into n-grams, i.e., sequences of n words or tokens, our framework performs sentence selection, which does not alter the selected, best translation. First, we investigate the potential performance gain attainable by optimal sentence selection. To do so, we conduct the largest meta-study on data released by the yearly Workshop on Statistical Machine Translation (WMT). Second, we introduce so-called joint, binarised feature vectors which explicitly model feature value comparison for two systems A, B. We compare different settings for training binary classifiers using single, joint, as well as joint, binarised feature vectors. After having shown the potential of both selection and binarisation as methodological paradigms, we combine these two into a combination framework which applies pairwise comparison of all candidate systems to determine the best translation for each individual sentence. Our experiments confirm that our system is able to outperform other state-of-the-art system combination approaches. We conclude by summarising the main findings and contributions of our thesis and by giving an outlook to future research directions.
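The construction of joint, binarised feature vectors can be sketched as concatenating both systems' feature values with per-feature comparison indicators. This is one plausible reading of the description above, with hypothetical names, not the exact implementation:

```python
# Illustrative sketch: build a joint, binarised feature vector for a
# pairwise comparison of two candidate systems A and B. The binary
# part explicitly encodes, per feature, whether A's value beats B's.

def joint_binarised(feats_a, feats_b):
    """Concatenate [f_A, f_B] with indicators 1[f_A_i > f_B_i]."""
    comparison = [1.0 if a > b else 0.0 for a, b in zip(feats_a, feats_b)]
    return list(feats_a) + list(feats_b) + comparison

# Toy usage: two features per system (e.g. LM score, length ratio).
vec = joint_binarised([0.3, 0.7], [0.5, 0.2])
```

A binary classifier trained on such vectors predicts, for each sentence, whether system A's translation should be selected over system B's; running all pairwise comparisons picks the per-sentence winner without modifying it.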
A Hybrid Machine Translation Framework for an Improved Translation Workflow
Over the past few decades, due to a continuing surge in the amount of content being translated and ever increasing pressure to deliver high quality and high throughput translation, translation industries are focusing their interest on adopting advanced technologies such as machine translation (MT) and automatic post-editing (APE) in their translation workflows. Despite the progress of the technology, the roles of humans and machines essentially remain intact, as MT/APE are moving from the peripheries of the translation field closer towards collaborative human-machine based MT/APE in modern translation workflows. Professional translators increasingly become post-editors correcting raw MT/APE output instead of translating from scratch, which in turn increases productivity in terms of translation speed. The last decade has seen substantial growth in research and development activities on improving MT, usually concentrating on selected aspects of workflows, from training data pre-processing techniques to core MT processes to post-editing methods. To date, however, complete MT workflows are less investigated than the core MT processes. In the research presented in this thesis, we investigate avenues towards achieving improved MT workflows. We study how different MT paradigms can be utilized and integrated to best effect. We also investigate how different upstream and downstream component technologies can be hybridized to achieve overall improved MT. Finally, we include an investigation into human-machine collaborative MT by taking humans in the loop. In many (but not all) of the experiments presented in this thesis we focus on data scenarios provided by low resource language settings.
- …