
    Multilingual Neural Translation

    Get PDF
    Machine translation (MT) refers to technology that automatically translates content in one language into other languages. As an important research area within natural language processing, machine translation has typically been considered one of the most challenging yet exciting problems. Thanks to research progress in data-driven statistical machine translation (SMT), MT has recently become capable of providing adequate translation services in many language directions, and it has been widely deployed in various practical applications and scenarios. Nevertheless, the SMT framework has several drawbacks. The major ones lie in its dependence on separate components, its simplistic modeling approach, and its disregard of global context during the translation process. These inherent drawbacks prevent even heavily tuned SMT models from gaining any noticeable further improvements. Furthermore, SMT cannot formulate a multilingual approach in which more than two languages are involved. The typical workaround is to develop multiple pairwise SMT systems and connect them in a complex bundle to perform multilingual translation. These limitations have called for innovative approaches to address them effectively. On the other hand, research on artificial neural networks has progressed rapidly since the beginning of the last decade, thanks to improvements in computation, i.e., faster hardware. Among other machine learning approaches, neural networks are known to capture complex dependencies and learn latent representations. Naturally, it is tempting to apply neural networks to machine translation. First attempts revolved around replacing SMT sub-components with neural counterparts. Later attempts were more revolutionary, fundamentally replacing the whole core of SMT with neural networks, an approach now popularly known as neural machine translation (NMT). NMT is an end-to-end system that directly estimates the translation model between source and target sentences. It was later discovered to capture the inherent hierarchical structure of natural language. This is the key property of NMT that enables a new training paradigm and a less complex approach to multilingual machine translation using neural models. This thesis plays an important role in the evolutionary course of machine translation by contributing to the transition from neural components in SMT to completely end-to-end NMT and, most importantly, by being among the pioneers in building a neural multilingual translation system. First, we proposed an advanced neural component: the neural network discriminative word lexicon, which provides global coverage of the source sentence during the translation process. We aim to alleviate a problem of phrase-based SMT models caused by the way phrase-pair likelihoods are estimated: such models are unable to gather information from beyond the phrase boundaries. In contrast, our discriminative word lexicon exploits both the local and global contexts of the source sentence and models the translation using deep neural architectures. Our model greatly improved translation quality when applied to different translation tasks. Moreover, it motivated the later development of end-to-end NMT architectures, in which both the source and target sentences are represented with deep neural networks.
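    The idea behind such a lexicon can be sketched as a small network that reads a bag-of-words representation of the entire source sentence and predicts, for each target-vocabulary word, the probability that it occurs in the translation. The following is a minimal sketch with illustrative names and dimensions, not the thesis's actual implementation:

```python
import torch
import torch.nn as nn

class DiscriminativeWordLexicon(nn.Module):
    """Predicts, for each target word, the probability that it occurs in
    the translation, conditioned on the *whole* source sentence (hence
    global context, unlike phrase-pair estimates)."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hidden=512):
        super().__init__()
        self.emb = nn.EmbeddingBag(src_vocab, emb_dim, mode="sum")  # bag-of-words encoder
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, tgt_vocab),
        )

    def forward(self, src_ids):
        # src_ids: (batch, src_len) integer word ids of the full source sentence
        h = self.emb(src_ids)               # (batch, emb_dim) sentence representation
        return torch.sigmoid(self.mlp(h))   # (batch, tgt_vocab) occurrence probabilities

# Training would use a multi-label binary cross-entropy loss against the
# set of words actually present in the reference translation.
```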
    The second and most significant contribution of this thesis is the idea of extending an NMT system into a multilingual neural translation framework without modifying its architecture. Building on the ability of deep neural networks to model complex relationships and structures, we use NMT to learn and share cross-lingual information that benefits all translation directions. To achieve this, we take two steps: first, incorporating language information into the training corpora so that the NMT system learns a common semantic space across languages; and second, forcing the NMT system to translate into the desired target language. The compelling aspect of the approach, compared to other multilingual methods, is that our multilingual extension is carried out entirely in the preprocessing phase; no change needs to be made inside the NMT architecture. Our proposed method, a universal approach to multilingual MT, couples seamlessly with any NMT architecture, making the multilingual expansion of NMT systems effortless. Our experiments, as well as studies by others, have successfully employed our approach with numerous different NMT architectures, demonstrating its universality. Our multilingual neural machine translation accommodates cross-lingual information in a learned common semantic space to improve every translation direction at once. It has then been effectively applied and evaluated in various scenarios. We developed a multilingual translation system that relies on both source- and target-side data to boost the quality of a single translation direction. Another system can be deployed as a multilingual translation system that needs to be trained only once, on a multilingual corpus, yet translates between many languages simultaneously, delivering quality more favorable than that of many translation systems trained separately. Such a system, able to learn from large corpora of well-resourced language pairs such as English → German or English → French, proved to enhance translation directions for low-resourced language pairs such as English → Lithuanian or German → Romanian. We even show that this kind of approach can be applied to the extreme case of zero-resourced translation, where no parallel training data is available, without the need for pivot techniques. The research topics of this thesis are not limited to broadening the application scope of our multilingual approach; we also focus on improving its efficiency in practice. Our multilingual models have been further improved to adequately handle multilingual systems involving a large number of languages. The proposed strategies demonstrably achieve better performance in multi-way translation scenarios with greatly reduced training time. Beyond academic evaluations, we deployed the multilingual ideas in the lecture-themed spontaneous speech translation service (Lecture Translator) at KIT. Interestingly, a derivative product of our systems, a multilingual word embedding corpus available in a dozen languages, can serve as a useful resource for cross-lingual applications such as cross-lingual document classification, information retrieval, textual entailment, or question answering. Detailed analysis shows excellent performance with regard to semantic similarity metrics when using the embeddings on standard cross-lingual classification tasks.
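    Because the multilingual extension lives entirely in preprocessing, it can be illustrated in a few lines. The token format below is an illustrative assumption; the thesis describes the general strategy of embedding language information in the training corpora:

```python
def tag_for_multilingual(src_sentence: str, tgt_lang: str) -> str:
    """Prepend an artificial target-language token to the source sentence.

    The NMT architecture itself is untouched: the model simply learns to
    treat the token as an instruction to translate into `tgt_lang`, while
    all languages share one semantic space.
    """
    return f"<2{tgt_lang}> {src_sentence}"

# One model, many directions -- mixed into a single training corpus:
print(tag_for_multilingual("Das Haus ist klein.", "en"))  # <2en> Das Haus ist klein.
print(tag_for_multilingual("The house is small.", "ro"))  # <2ro> The house is small.
```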

    Combined Spoken Language Translation

    Get PDF
    EU-BRIDGE is a European research project aimed at developing innovative speech translation technology. One of the collaborative efforts within EU-BRIDGE is to produce joint submissions of up to four different partners to the evaluation campaign of the 2014 International Workshop on Spoken Language Translation (IWSLT). We submitted combined translations to the German→English spoken language translation (SLT) track as well as to the German→English, English→German and English→French machine translation (MT) tracks. In this paper, we present the techniques applied by the individual translation systems of RWTH Aachen University, the University of Edinburgh, Karlsruhe Institute of Technology, and Fondazione Bruno Kessler. We then describe the combination approach developed at RWTH Aachen University, which combined the individual systems. The consensus translations yield empirical gains of up to 2.3 points in BLEU and 1.2 points in TER compared to the best individual system.
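    The actual combination is RWTH Aachen's system-combination approach; purely as an illustration of the general consensus idea, a crude MBR-style selection can pick, from the systems' outputs, the hypothesis most similar on average to all the others:

```python
from difflib import SequenceMatcher

def consensus_pick(hypotheses):
    """Select the hypothesis with the highest average word-level
    similarity to all other systems' outputs (a rough stand-in for
    consensus decoding, not the confusion-network method used here)."""
    def sim(a, b):
        return SequenceMatcher(None, a.split(), b.split()).ratio()
    return max(hypotheses,
               key=lambda h: sum(sim(h, o) for o in hypotheses if o is not h))

systems = [
    "we present the techniques of the systems",
    "we present the techniques of four systems",
    "we presents the techniques of the systems",
]
print(consensus_pick(systems))  # the hypothesis closest to the consensus
```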

    Online Learning for Effort Reduction in Interactive Neural Machine Translation

    Full text link
    [EN] Neural machine translation systems require large amounts of training data and resources. Even so, the quality of the translations may be insufficient for some users or domains. In such cases, the output of the system must be revised by a human agent. This can be done in a post-editing stage or following an interactive machine translation protocol. We explore the incremental updating of neural machine translation systems during the post-editing or interactive translation process. Such modifications aim to incorporate the new knowledge from the edited sentences into the translation system. Updates to the model are performed on the fly, as sentences are corrected, via online learning techniques. In addition, we implement a novel interactive, adaptive system able to react to single-character interactions. This system greatly reduces the human effort required for obtaining high-quality translations. To stress-test our proposals, we conduct exhaustive experiments varying the amount and type of data available for training. Results show that online learning effectively achieves the objective of reducing the human effort required during the post-editing or interactive machine translation stages. Moreover, these adaptive systems also perform well in scenarios with scarce resources. We show that a neural machine translation system can be rapidly adapted to a specific domain exclusively by means of online learning techniques.

    The authors wish to thank the anonymous reviewers for their valuable criticisms and suggestions. The research leading to these results has received funding from the Generalitat Valenciana under grant PROMETEOII/2014/030 and from TIN2015-70924-C2-1-R. We also acknowledge NVIDIA Corporation for the donation of GPUs used in this work.

    Peris-Abril, Á.; Casacuberta Nolla, F. (2019). Online Learning for Effort Reduction in Interactive Neural Machine Translation. Computer Speech & Language, 58:98-126. https://doi.org/10.1016/j.csl.2019.04.001
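    The core of the online-learning protocol can be sketched as a simple loop. This is a minimal sketch with assumed interfaces (the `translate` and `get_post_edit` callables, and a model that returns the loss of a corrected sentence pair, are placeholders), shown here with plain SGD although the paper studies several online optimizers:

```python
import torch

def online_adapt(model, optimizer, translate, get_post_edit, sentences):
    """Post-editing loop with on-the-fly model updates: translate, collect
    the human-corrected reference, then take one gradient step on that
    single sentence pair before moving on to the next sentence."""
    for src in sentences:
        hyp = translate(model, src)      # system proposes a translation
        ref = get_post_edit(src, hyp)    # human corrects it
        model.train()
        optimizer.zero_grad()
        loss = model(src, ref)           # assumed: NLL of the corrected reference
        loss.backward()
        optimizer.step()                 # the next sentence sees an adapted model

# e.g. optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
```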

    Streaming cascade-based speech translation leveraged by a direct segmentation model

    Full text link
    [EN] The cascade approach to Speech Translation (ST) is based on a pipeline that concatenates an Automatic Speech Recognition (ASR) system with a Machine Translation (MT) system. Nowadays, state-of-the-art ST systems are populated with deep neural networks conceived to work in an offline setup, in which the audio input to be translated is fully available in advance. However, a streaming setup presents a completely different picture, in which an unbounded audio input gradually becomes available and, at the same time, the translation needs to be generated under real-time constraints. In this work, we present a state-of-the-art streaming ST system in which the neural models integrated in the ASR and MT components are carefully adapted, in terms of their training and decoding procedures, to run under a streaming setup. In addition, a direct segmentation model that adapts the continuous ASR output to the capacity of simultaneous MT systems trained at the sentence level is introduced to guarantee low latency while preserving the translation quality of the complete ST system. The resulting ST system is thoroughly evaluated on the real-life streaming Europarl-ST benchmark to gauge the trade-off between quality and latency for each component individually, as well as for the complete ST system.

    The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation program under grant agreements no. 761758 (X5Gon) and 952215 (TAILOR); the Government of Spain's research project Multisub, ref. RTI2018-094879-B-I00 (MCIU/AEI/FEDER, EU), and FPU scholarships FPU14/03981 and FPU18/04135; and the Generalitat Valenciana's research project Classroom Activity Recognition, ref. PROMETEO/2019/111, and predoctoral research scholarship ACIF/2017/055.

    Iranzo-Sánchez, J.; Jorge-Cano, J.; Baquero-Arnal, P.; Silvestre Cerdà, JA.; Giménez Pastor, A.; Civera Saiz, J.; Sanchis Navarro, JA.... (2021). Streaming cascade-based speech translation leveraged by a direct segmentation model. Neural Networks, 142:303-315. https://doi.org/10.1016/j.neunet.2021.05.013
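    The role of the segmentation model in the streaming pipeline can be sketched as follows; all components are placeholders standing in for the actual neural models:

```python
def streaming_st(asr_stream, segmenter, mt):
    """Cascade under streaming conditions: consume ASR words as they
    arrive, let a segmentation model decide where a sentence-like chunk
    ends, and only then hand the chunk to the sentence-level MT system,
    bounding latency without waiting for the full audio."""
    chunk = []
    for word in asr_stream:                # unbounded stream of recognized words
        chunk.append(word)
        if segmenter.is_boundary(chunk):   # assumed: the direct segmentation model
            yield mt.translate(" ".join(chunk))
            chunk = []
    if chunk:                              # flush whatever remains at end of stream
        yield mt.translate(" ".join(chunk))
```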

    Vashantor: A Large-scale Multilingual Benchmark Dataset for Automated Translation of Bangla Regional Dialects to Bangla Language

    Full text link
    The Bangla linguistic variety is a fascinating mix of regional dialects that adds to the cultural diversity of the Bangla-speaking community. Despite extensive past study of translating Bangla to English, English to Bangla, and Banglish to Bangla, there has been a noticeable gap in translating Bangla regional dialects into standard Bangla. In this study, we set out to fill this gap by creating a collection of 32,500 sentences, encompassing Bangla, Banglish, and English, representing five regional Bangla dialects. Our aim is to translate these regional dialects into standard Bangla and to detect their regions accurately. To achieve this, we applied the mT5 and BanglaT5 models to translate regional dialects into standard Bangla. Additionally, we employed mBERT and Bangla-bert-base to determine the specific regions from which these dialects originate. Our experimental results showed the highest BLEU score of 69.06 for the Mymensingh regional dialect and the lowest BLEU score of 36.75 for the Chittagong regional dialect. We also observed the lowest average word error rate of 0.1548 for the Mymensingh regional dialect and the highest of 0.3385 for the Chittagong regional dialect. For region detection, we achieved an accuracy of 85.86% with Bangla-bert-base and 84.36% with mBERT. This is the first large-scale investigation of machine translation from Bangla regional dialects to standard Bangla. We believe our findings will not only pave the way for future work on translating Bangla regional dialects into Bangla, but will also be useful in solving similar language-related challenges in low-resource conditions.
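    A minimal inference sketch with the Hugging Face transformers API hints at how such a dialect-to-standard translator would be used. The checkpoint name below is the public mT5 base model; the fine-tuning on the dialect data is assumed to have been done already, and no released fine-tuned weights are implied:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Public base checkpoint; a real system would load weights fine-tuned on
# the dialect-to-standard-Bangla sentence pairs instead.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")

def dialect_to_standard(sentence: str) -> str:
    """Translate a regional-dialect sentence into standard Bangla with a
    (fine-tuned) sequence-to-sequence model."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```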

    The audio auditor: user-level membership inference in Internet of Things voice services

    Get PDF
    With the rapid development of deep learning techniques, the popularity of voice services implemented on various Internet of Things (IoT) devices is ever increasing. In this paper, we examine user-level membership inference in the problem space of voice services by designing an audio auditor to verify whether a specific user had unwillingly contributed audio used to train an automatic speech recognition (ASR) model, under strict black-box access. With a user representation of the input audio data and its corresponding translated text, our trained auditor is effective in user-level auditing. We also observe that an auditor trained on specific data generalizes well regardless of the ASR model architecture. We validate the auditor on ASR models trained with LSTM, RNN, and GRU algorithms on two state-of-the-art pipelines, the hybrid ASR system and the end-to-end ASR system. Finally, we conduct a real-world trial of our auditor on iPhone Siri, achieving an overall accuracy exceeding 80%. We hope the methodology developed in this paper and our findings can inform privacy advocates seeking to overhaul IoT privacy.
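    At its core, such an auditor is a binary classifier over features derived from the ASR model's black-box outputs. The following minimal sketch uses an illustrative feature set and toy data, not the paper's actual features or results:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def audit_features(transcription: str, ground_truth: str) -> list:
    """Illustrative per-query features comparing the ASR output with the
    true text: the intuition is that a member's audio tends to be
    transcribed more faithfully than a non-member's."""
    hyp, ref = transcription.split(), ground_truth.split()
    overlap = len(set(hyp) & set(ref)) / max(len(ref), 1)
    return [len(hyp), len(ref), overlap]

# X: feature rows built from users' queries; y: 1 = the user's audio was
# in the ASR training set ("member"), 0 = it was not. Toy values only.
X = np.array([[7, 7, 0.95], [6, 7, 0.40], [8, 8, 0.90], [5, 7, 0.30]])
y = np.array([1, 0, 1, 0])
auditor = RandomForestClassifier(n_estimators=100).fit(X, y)
print(auditor.predict([[7, 7, 0.88]]))   # audit a new user's query
```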

    Consecutive Decoding for Speech-to-text Translation

    Full text link
    Speech-to-text translation (ST), which directly translates source-language speech into target-language text, has attracted intensive attention recently. However, combining speech recognition and machine translation in a single model places a heavy burden on the direct cross-modal, cross-lingual mapping. To reduce the learning difficulty, we propose COnSecutive Transcription and Translation (COSTT), an integral approach to speech-to-text translation. The key idea is to generate the source transcript and the target translation with a single decoder. This benefits model training, since additional large parallel text corpora can be fully exploited to enhance speech translation training. Our method is verified on three mainstream datasets: the Augmented LibriSpeech English-French dataset, the TED English-German dataset, and the TED English-Chinese dataset. Experiments show that our proposed COSTT outperforms the previous state-of-the-art methods. The code is available at https://github.com/dqqcasia/st. (Accepted by AAAI 2021.)
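    The consecutive-decoding idea is visible in the shape of the decoder's training target. A minimal sketch of how such targets could be assembled follows; the separator token is an assumption for illustration, not necessarily the paper's exact format:

```python
def costt_target(transcript: str, translation: str) -> str:
    """Single decoder output = source transcript, then a separator, then
    the target translation, so one decoder learns both tasks consecutively."""
    return f"{transcript} <sep> {translation}"

print(costt_target("thank you very much", "merci beaucoup"))
# -> "thank you very much <sep> merci beaucoup"
# At inference time, everything after <sep> is kept as the translation;
# text-only parallel corpora can additionally train the translation half.
```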

    Neural versus Phrase-Based Machine Translation Quality: a Case Study

    Get PDF
    Within the field of Statistical Machine Translation (SMT), the neural approach (NMT) has recently emerged as the first technology able to challenge the long-standing dominance of phrase-based approaches (PBMT). In particular, at the IWSLT 2015 evaluation campaign, NMT outperformed well-established state-of-the-art PBMT systems on English-German, a language pair known to be particularly hard because of morphological and syntactic differences. To understand in what respects NMT provides better translation quality than PBMT, we perform a detailed analysis of neural versus phrase-based SMT outputs, leveraging high-quality post-edits performed by professional translators on the IWSLT data. For the first time, our analysis provides useful insights into which linguistic phenomena are best modeled by neural models -- such as the reordering of verbs -- while pointing out other aspects that remain to be improved.
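    Analyses of this kind typically score each system output against its own post-edit. As a rough illustration, and a simplification rather than the paper's exact metric, a word-level edit rate against the post-edit can be computed as follows; real HTER/TER additionally allows block shifts:

```python
def edit_rate(hyp: str, post_edit: str) -> float:
    """Word-level Levenshtein distance between a system output and its
    human post-edit, normalized by post-edit length (a simplified
    HTER-style score, without TER's block shifts)."""
    h, r = hyp.split(), post_edit.split()
    # DP table: first row/column initialized to pure insert/delete costs.
    d = [[i + j if i * j == 0 else 0 for j in range(len(r) + 1)]
         for i in range(len(h) + 1)]
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (h[i - 1] != r[j - 1]))  # substitution
    return d[len(h)][len(r)] / max(len(r), 1)

print(edit_rate("the house small is", "the house is small"))  # 0.5
```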

    End-to-End Neural Speech Translation

    Get PDF
    This thesis is concerned with methods for improving the automatic translation of spoken language (speech translation for short). The input is an acoustic signal; the output is the corresponding text in another language. Applications are manifold, ranging from dialogue-based translation systems in limited domains to fully automatic lecture translation systems. Speech translation is a complex process that, in practice, still produces many errors. One reason is the split into a speech recognition component and a translation component: each component produces a certain number of errors on its own, and in addition the errors of the first component are passed on to the second (so-called error propagation), causing further errors in the output. Avoiding the error propagation problem is therefore a fundamental research topic in the speech translation field. Methods have been developed in the past to improve the interface between recognizer and translator, for instance by passing on several recognition hypotheses or by combining both models via finite state transducers. These, however, are largely based on outdated statistical translation methods, which by now have been replaced almost entirely by fully neural sequence-to-sequence models. This dissertation examines several approaches to improving speech translation, all motivated by the goal of avoiding error propagation, as well as by the challenges and opportunities of the new fully neural models for speech recognition and translation. In doing so, we develop in part entirely novel models and in part strategies for transferring successful classical ideas to neural models. We first consider a simpler variant of our problem: speech recognition. To develop speech translation models based entirely on neural sequence-to-sequence models, we must first ensure that we can solve this simpler problem satisfactorily with similar models. To this end, we first build a fully neural baseline speech recognition system based on results from the literature, which we then extend with a novel self-attentional architecture. We show that this both shortens training time and yields better insight into these networks, often described as black boxes, allowing them to be interpreted from a linguistic point of view. Next, we turn to the cascaded approach to speech translation. Here we assume that the output of a speech recognizer is given and that we want to translate it as accurately as possible. This requires coping with the recognizer's errors, which we achieve, first, through improved robustness of the translator and, second, by considering alternative recognition hypotheses. Improving the robustness of the translation component, our first contribution, is achieved by adding noise to the training inputs, whereby the model learns to deal better with erroneous inputs and in particular with speech recognition errors. Second, we develop a lattice-to-sequence translation model, i.e., a model that takes word lattices as input and transforms them into a translated word sequence.
This allows us to pass part of the recognizer's hypothesis space, in the form of such a word lattice, on to the translation component. The translation component thereby gains access to several alternative recognizer outputs and can learn during training to extract from them, on its own, the input that is optimal and least error-prone for translation. Finally, we arrive at the last and most important contribution of this dissertation. A promising new speech translation approach is direct modeling, i.e., without explicitly producing a source-language transcript as an intermediate step. This requires direct data, i.e., audio recordings paired with their textual translations, in contrast to cascaded models, which are trained on transcribed audio recordings together with independent parallel translated texts. For the first time, the new end-to-end trainable sequence-to-sequence models make this direct route possible in principle, and several research groups have already tested it accordingly; however, the results are partly contradictory, and it has so far remained unclear whether improvements over cascaded systems can be expected. We show here that this depends crucially on the amount of available data, which is easily explained by the fact that direct modeling is a considerably more complex problem than the two-step route. In machine learning, such situations often mean that more data is required. This leads us to a fundamental problem of this otherwise very promising approach: more direct training data is needed, even though in practice it is much harder to collect than training data for traditional systems. As a way out, we first test an obvious strategy for integrating additional traditional data into direct model training: multi-task training. In our experiments, however, this turns out to be insufficient. We therefore develop a new model which, similar to a cascade, is based on two modeling steps, but is trained entirely via backpropagation and, during translation, relies only on audio context vectors and is thus not affected by recognition errors. We show, first, that under ideal data conditions this model achieves better results than comparable direct and cascaded models and, second, that it benefits considerably more from additional traditional data than the simpler direct models do. We thereby show for the first time that end-to-end trainable speech translation models are a serious and practically relevant alternative to traditional approaches.
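    The robustness technique from the cascaded part lends itself to a short illustration. The following is a minimal sketch of corrupting clean MT training inputs so the translator learns to cope with ASR-like errors; the noise types and rates are illustrative assumptions, not the thesis's exact noise model:

```python
import random

def corrupt(sentence: str, p: float = 0.1) -> str:
    """Inject ASR-like noise into a clean training source sentence:
    randomly drop or duplicate words with probability p, so the MT model
    learns to translate despite recognition errors."""
    out = []
    for word in sentence.split():
        r = random.random()
        if r < p:            # simulate a deletion error
            continue
        out.append(word)
        if r > 1 - p:        # simulate an insertion/repetition error
            out.append(word)
    return " ".join(out)

random.seed(0)
print(corrupt("the quick brown fox jumps over the lazy dog"))
```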