Compact Personalized Models for Neural Machine Translation
We propose and compare methods for gradient-based domain adaptation of
self-attentive neural machine translation models. We demonstrate that a large
proportion of model parameters can be frozen during adaptation with minimal or
no reduction in translation quality by encouraging structured sparsity in the
set of offset tensors during learning via group lasso regularization. We
evaluate this technique for both batch and incremental adaptation across
multiple data sets and language pairs. Our system architecture - combining a
state-of-the-art self-attentive model with compact domain adaptation - provides
high quality personalized machine translation that is both space and time
efficient.
Comment: Published at the 2018 Conference on Empirical Methods in Natural Language Processing.
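The core idea, learning small additive offset tensors on top of a frozen base model while a group-lasso penalty drives entire offsets to exactly zero, can be sketched as follows. This is a minimal NumPy illustration under assumed names, not the authors' implementation:

```python
import numpy as np

def group_lasso_penalty(offsets, lam):
    """Sum of unsquared L2 norms over parameter groups.

    Penalizing each offset tensor's norm as a whole (rather than
    element-wise, as plain L1 would) encourages entire groups to
    shrink to exactly zero, so they can be dropped after adaptation.
    """
    return lam * sum(np.linalg.norm(o) for o in offsets)

def proximal_step(offsets, lam, lr):
    """Proximal operator for group lasso, applied after each gradient
    step: groups whose norm falls below lam * lr are zeroed outright,
    the rest are shrunk toward zero."""
    out = []
    for o in offsets:
        n = np.linalg.norm(o)
        if n <= lam * lr:
            out.append(np.zeros_like(o))
        else:
            out.append(o * (1.0 - lam * lr / n))
    return out
```

During adaptation only the offsets are updated; the base parameters stay frozen, so a personalized model reduces to the sparse set of nonzero offset groups, which is what makes the approach space-efficient.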
Leveraging online user feedback to improve statistical machine translation
In this article we present a three-step methodology for dynamically improving a statistical machine translation (SMT) system by incorporating human feedback in the form of free edits on the system translations. We target feedback provided by casual users, which is typically error-prone. Thus, we first propose a filtering step to automatically identify the better user-edited translations and discard the useless ones. A second step produces a pivot-based alignment between source and user-edited sentences, focusing on the errors made by the system. Finally, a third step produces a new translation model and combines it linearly with the one from the original system. We perform a thorough evaluation on a real-world dataset collected from the Reverso.net translation service and show that every step in our methodology contributes significantly to improving a general-purpose SMT system. Interestingly, the quality improvement is due not only to increased lexical coverage, but also to better lexical selection, reordering, and morphology. Finally, we show the robustness of the methodology by applying it to a different scenario, in which the new examples come from an automatically Web-crawled parallel corpus. Using exactly the same architecture and models again provides a significant improvement in the translation quality of a general-purpose baseline SMT system.
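The third step, linear combination of the feedback-derived model with the original one, can be sketched over phrase-translation probability tables. This is a generic illustration with hypothetical table and weight names, not the paper's actual models:

```python
def interpolate_models(base, feedback, alpha=0.8):
    """Linearly combine two phrase-translation probability tables.

    base, feedback: dicts mapping (source_phrase, target_phrase) -> p(t|s).
    alpha: weight of the original model; (1 - alpha) goes to the
    feedback-derived model. Pairs absent from one table contribute 0,
    so new phrase pairs learned from user edits enter the combined
    model with weight (1 - alpha) * p.
    """
    keys = set(base) | set(feedback)
    return {k: alpha * base.get(k, 0.0) + (1 - alpha) * feedback.get(k, 0.0)
            for k in keys}
```

In practice alpha would be tuned on held-out data; a high alpha keeps the general-purpose model dominant while still admitting corrections from the (noisier) user feedback.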
eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing
Training models for the automatic correction of machine-translated text
usually relies on data consisting of (source, MT, human post-edit) triplets
providing, for each source sentence, examples of translation errors with the
corresponding corrections made by a human post-editor. Ideally, a large amount
of data of this kind should allow the model to learn reliable correction
patterns and effectively apply them at test stage on unseen (source, MT) pairs.
In practice, however, their limited availability calls for solutions that also
integrate in the training process other sources of knowledge. Along this
direction, state-of-the-art results have been recently achieved by systems
that, in addition to a limited amount of available training data, exploit
artificial corpora that approximate elements of the "gold" training instances
with automatic translations. Following this idea, we present eSCAPE, the
largest freely-available Synthetic Corpus for Automatic Post-Editing released
so far. eSCAPE consists of millions of entries in which the MT element of the
training triplets has been obtained by translating the source side of
publicly-available parallel corpora, and using the target side as an artificial
human post-edit. Translations are obtained both with phrase-based and neural
models. For each MT paradigm, eSCAPE contains 7.2 million triplets for
English-German and 3.3 million for English-Italian, resulting in a total of
14.4 and 6.6 million instances, respectively. The usefulness of eSCAPE is demonstrated
through experiments in a general-domain scenario, the most challenging one for
automatic post-editing. For both language directions, the models trained on our
artificial data always improve MT quality with statistically significant gains.
The current version of eSCAPE can be freely downloaded from:
http://hltshare.fbk.eu/QT21/eSCAPE.html
Comment: Accepted at LREC 2018.
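The corpus construction described above can be sketched in a few lines: for each (source, target) pair in a parallel corpus, the MT element is an automatic translation of the source, and the original target side stands in for the human post-edit. The `translate` callable here is a placeholder for any MT system (phrase-based or neural in eSCAPE's case):

```python
def build_escape_triplets(parallel_corpus, translate):
    """Turn a parallel corpus into artificial APE training triplets.

    parallel_corpus: iterable of (source, target) sentence pairs.
    translate: a function mapping a source sentence to an MT hypothesis.
    Yields (source, mt_output, artificial_post_edit) triplets, where
    the original target side plays the role of the human post-edit.
    """
    for source, target in parallel_corpus:
        yield (source, translate(source), target)
```

The approximation is that the target side was produced independently of the MT output rather than by correcting it, which is exactly the "gold vs. artificial" gap the abstract describes.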
Improving the translation environment for professional translators
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one.
This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
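Of the topics listed, fuzzy matching is the most self-contained: a standard baseline scores translation-memory candidates by word-level edit distance normalized by length. This is the conventional formulation, not SCATE's improved matcher:

```python
def fuzzy_match_score(a, b):
    """Word-level fuzzy match score: 1 - edit_distance / max_length.

    A score of 1.0 is an exact match; translation memories typically
    only propose candidates above a threshold (e.g. 0.7).
    """
    x, y = a.split(), b.split()
    # Standard Levenshtein dynamic programme over words.
    prev = list(range(len(y) + 1))
    for i, wx in enumerate(x, 1):
        cur = [i]
        for j, wy in enumerate(y, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (wx != wy)))  # substitution
        prev = cur
    return 1.0 - prev[-1] / max(len(x), len(y), 1)
```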
Online learning via dynamic reranking for Computer Assisted Translation
New techniques for online adaptation in computer-assisted translation are explored and compared to previously existing approaches. Under the online adaptation paradigm, the translation system needs to adapt itself to real-world changing scenarios, where training and tuning may only take place once, when the system is set up for the first time. For this purpose, post-edit information, as described by a given quality measure, is used as valuable feedback within a dynamic reranking algorithm. Two possible approaches are presented and evaluated. The first relies on the well-known perceptron algorithm, whereas the second is a novel approach that uses Ridge regression to compute the optimum scaling factors within a state-of-the-art SMT system. Experimental results show that such algorithms are able to improve translation quality by learning from the errors produced by the system on a sentence-by-sentence basis.
This paper is based upon work supported by the EC (FEDER/FSE) and the Spanish MICINN under projects MIPRCV “Consolider Ingenio 2010” (CSD2007-00018) and iTrans2 (TIN2009-14511). Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project, by the Generalitat Valenciana under grant Prometeo/2009/014 and scholarship GV/2010/067, and by the UPV under grant 20091027.
Martínez Gómez, P.; Sanchis Trilles, G.; Casacuberta Nolla, F. (2011). Online learning via dynamic reranking for Computer Assisted Translation. In: Computational Linguistics and Intelligent Text Processing. Springer Verlag (Germany). 6609:93-105. https://doi.org/10.1007/978-3-642-19437-5_8
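The Ridge-based variant can be sketched as fitting scaling factors that map n-best feature vectors to an observed quality measure, then reranking with the learned weights. This is a minimal closed-form illustration on toy data, not the paper's exact setup:

```python
import numpy as np

def ridge_weights(features, quality, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y.

    features: (n_hypotheses, n_features) matrix of model scores for
    an n-best list; quality: per-hypothesis quality measure (e.g.
    sentence-level BLEU, or negative TER). The learned w rescales
    the log-linear feature weights used to rerank later sentences.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(quality, dtype=float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def rerank(features, w):
    """Index of the hypothesis with the highest rescored value."""
    return int(np.argmax(np.asarray(features, dtype=float) @ w))
```

The L2 term keeps the update stable when an n-best list provides only a handful of (feature, quality) pairs, which is the situation in sentence-by-sentence online learning.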