19,855 research outputs found

    Robust Tuning Datasets for Statistical Machine Translation

    We explore the idea of automatically crafting a tuning dataset for Statistical Machine Translation (SMT) that makes the hyper-parameters of the SMT system more robust with respect to specific deficiencies of the parameter tuning algorithms. This is an under-explored research direction that can enable better parameter tuning. In this paper, we achieve this goal by selecting a subset of the available sentence pairs that is better suited to specific combinations of optimizers, objective functions, and evaluation measures. We demonstrate the potential of the idea with the pairwise ranking optimization (PRO) optimizer, which is known to yield translations that are too short. We show that this problem can be alleviated by tuning on a subset of the development set, selected based on sentence length. In particular, using the longest 50% of the tuning sentences, we achieve a two-fold tuning speedup and improvements in BLEU score that rival those of alternatives that instead fix BLEU+1's smoothing.
    Comment: RANLP 2017
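    A minimal sketch of the length-based selection described above: keep the longest 50% of development-set sentence pairs before tuning. The toy data and the use of source-side token count as the length criterion are assumptions for illustration.

```python
# Length-based selection of tuning data: keep the longest 50% of (source,
# reference) pairs, as described in the abstract. The token-count criterion
# and the toy sentence pairs below are assumptions for illustration.

def select_longest_half(pairs):
    """Return the longest 50% of (source, reference) pairs by source length."""
    ranked = sorted(pairs, key=lambda p: len(p[0].split()), reverse=True)
    return ranked[: len(ranked) // 2]

if __name__ == "__main__":
    dev = [
        ("a short one", "une phrase courte"),
        ("this sentence is noticeably longer than the others in the set",
         "cette phrase est nettement plus longue que les autres"),
        ("a sentence of medium length here", "une phrase de longueur moyenne"),
        ("tiny", "minuscule"),
    ]
    for src, ref in select_longest_half(dev):
        print(src, "|||", ref)
```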

    Discriminative ridge regression algorithm for adaptation in statistical machine translation

    We present a simple and reliable method for estimating the log-linear weights of a state-of-the-art machine translation system, which takes advantage of the method known as discriminative ridge regression (DRR). Since inappropriate weight estimates lead to wide variability in translation quality, obtaining a reliable estimate of these weights is critical for machine translation research, and a variety of methods have been proposed to reach reasonable estimates. In this paper, we present an algorithmic description and empirical results showing that DRR provides translation quality comparable to state-of-the-art estimation methods (i.e., MERT and MIRA) at a reduced computational cost. Moreover, the reported empirical results are consistent across different corpora and language pairs.
    The research leading to these results was partially supported by projects CoMUN-HaT-TIN2015-70924-C2-1-R (MINECO/FEDER) and PROMETEO/2018/004. We also acknowledge NVIDIA for the donation of a GPU used in this work.
    Chinea-Ríos, M.; Sanchis-Trilles, G.; Casacuberta Nolla, F. (2019). Discriminative ridge regression algorithm for adaptation in statistical machine translation. Pattern Analysis and Applications, 22(4):1293-1305. https://doi.org/10.1007/s10044-018-0720-5
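    A rough sketch of the discriminative ridge regression idea: within an n-best list, regress quality-score differences (e.g., sentence-level BLEU) onto feature-vector differences and use the closed-form ridge solution as the log-linear weights. The data layout, the single-sentence setting, and the choice of the best hypothesis as the reference point are illustrative assumptions rather than the authors' exact formulation.

```python
# Discriminative-ridge-regression-style weight estimation (illustrative sketch,
# not the authors' exact algorithm): regress quality-score differences within
# an n-best list onto feature-vector differences and take the ridge solution
# as the log-linear weights.

import numpy as np

def drr_weights(feature_vectors, quality_scores, beta=1.0):
    """feature_vectors: (n_hyps, n_features); quality_scores: (n_hyps,)."""
    best = int(np.argmax(quality_scores))
    # Differences between the best hypothesis and every other hypothesis.
    X = feature_vectors[best] - np.delete(feature_vectors, best, axis=0)
    y = quality_scores[best] - np.delete(quality_scores, best)
    # Closed-form ridge regression: w = (X^T X + beta * I)^{-1} X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + beta * np.eye(d), X.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 14))   # 100 hypotheses, 14 model features
    scores = rng.uniform(size=100)       # e.g. sentence-level BLEU per hypothesis
    print(drr_weights(feats, scores))
```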

    Competence-based Curriculum Learning for Neural Machine Translation

    Current state-of-the-art NMT systems use large neural networks that are not only slow to train, but also often require many heuristics and optimization tricks, such as specialized learning rate schedules and large batch sizes. This is undesirable as it requires extensive hyperparameter tuning. In this paper, we propose a curriculum learning framework for NMT that reduces training time, reduces the need for specialized heuristics or large batch sizes, and results in better overall performance. Our framework consists of a principled way of deciding which training samples are shown to the model at different times during training, based on the estimated difficulty of a sample and the current competence of the model. Filtering training samples in this manner prevents the model from getting stuck in bad local optima, making it converge faster and reach a better solution than the common approach of uniformly sampling training examples. Furthermore, the proposed method can easily be applied to existing NMT models by simply modifying their input data pipelines. We show that our framework can help improve the training time and the performance of both recurrent neural network models and Transformers, achieving up to a 70% decrease in training time while at the same time obtaining accuracy improvements of up to 2.2 BLEU.
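    A small sketch of the competence-based sampling described above: each example gets a difficulty score (here, simply sentence length), difficulties are mapped to their empirical CDF, and at step t the model only samples from examples whose CDF value does not exceed the current competence c(t). The square-root competence schedule follows the kind of schedule described in the paper; the toy data, length-based difficulty, and hyper-parameters are illustrative assumptions.

```python
# Competence-based curriculum sampling (illustrative sketch): train only on
# examples whose difficulty CDF is below the current model competence c(t).
# Difficulty = sentence length and the toy corpus are assumptions.

import math
import random

def competence(t, total_steps, c0=0.01):
    """Square-root competence schedule growing from c0 to 1 over total_steps."""
    return min(1.0, math.sqrt(t * (1 - c0 ** 2) / total_steps + c0 ** 2))

def curriculum_batch(examples, difficulties, t, total_steps, batch_size=4):
    # Empirical CDF rank of each example's difficulty, in (0, 1].
    order = sorted(range(len(examples)), key=lambda i: difficulties[i])
    cdf = {idx: (rank + 1) / len(examples) for rank, idx in enumerate(order)}
    c = competence(t, total_steps)
    eligible = [i for i in range(len(examples)) if cdf[i] <= c]
    chosen = random.sample(eligible, min(batch_size, len(eligible)))
    return [examples[i] for i in chosen]

if __name__ == "__main__":
    sents = ["a b", "a b c d", "a", "a b c d e f g", "a b c", "a b c d e"]
    diffs = [len(s.split()) for s in sents]  # difficulty = sentence length
    print(curriculum_batch(sents, diffs, t=2000, total_steps=10000))  # early: only short sentences
    print(curriculum_batch(sents, diffs, t=9000, total_steps=10000))  # late: nearly the full set
```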

    Discourse Structure in Machine Translation Evaluation

    In this article, we explore the potential of using sentence-level discourse structure for machine translation evaluation. We first design discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with Rhetorical Structure Theory (RST). Then, we show that a simple linear combination with these measures can help improve various existing machine translation evaluation metrics in terms of correlation with human judgments at both the segment and the system level. This suggests that discourse information is complementary to the information used by many existing evaluation metrics, and could thus be taken into account when developing richer evaluation metrics, such as the WMT-14 winning combined metric DiscoTK-party. We also provide a detailed analysis of the relevance of various discourse elements and relations from the RST parse trees for machine translation evaluation. In particular, we show that: (i) all aspects of the RST tree are relevant, (ii) nuclearity is more useful than relation type, and (iii) the similarity of the translation RST tree to the reference tree is positively correlated with translation quality.
    Comment: machine translation, machine translation evaluation, discourse analysis. Computational Linguistics, 2017
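    A toy illustration of the tree comparison described above: RST trees are represented as nested (label, children...) tuples whose labels encode nuclearity and relation, and a simplified convolution kernel counts matching subtree fragments between the translation tree and the reference tree. The tiny trees and the simplified, unweighted Collins-Duffy-style kernel are illustrative assumptions; the paper's measures use full all-subtree kernels over RST parser output.

```python
# Simplified subtree-counting kernel over toy RST-like trees (illustrative
# sketch, not the paper's exact all-subtree kernel). Trees are nested
# (label, children...) tuples; labels encode nuclearity/relation.

def nodes(tree):
    """Yield every internal node (subtree) of a nested-tuple tree."""
    if isinstance(tree, tuple):
        yield tree
        for child in tree[1:]:
            yield from nodes(child)

def common(n1, n2):
    """Count matching subtree fragments rooted at the node pair (n1, n2)."""
    if not (isinstance(n1, tuple) and isinstance(n2, tuple)):
        return 1 if n1 == n2 else 0  # leaves stand in for EDUs
    if n1[0] != n2[0] or len(n1) != len(n2):
        return 0
    score = 1
    for c1, c2 in zip(n1[1:], n2[1:]):
        score *= 1 + common(c1, c2)
    return score

def tree_kernel(t1, t2):
    return sum(common(a, b) for a in nodes(t1) for b in nodes(t2))

if __name__ == "__main__":
    ref = ("Elaboration", ("Nucleus", "EDU1"), ("Satellite", "EDU2"))
    hyp = ("Elaboration", ("Nucleus", "EDU1"), ("Satellite", "EDU3"))
    print(tree_kernel(ref, hyp), tree_kernel(ref, ref))  # hyp vs. ref, ref vs. itself
```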