    Modelling source- and target-language syntactic information as conditional context in interactive neural machine translation

    In interactive machine translation (MT), human translators correct errors in automatic translations in collaboration with the MT system, which is seen as an effective way to improve productivity in translation. In this study, we model source-language syntactic constituency parses and target-language syntactic descriptions in the form of supertags as conditional context for interactive prediction in neural MT (NMT). We found that the supertags significantly improve productivity gain in translation in interactive-predictive NMT (INMT), while syntactic parsing was found to be somewhat effective in reducing human effort in translation. Furthermore, when we model this source- and target-language syntactic information together as the conditional context, the two types complement each other, and our fully syntax-informed INMT model shows a statistically significant reduction in human effort for a French-to-English translation task in a reference-simulated setting, achieving a 4.30-point absolute (9.18% relative) improvement in word prediction accuracy (WPA) and a 4.84-point absolute (9.01% relative) reduction in word stroke ratio (WSR) over the baseline.
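The two evaluation metrics named in the abstract can be sketched as follows in a reference-simulated setting: the simulated user accepts each correctly predicted next word and types (one "word stroke") each word the system gets wrong. The `predict` interface and the accept-or-type protocol below are illustrative assumptions for a minimal sketch, not the authors' exact implementation.

```python
def simulate_interactive_session(predict, reference):
    """Reference-simulated interactive prediction.

    predict(prefix) returns the system's full-sentence completion
    (a list of words) given the validated prefix. The simulated user
    accepts a correct next-word prediction, or types the reference
    word otherwise (counted as one word stroke).
    """
    prefix, strokes, correct = [], 0, 0
    while len(prefix) < len(reference):
        hyp = predict(prefix)          # system completion for this prefix
        i = len(prefix)                # position of the next word
        if i < len(hyp) and hyp[i] == reference[i]:
            correct += 1               # prediction accepted by the user
        else:
            strokes += 1               # user types the correct word
        prefix.append(reference[i])    # prefix is always reference-valid
    wpa = correct / len(reference)     # word prediction accuracy
    wsr = strokes / len(reference)     # word stroke ratio
    return wpa, wsr


# Toy usage: a predictor that always emits the same (partly wrong) hypothesis.
reference = "the cat sat".split()
wpa, wsr = simulate_interactive_session(lambda prefix: "the dog sat".split(),
                                        reference)
# One of three words must be typed by the user: WPA = 2/3, WSR = 1/3.
```

Under this protocol WPA and WSR sum to 1, so an absolute gain in WPA mirrors an absolute drop in WSR; the abstract's two figures differ slightly, suggesting the paper computes them over different units or normalisations than this sketch assumes.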