
An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing

Abstract

In this work, we explore multiple neural architectures adapted to the task of automatic post-editing of machine translation output. We focus on neural end-to-end models that combine both inputs $mt$ (raw MT output) and $src$ (source-language input) in a single neural architecture, modeling $\{mt, src\} \rightarrow pe$ directly. Apart from that, we investigate the influence of hard-attention models, which seem to be well suited for monolingual tasks, as well as combinations of both ideas. We report results on the data sets provided during the WMT-2016 shared task on automatic post-editing and demonstrate that dual-attention models that incorporate all available data in the APE scenario in a single model improve on the best shared-task system and on all other results published after the shared task. Dual-attention models combined with hard attention remain competitive despite applying fewer changes to the input.

Comment: Accepted for presentation at IJCNLP 2017
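The abstract describes a decoder that attends over two encoded inputs, $mt$ and $src$, at once. The following is a minimal PyTorch sketch of such a dual-attention decoder step, not the authors' implementation; the class name, layer sizes, and the choice of a bilinear attention scorer and GRU cell are illustrative assumptions.

```python
# Hypothetical sketch of a dual-attention decoder step: the decoder attends
# separately over the encoded mt and src sequences and feeds both context
# vectors, together with the previous target embedding, into its RNN cell.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionDecoderStep(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Separate bilinear attention projections for the mt and src encoders.
        self.attn_mt = nn.Linear(hid_dim, hid_dim, bias=False)
        self.attn_src = nn.Linear(hid_dim, hid_dim, bias=False)
        # Decoder cell consumes the embedding plus both context vectors.
        self.rnn = nn.GRUCell(emb_dim + 2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def attend(self, query, keys, proj):
        # query: (batch, hid), keys: (batch, len, hid) -> context: (batch, hid)
        scores = torch.bmm(keys, proj(query).unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)

    def forward(self, prev_token, hidden, enc_mt, enc_src):
        ctx_mt = self.attend(hidden, enc_mt, self.attn_mt)
        ctx_src = self.attend(hidden, enc_src, self.attn_src)
        rnn_in = torch.cat([self.embed(prev_token), ctx_mt, ctx_src], dim=-1)
        hidden = self.rnn(rnn_in, hidden)
        return self.out(hidden), hidden
```

The key point the sketch illustrates is that both sources of information enter a single model: the decoder state is conditioned on the raw MT output and the source sentence at every step, rather than post-editing from $mt$ alone.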
