Neural Responding Machine for Short-Text Conversation
We propose Neural Responding Machine (NRM), a neural network-based response
generator for Short-Text Conversation. NRM takes the general encoder-decoder
framework: it formalizes the generation of response as a decoding process based
on the latent representation of the input text, while both encoding and
decoding are realized with recurrent neural networks (RNN). The NRM is trained
with a large amount of one-round conversation data collected from a
microblogging service. Empirical study shows that NRM can generate
grammatically correct and content-wise appropriate responses to over 75% of the
input text, outperforming state-of-the-art methods in the same setting, including retrieval-based and SMT-based models.
Comment: accepted as a full paper at ACL 201
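The general encoder-decoder framework the abstract refers to can be sketched in miniature: an RNN encoder folds the input post into a fixed-size latent vector, and an RNN decoder generates a response from that vector token by token. The sketch below is illustrative only, with an invented vocabulary and randomly initialised (untrained) toy weights, so it shows the data flow of such a model rather than NRM itself.

```python
import math
import random

random.seed(0)

# Toy vocabulary; all names and sizes here are illustrative, not from the paper.
VOCAB = ["<bos>", "<eos>", "hello", "how", "are", "you", "fine", "thanks"]
HIDDEN = 4

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

# Randomly initialised (untrained) parameters for a vanilla RNN.
W_embed = rand_matrix(len(VOCAB), HIDDEN)   # token embeddings
W_hh    = rand_matrix(HIDDEN, HIDDEN)       # hidden-to-hidden weights
W_out   = rand_matrix(HIDDEN, len(VOCAB))   # hidden-to-vocab logits

def rnn_step(x_vec, h):
    """One vanilla-RNN step: h' = tanh(x + W_hh h)."""
    return [math.tanh(x_vec[i] + sum(W_hh[i][j] * h[j] for j in range(HIDDEN)))
            for i in range(HIDDEN)]

def encode(tokens):
    """Encoder: fold the input post into a fixed-size latent vector."""
    h = [0.0] * HIDDEN
    for tok in tokens:
        h = rnn_step(W_embed[VOCAB.index(tok)], h)
    return h

def decode(h, max_len=5):
    """Decoder: greedily emit tokens conditioned on the latent state."""
    out, tok = [], "<bos>"
    for _ in range(max_len):
        h = rnn_step(W_embed[VOCAB.index(tok)], h)
        logits = [sum(h[i] * W_out[i][k] for i in range(HIDDEN))
                  for k in range(len(VOCAB))]
        tok = VOCAB[max(range(len(VOCAB)), key=logits.__getitem__)]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

response = decode(encode(["how", "are", "you"]))
print(response)  # some token sequence; weights are untrained, so content is arbitrary
```

A trained NRM would learn these parameters from the one-round microblog conversations the abstract mentions; here the point is only that decoding is conditioned entirely on the encoder's latent representation.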
Reinforced Video Captioning with Entailment Rewards
Sequence-to-sequence models have shown promising improvements on the temporal
task of video captioning, but they optimize word-level cross-entropy loss
during training. First, using policy gradient and mixed-loss methods for
reinforcement learning, we directly optimize sentence-level task-based metrics
(as rewards), achieving significant improvements over the baseline, based on
both automatic metrics and human evaluation on multiple datasets. Next, we
propose a novel entailment-enhanced reward (CIDEnt) that corrects
phrase-matching based metrics (such as CIDEr) to only allow for
logically-implied partial matches and avoid contradictions, achieving further
significant improvements over the CIDEr-reward model. Overall, our
CIDEnt-reward model achieves the new state of the art on the MSR-VTT dataset.
Comment: EMNLP 2017 (9 pages)
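The entailment correction described above can be sketched as follows; this is a hedged, plausible reading of the abstract (credit the phrase-matching reward only when an entailment classifier judges the generated caption to be logically implied by the reference, otherwise apply a penalty), with illustrative threshold and penalty values rather than the paper's.

```python
def cident_reward(cider_score, entailment_prob, threshold=0.5, penalty=1.0):
    """Entailment-corrected reward (sketch of the idea, not the paper's code):
    keep the CIDEr reward only when the entailment classifier says the caption
    is logically implied by the reference; otherwise subtract a penalty so that
    phrase-overlapping but contradictory captions are not rewarded.
    Threshold and penalty values here are illustrative assumptions."""
    if entailment_prob >= threshold:
        return cider_score
    return cider_score - penalty

# An entailed caption keeps its reward; a contradictory but
# phrase-overlapping caption is pushed below zero.
print(cident_reward(0.8, 0.9))
print(cident_reward(0.8, 0.1))
```

In the mixed-loss policy-gradient setup the abstract describes, a scalar reward of this kind would replace or be interpolated with the word-level cross-entropy objective during training.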
Modelling source- and target-language syntactic information as conditional context in interactive neural machine translation
In interactive machine translation (MT), human translators correct errors in automatic translations in collaboration with the MT system, which is seen as an effective way to improve productivity in translation. In this study, we model the source-language syntactic constituency parse and target-language syntactic descriptions in the form of supertags as conditional context for interactive prediction in neural MT (NMT). We found that the supertags significantly improve the productivity gain in interactive-predictive NMT (INMT), while syntactic parsing was found to be somewhat effective in reducing human effort in translation. Furthermore, when we model this source- and target-language syntactic information together as the conditional context, the two types complement each other, and our fully syntax-informed INMT model shows a statistically significant reduction in human effort on a French-to-English translation task in a reference-simulated setting, achieving a 4.30-point absolute (9.18% relative) improvement in word prediction accuracy (WPA) and a 4.84-point absolute (9.01% relative) reduction in word stroke ratio (WSR) over the baseline.
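The two metrics and the reference-simulated setting can be made concrete with a small simulation. The sketch below is an assumption about how such an evaluation is commonly run, not the authors' code: at each position the system predicts the next word; when it is wrong, the simulated user "types" the correct word (one word stroke), and the validated prefix always follows the reference.

```python
def simulate_inmt(reference, predict_next):
    """Reference-simulated INMT evaluation (hedged sketch, not the paper's code).
    WPA = fraction of reference words the system predicted correctly;
    WSR = word strokes (user interventions) per reference word."""
    correct = strokes = 0
    prefix = []
    for gold in reference:
        guess = predict_next(prefix)
        if guess == gold:
            correct += 1
        else:
            strokes += 1          # user intervenes with the correct word
        prefix.append(gold)       # validated prefix always matches the reference
    n = len(reference)
    return correct / n, strokes / n   # (WPA, WSR)

# Toy predictor that has memorised one bigram: after "la", predict "maison".
def toy_predictor(prefix):
    return "maison" if prefix and prefix[-1] == "la" else "<unk>"

wpa, wsr = simulate_inmt(["la", "maison", "bleue"], toy_predictor)
print(wpa, wsr)  # 1 of 3 words predicted -> WPA = 1/3, WSR = 2/3
```

Under this accounting, WPA and WSR sum to 1 for a purely word-level simulation, which is why the abstract's WPA improvement and WSR reduction are of similar magnitude.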
Improving the translation environment for professional translators
When using computer-aided translation systems in a typical professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one.
This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
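Of the topics listed, fuzzy matching is the easiest to illustrate: a translation-memory (TM) lookup returns the stored segment most similar to the query sentence, where similarity is typically derived from edit distance. The sketch below uses the common score 1 - edit_distance / max_length over word tokens; this is a generic baseline formulation, and SCATE's improved matching metrics may differ.

```python
def levenshtein(a, b):
    """Word-level edit distance between two token sequences (standard DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(query, memory, threshold=0.6):
    """Return the best TM entry (source, target, score) above the threshold.
    Similarity = 1 - edit_distance / max_length, a common fuzzy-match score;
    the 0.6 threshold is an illustrative assumption."""
    best, best_score = None, threshold
    for source, target in memory:
        s, q = source.split(), query.split()
        score = 1 - levenshtein(q, s) / max(len(q), len(s))
        if score >= best_score:
            best, best_score = (source, target, score), score
    return best

tm = [("the red car", "la voiture rouge"), ("a blue house", "une maison bleue")]
print(fuzzy_match("the red cars", tm))  # matches "the red car" at score 2/3
```

A translator would then post-edit the retrieved target segment rather than translate from scratch, which is where the productivity gain of fuzzy matching comes from.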