Question Dependent Recurrent Entity Network for Question Answering
Question Answering is a task that requires building models capable of
providing answers to questions expressed in human language. Full question
answering involves some form of reasoning ability. We introduce a neural
network architecture for this task, which is a form of Memory Network, that
recognizes entities and their relations to answers through a focus attention
mechanism. Our model is named Question Dependent Recurrent Entity Network
and extends the Recurrent Entity Network by exploiting aspects of the question
during the memorization process. We validate the model on both synthetic and
real datasets: the bAbI question answering dataset and the CNN & Daily News
reading comprehension dataset. In our experiments, the model achieved
state-of-the-art results on the former and competitive results on the latter.

Comment: 14 pages
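The question-dependent memorization described above can be pictured as a gated write over a bank of entity memory slots. The sketch below is a simplified, hypothetical rendering: the learned projection matrices of the full model are omitted, and the gate combines the current sentence with each slot's content, its key, and the question, which is what makes the gating question-dependent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qdren_step(memory, keys, sentence, question):
    """One memorization step over entity memory slots (simplified sketch).

    Each slot i opens its write gate based on how the current sentence s
    relates to the slot's content h_i, its key w_i, and the question q.
    """
    # gate_i = sigmoid(s.h_i + s.w_i + s.q)  -- the scalar s.q is shared by all slots
    gate = sigmoid(memory @ sentence + keys @ sentence
                   + np.full(len(keys), question @ sentence))
    # simplified candidate content (the full model applies learned projections)
    candidate = np.tanh(memory + keys + sentence)
    memory = memory + gate[:, None] * candidate
    # normalize each slot to keep activations bounded
    norms = np.linalg.norm(memory, axis=1, keepdims=True)
    return memory / np.maximum(norms, 1e-8)

# toy usage: 4 entity slots, 8-dimensional embeddings
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))
keys = rng.normal(size=(4, 8))
new_memory = qdren_step(memory, keys,
                        sentence=rng.normal(size=8),
                        question=rng.normal(size=8))
```

Because the question score enters every slot's gate, sentences irrelevant to the question contribute less to the memory, which is the intuition behind conditioning memorization on the question.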
Personalizing Dialogue Agents via Meta-Learning
Existing personalized dialogue models use human designed persona descriptions
to improve dialogue consistency. Collecting such descriptions from existing
dialogues is expensive and requires hand-crafted feature designs. In this
paper, we propose to extend Model-Agnostic Meta-Learning (MAML)(Finn et al.,
2017) to personalized dialogue learning without using any persona descriptions.
Our model learns to quickly adapt to new personas by leveraging only a few
dialogue samples collected from the same user, which is fundamentally different
from conditioning the response on the persona descriptions. Empirical results
on the Persona-Chat dataset (Zhang et al., 2018) indicate that our solution
outperforms non-meta-learning baselines on automatic evaluation metrics and
in terms of human-evaluated fluency and consistency.

Comment: Accepted at ACL 2019. Zhaojiang Lin* and Andrea Madotto* contributed
equally to this work.
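The few-shot adaptation idea above — meta-learn an initialization that adapts to a new persona from only a few of that user's dialogue samples — can be illustrated with a first-order MAML loop on a toy regression stand-in. The real model is a dialogue generator; the linear model, synthetic "personas", and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def loss_grad(w, X, y):
    """Gradient of mean squared error for the linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.01, inner_steps=3):
    """One first-order MAML meta-update.

    Each task is (X_support, y_support, X_query, y_query); the small
    support set plays the role of the few dialogue samples collected
    from one user, and the query set measures post-adaptation quality.
    """
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        w_task = w.copy()
        for _ in range(inner_steps):            # fast adaptation to the persona
            w_task -= inner_lr * loss_grad(w_task, Xs, ys)
        meta_grad += loss_grad(w_task, Xq, yq)  # first-order meta-gradient
    return w - outer_lr * meta_grad / len(tasks)

# toy personas: each task's true weights perturb a shared mean vector
rng = np.random.default_rng(0)
d = 5
w_mean = rng.normal(size=d)

def make_task():
    w_true = w_mean + 0.1 * rng.normal(size=d)
    Xs, Xq = rng.normal(size=(10, d)), rng.normal(size=(10, d))
    return Xs, Xs @ w_true, Xq, Xq @ w_true

tasks = [make_task() for _ in range(8)]
w = np.zeros(d)
for _ in range(100):
    w = maml_step(w, tasks)
```

Meta-training moves the initialization toward the structure shared across personas, so a few inner-loop steps on a new user's samples suffice — the optimization-based alternative to conditioning responses on persona descriptions.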