In reading comprehension, generating sentence-level distractors is an
important and challenging task, which requires a deep understanding of the
article and the question. Traditional entity-centered methods can generate
only word-level or phrase-level distractors. Although recently proposed
neural methods such as the sequence-to-sequence (Seq2Seq) model show great
potential in generating creative text, previous neural approaches to
distractor generation ignore two important aspects. First, they do not model
the interactions between the article and the question, so the generated
distractors tend to be too general or not relevant to the question context.
Second, they do not emphasize the relationship between the distractor and the
article, so the generated distractors are not semantically relevant to the
article and thus fail to form a set of meaningful options. To solve the first
problem, we propose a co-attention enhanced hierarchical architecture that
better captures the interactions between the article and the question,
thereby guiding the decoder to generate more coherent distractors.
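As a rough illustration of this component, the sketch below shows one common
way to compute co-attention between article and question encoder states. It
is a minimal sketch under assumed names and shapes, not the paper's exact
formulation: the bilinear scoring function, the class name, and all tensor
dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Minimal co-attention sketch between article and question states."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Bilinear term used to score article-question token pairs (assumed).
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, article: torch.Tensor, question: torch.Tensor):
        # article:  (batch, a_len, hidden); question: (batch, q_len, hidden)
        # Affinity matrix over all token pairs: (batch, a_len, q_len)
        scores = torch.bmm(self.W(article), question.transpose(1, 2))
        # Question-aware article representation: (batch, a_len, hidden)
        a2q = torch.bmm(F.softmax(scores, dim=2), question)
        # Article-aware question representation: (batch, q_len, hidden)
        q2a = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), article)
        return a2q, q2a

# Toy usage with random tensors standing in for real encoder outputs.
if __name__ == "__main__":
    attn = CoAttention(hidden_size=64)
    article = torch.randn(2, 50, 64)
    question = torch.randn(2, 12, 64)
    a2q, q2a = attn(article, question)
    print(a2q.shape, q2a.shape)  # (2, 50, 64), (2, 12, 64)
```

The fused representations can then condition the decoder, which is one way
such an architecture could steer generation toward the question context.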
To alleviate the second problem, we add an additional semantic similarity
loss that encourages the generated distractors to be more relevant to the
article.
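For this auxiliary objective, one plausible form is sketched below; the
cosine-similarity formulation, the mean pooling, and the weighting
hyperparameter `lam` are assumptions for illustration, not the paper's
confirmed loss.

```python
import torch
import torch.nn.functional as F

def semantic_similarity_loss(distractor_hidden: torch.Tensor,
                             article_hidden: torch.Tensor) -> torch.Tensor:
    # Mean-pool token states into single sentence vectors: (batch, hidden).
    d_vec = distractor_hidden.mean(dim=1)
    a_vec = article_hidden.mean(dim=1)
    # Maximizing cosine similarity is equivalent to minimizing (1 - cos).
    return (1.0 - F.cosine_similarity(d_vec, a_vec, dim=-1)).mean()

def total_loss(gen_loss: torch.Tensor,
               distractor_hidden: torch.Tensor,
               article_hidden: torch.Tensor,
               lam: float = 0.5) -> torch.Tensor:
    # Standard generation loss plus the similarity term, weighted by a
    # hypothetical hyperparameter lam.
    return gen_loss + lam * semantic_similarity_loss(distractor_hidden,
                                                     article_hidden)
```

Adding such a term to the training objective pushes the decoder's outputs
toward the article's semantics without changing the generation architecture.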
Experimental results show that our model outperforms several strong baselines
on automatic metrics, achieving state-of-the-art performance. Further human
evaluation indicates that our generated distractors are more coherent and
more educative than those generated by the baselines.

Comment: 8 pages, 3 figures. Accepted by AAAI2020