A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena
Word reordering is one of the most difficult aspects of statistical machine
translation (SMT) and an important factor in its quality and efficiency.
Despite the vast amount of research published to date, the interest of the
community in this problem has not decreased, and no single method appears to be
strongly dominant across language pairs. Instead, the choice of the optimal
approach for a new translation task still seems to be mostly driven by
empirical trials. To orient the reader in this vast and complex research
area, we present a comprehensive survey of word reordering viewed as a
statistical modeling challenge and as a natural language phenomenon. The survey
describes in detail how word reordering is modeled within different
string-based and tree-based SMT frameworks and as a stand-alone task, including
systematic overviews of the literature in advanced reordering modeling. We then
question why some approaches are more successful than others in different
language pairs. We argue that, besides measuring the amount of reordering, it
is important to understand which kinds of reordering occur in a given language
pair. To this end, we conduct a qualitative analysis of word reordering
phenomena in a diverse sample of language pairs, based on a large collection of
linguistic knowledge. Empirical results in the SMT literature are shown to
support the hypothesis that a few linguistic facts can be very useful to
anticipate the reordering characteristics of a language pair and to select the
SMT framework that best suits them.
Comment: 44 pages, to appear in Computational Linguistics
Generating Distractors for Reading Comprehension Questions from Real Examinations
We investigate the task of distractor generation for multiple-choice reading
comprehension questions from examinations. In contrast to all previous work,
we do not aim to produce single words or short phrases as distractors;
instead, we endeavor to generate longer, semantically rich distractors that
are closer to those found in real reading comprehension examinations. Taking
a reading comprehension article, a question, and its correct option as input,
our goal is to generate several distractors that are related to the answer,
consistent with the semantic context of the question, and traceable in the
article. We propose a hierarchical encoder-decoder framework with static and
dynamic attention mechanisms to tackle this task. Specifically, the dynamic
attention combines sentence-level and word-level attention, varying at each
recurrent time step, to generate a more readable sequence. The static
attention modulates the dynamic attention so that it does not focus on
question-irrelevant sentences or on sentences that support the correct
option. Our proposed framework outperforms several strong baselines on the
first distractor generation dataset built from real reading comprehension
questions. In human evaluation, our generated distractors are more effective
at confusing the annotators than those generated by the baselines.
Comment: AAAI201
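The interaction between the static and dynamic attention described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the dot-product scoring, the renormalization steps, and all array names (`word_h`, `sent_h`, `static_w`, etc.) are assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(dec_state, word_h, sent_h, sent_ids, static_w):
    """One decoding step of combined static/dynamic attention.

    dec_state: (d,)          current decoder hidden state
    word_h:    (n_words, d)  word-level encoder states
    sent_h:    (n_sents, d)  sentence-level encoder states
    sent_ids:  (n_words,)    sentence index of each word
    static_w:  (n_sents,)    static attention, computed once per article
                             to down-weight question-irrelevant sentences
                             and sentences supporting the correct option
    """
    alpha_word = softmax(word_h @ dec_state)   # dynamic word-level attention
    alpha_sent = softmax(sent_h @ dec_state)   # dynamic sentence-level attention
    gamma = alpha_sent * static_w              # static modulation of sentence weights
    gamma /= gamma.sum()
    combined = alpha_word * gamma[sent_ids]    # each word scaled by its sentence weight
    combined /= combined.sum()
    context = combined @ word_h                # attention context vector
    return context, combined
```

At each recurrent time step the decoder would recompute the dynamic weights from its current state and feed `context` into the next step, while `static_w` stays fixed for the whole article.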