Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives
This paper tackles the problem of reading comprehension over long narratives
where documents easily span over thousands of tokens. We propose a curriculum
learning (CL) based Pointer-Generator framework for reading/sampling over large
documents, enabling diverse training of the neural model based on the notion of
alternating contextual difficulty. This can be interpreted as a form of domain
randomization and/or generative pretraining during training. To this end, the
usage of the Pointer-Generator softens the requirement of having the answer
within the context, enabling us to construct diverse training samples for
learning. Additionally, we propose a new Introspective Alignment Layer (IAL),
which reasons over decomposed alignments using block-based self-attention. We
evaluate our proposed method on the NarrativeQA reading comprehension
benchmark, achieving state-of-the-art performance and improving existing baselines
by a relative margin on both BLEU-4 and Rouge-L. Extensive ablations confirm the
effectiveness of our proposed IAL and CL components. Comment: Accepted to ACL 201
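The block-based self-attention that the abstract attributes to the Introspective Alignment Layer can be illustrated with a minimal sketch: attention is computed independently within fixed-size blocks of the sequence rather than over all token pairs. The single-head formulation, block size, and shapes below are assumptions for clarity, not the paper's exact configuration.

```python
# Illustrative sketch of block-based self-attention (assumed single-head,
# non-overlapping blocks); not the paper's exact IAL implementation.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_self_attention(x, block_size):
    """Self-attention restricted to non-overlapping blocks.

    x: (seq_len, dim) array; seq_len must be divisible by block_size.
    Cost is O(seq_len * block_size) instead of O(seq_len ** 2).
    """
    seq_len, dim = x.shape
    blocks = x.reshape(seq_len // block_size, block_size, dim)
    # Scaled dot-product attention, computed independently per block.
    scores = blocks @ blocks.transpose(0, 2, 1) / np.sqrt(dim)
    weights = softmax(scores, axis=-1)
    out = weights @ blocks
    return out.reshape(seq_len, dim)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
y = block_self_attention(x, block_size=4)
print(y.shape)  # (8, 4)
```

Restricting attention to blocks is what makes reasoning over thousand-token narratives tractable, since the quadratic pairwise cost is paid only within each block.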
Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook
In recent years, reinforcement learning and bandits have transformed a wide
range of real-world applications, including healthcare, finance, recommendation
systems, robotics, and, last but not least, speech and natural language
processing. While most speech and language applications of reinforcement
learning center on improving the training of deep neural networks through its
flexible optimization properties, much ground remains to be explored in
harnessing reinforcement learning's other benefits, such as its reward-driven
adaptability, state representations, temporal structures, and generalizability.
In this survey, we present an overview of recent advancements
of reinforcement learning and bandits, and discuss how they can be effectively
employed to solve speech and natural language processing problems with models
that are adaptive, interactive and scalable. Comment: To appear in Expert Systems with Applications. Accompanying
INTERSPEECH 2022 Tutorial on the same topic. Including latest advancements in
large language models (LLMs).
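A minimal example of the bandit setting the survey covers is epsilon-greedy action selection, which balances exploring arms at random against exploiting the best estimate so far. The environment below (Bernoulli arms with hypothetical payout probabilities) is an illustration, not an example from the survey.

```python
# Epsilon-greedy multi-armed bandit sketch; arms and reward probabilities
# here are hypothetical, chosen only to demonstrate the algorithm.
import random

def epsilon_greedy(n_arms, pull, steps=1000, epsilon=0.1, seed=0):
    """Run epsilon-greedy for `steps` pulls; return per-arm value estimates."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: random arm
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        reward = pull(arm, rng)
        counts[arm] += 1
        # Incremental mean update of the chosen arm's value estimate.
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

# Hypothetical environment: arm i pays 1 with probability p[i], else 0.
p = [0.2, 0.5, 0.8]
est = epsilon_greedy(len(p), lambda a, rng: 1.0 if rng.random() < p[a] else 0.0)
print(est)  # estimates roughly track p; the best arm dominates
```

In a speech or language application, "arms" might correspond to candidate responses or system configurations, with user feedback serving as the reward signal.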