RetGen: A Joint Framework for Retrieval and Grounded Text Generation Modeling
Recent advances in large-scale pre-training, such as GPT-3, allow seemingly
high-quality text to be generated from a given prompt. However, such generation
systems often hallucinate facts and are not inherently designed to incorporate
useful external information. Grounded
generation models appear to offer remedies, but their training typically relies
on rarely available parallel data in which information-relevant documents are
provided for context. We propose a framework that alleviates this data
constraint by jointly training a grounded generator and document retriever on
the language model signal. The model learns to reward retrieval of the
documents with the highest utility in generation, and attentively combines them
using a Mixture-of-Experts (MoE) ensemble to generate follow-on text. We
demonstrate that both generator and retriever can take advantage of this joint
training and work synergistically to produce more informative and relevant text
in both prose and dialogue generation.

Comment: Accepted by AAAI-22, camera-ready version.
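The joint-training idea described above can be made concrete with a small sketch: the retriever's document scores define a distribution over candidate documents, the generator scores the target continuation conditioned on each document, and a per-token mixture over documents (the MoE-style combination) yields a likelihood whose gradient rewards documents with high utility in generation. This is a minimal illustrative reconstruction under assumed names and shapes (joint_retgen_loss, doc_scores, and so on), not the paper's actual implementation.

import torch
import torch.nn.functional as F

def joint_retgen_loss(doc_scores, per_doc_token_logprobs):
    """Illustrative joint loss for a retriever + grounded generator.

    doc_scores: (K,) retriever relevance scores for K candidate documents.
    per_doc_token_logprobs: (K, T) generator log-probability of each target
        token, conditioned on the prompt plus document k.
    Returns the negative log-likelihood of the target under a per-token
    mixture over documents; its gradient flows into both the retriever
    (through doc_scores) and the generator.
    """
    retrieval_logprobs = F.log_softmax(doc_scores, dim=0)       # log p(d_k | x)
    # Per-token mixture: log sum_k p(d_k | x) * p(y_t | x, d_k, y_<t)
    mixture = torch.logsumexp(
        retrieval_logprobs[:, None] + per_doc_token_logprobs, dim=0
    )                                                           # shape (T,)
    return -mixture.sum()

# Toy usage with random tensors standing in for real model outputs.
K, T, V = 4, 10, 100  # documents, target length, vocabulary size
doc_scores = torch.randn(K, requires_grad=True)
token_logprobs = torch.log_softmax(torch.randn(K, T, V, requires_grad=True), dim=-1)
target = torch.randint(0, V, (T,))
per_doc = token_logprobs[:, torch.arange(T), target]  # (K, T) target-token log-probs
loss = joint_retgen_loss(doc_scores, per_doc)
loss.backward()  # the language-model signal reaches the retriever via the mixture

Because the loss marginalizes over documents rather than committing to one, documents whose conditioned generations better explain the target receive larger gradients on their retrieval scores, which is one way the "reward retrieval of the documents with the highest utility" behavior can arise.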