Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning
The Visual Dialogue task requires an agent to engage in a conversation about
an image with a human. It represents an extension of the Visual Question
Answering task in that the agent needs to answer a question about an image, but
it needs to do so in light of the previous dialogue that has taken place. The
key challenge in Visual Dialogue is thus maintaining a consistent and natural
dialogue while continuing to answer questions correctly. We present a novel
approach that combines Reinforcement Learning and Generative Adversarial
Networks (GANs) to generate more human-like responses to questions. The GAN
helps overcome the relative paucity of training data, and the tendency of the
typical MLE-based approach to generate overly terse answers. Critically, the
GAN is tightly integrated into the attention mechanism that generates
human-interpretable reasons for each answer. This means that the discriminative
model of the GAN has the task of assessing whether a candidate answer is
generated by a human or not, given the provided reason. This is significant
because it drives the generative model to produce high quality answers that are
well supported by the associated reasoning. The method also achieves
state-of-the-art results on the primary benchmark.
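The core idea above, a discriminator that judges an answer only in the context of the reasoning that produced it, can be sketched in miniature. This is a hypothetical toy probe, not the paper's actual network: `conditional_discriminator`, its linear weights, and the fixed 4-dimensional vectors are all illustrative assumptions; the real model learns these components jointly with the generator.

```python
import numpy as np

def conditional_discriminator(answer_vec, reason_vec, W, b):
    """Score the probability that an answer is human-written, given a reason.

    The answer embedding and the attention-derived "reason" vector are
    concatenated before scoring, so the discriminator can only judge the
    answer jointly with the reasoning that supports it (toy linear probe;
    the paper's discriminator is a learned neural network).
    """
    x = np.concatenate([answer_vec, reason_vec])
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid score in (0, 1)

rng = np.random.default_rng(1)
W = rng.normal(size=8)  # toy weights for a 4-dim answer + 4-dim reason
score = conditional_discriminator(rng.normal(size=4), rng.normal(size=4), W, 0.0)
```

Because the reason vector is part of the input, an answer that is fluent but unsupported by the attended evidence can still receive a low score, which is what pushes the generator toward answers grounded in their reasoning.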
A Differentiable Generative Adversarial Network for Open Domain Dialogue
Paper presented at IWSDS 2019: International Workshop on Spoken Dialogue Systems Technology, Siracusa, Italy, April 24-26, 2019.

This work presents a novel methodology to train open-domain neural dialogue systems within the framework of Generative Adversarial Networks using gradient-based optimization methods. We avoid the non-differentiability inherent to text-generating networks by approximating the word vector corresponding to each generated token via a top-k softmax. We show that a weighted average of the word vectors of the most probable tokens, computed from the probabilities resulting from the top-k softmax, leads to a good approximation of the word vector of the generated token. Finally, we demonstrate through a human evaluation process that training a neural dialogue system via adversarial learning with this method successfully discourages it from producing generic responses; instead, it tends to produce more informative and varied ones.

This work has been partially funded by the Basque Government under grant PRE_2017_1_0357, by the University of the Basque Country UPV/EHU under grant PIF17/310, and by the H2020 RIA EMPATHIC (Grant N: 769872).
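The top-k softmax approximation described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed shapes (a logit vector over the vocabulary and an embedding matrix with one row per token); the function name and `k` default are ours, not the paper's:

```python
import numpy as np

def topk_softmax_embedding(logits, embedding, k=5):
    """Differentiable stand-in for the generated token's word vector.

    Instead of a hard (non-differentiable) argmax over the vocabulary,
    keep the k most probable tokens, renormalise with a softmax restricted
    to those k logits, and return the probability-weighted average of
    their embedding vectors.
    """
    topk_idx = np.argsort(logits)[-k:]       # indices of the k largest logits
    topk_logits = logits[topk_idx]
    probs = np.exp(topk_logits - topk_logits.max())
    probs /= probs.sum()                     # softmax over the top-k only
    return probs @ embedding[topk_idx]       # weighted average of word vectors
```

When one logit dominates, the weighted average collapses to (almost exactly) that token's embedding, which is why it serves as a good approximation of the generated token's word vector while keeping gradients flowing to the generator.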
Reinforcement Learning for Generative AI: A Survey
Deep Generative AI has been a long-standing essential topic in the machine
learning community, which can impact a number of application areas like text
generation and computer vision. The major paradigm to train a generative model
is maximum likelihood estimation, which pushes the learner to capture and
approximate the target data distribution by decreasing the divergence between
the model distribution and the target distribution. This formulation
successfully establishes the objective of generative tasks, but it cannot
satisfy all the requirements a user might expect from a generative model.
Reinforcement learning, which injects new training signals by creating new
objectives, has demonstrated its power and flexibility to incorporate human
inductive bias from multiple angles, such as adversarial learning,
hand-designed rules, and learned reward models.
As a result, reinforcement learning has become a trending research field and
has stretched the limits of generative AI in both model design and
application, and recent advances warrant a comprehensive review. While recent
surveys each focus on a particular application area, this survey offers a
high-level review spanning a range of them. We provide a rigorous taxonomy of
the field and broad coverage of models and applications, including the
fast-developing area of large language models. We conclude by outlining
directions that might address the limitations of current models and expand
the frontiers of generative AI.
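The survey's central paradigm, injecting a training signal that maximum likelihood cannot express, can be illustrated with a minimal REINFORCE loop. Everything here is a toy assumption: a three-token "generator" parameterised by bare logits, and a `reward` function standing in for any of the signals the abstract lists (a discriminator score, a hand-designed rule, or a learned reward model).

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)  # logits of a toy categorical "generator" over 3 tokens

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(token):
    # Stand-in for an external training signal: discriminator score,
    # hand-designed rule, or learned reward model.
    return 1.0 if token == 2 else 0.0

lr = 0.5
for _ in range(200):
    probs = softmax(theta)
    token = rng.choice(3, p=probs)          # sample an output from the policy
    r = reward(token)
    grad = -probs                            # REINFORCE: grad of log pi(token)
    grad[token] += 1.0                       # is one_hot(token) - probs
    theta += lr * r * grad                   # ascend reward-weighted gradient
```

The reward never appears inside a likelihood; it only reweights the policy gradient, which is precisely how RL lets a generative model optimise objectives that have no differentiable MLE formulation.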