Goal-Oriented Visual Question Generation via Intermediate Rewards
© 2018, Springer Nature Switzerland AG. Despite significant progress on a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be an intractable challenge. Towards this end, we propose a Deep Reinforcement Learning framework based on three new intermediate rewards, namely goal-achieved, progressive, and informativeness, that encourage the generation of succinct questions, which in turn uncover valuable information towards the overall goal. By directly optimizing for questions that work quickly towards fulfilling the overall goal, we avoid the tendency of existing methods to generate long series of inane queries that add little value. We evaluate our model on the GuessWhat?! dataset and show that the resulting questions can help a standard ‘Guesser’ identify a specific object in an image at a much higher success rate.
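The abstract names three intermediate reward signals but not their exact form. A minimal sketch of how such signals could be combined into a per-turn reward and a discounted return for policy-gradient training is shown below; the function names, weights, and the entropy-based informativeness term are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def step_reward(goal_achieved, prob_before, prob_after, answer_entropy,
                w_goal=1.0, w_prog=1.0, w_info=0.5):
    """One turn's intermediate reward (illustrative weighting).

    goal_achieved: 1.0 if the guesser found the target after this turn, else 0.0.
    prob_before / prob_after: guesser's probability of the target object before
        and after this question-answer pair (the "progressive" idea: reward
        questions that raise it).
    answer_entropy: uncertainty of the answer; an informative question is one
        whose answer was not a foregone conclusion (assumed proxy).
    """
    r_goal = w_goal * goal_achieved
    r_prog = w_prog * (prob_after - prob_before)  # reward measurable progress
    r_info = w_info * answer_entropy              # penalize redundant questions
    return r_goal + r_prog + r_info

def discounted_returns(rewards, gamma=0.9):
    """Standard discounted return over a dialogue of T turns."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```

A question that ends the game successfully and also lifted the guesser's belief would score highest, which is exactly the "succinct, goal-directed" behavior the reward design targets.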
Improving Search through A3C Reinforcement Learning based Conversational Agent
We develop a reinforcement-learning-based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent. Our approach caters to subjective search, where the user is seeking digital assets such as images, which is fundamentally different from tasks with objective and limited search modalities. Labeled conversational data is generally not available in such search tasks, and training the agent through human interactions can be time-consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, accelerating the bootstrapping of the agent. We develop an A3C-based context-preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on the average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and reach better states.

Comment: 17 pages, 7 figures
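The core idea of the stochastic virtual user is to sample plausible user behavior cheaply so the agent can be bootstrapped without live humans. A minimal sketch of that idea follows; the intents, response strings, and satisfaction dynamics are invented for illustration and are not the paper's simulator.

```python
import random

class VirtualUser:
    """Stochastic stand-in for a real searcher (illustrative sketch).

    Samples a response kind (refine / accept / abandon) from a hidden
    satisfaction state that helpful agent actions nudge upward.
    """

    RESPONSES = {
        "refine":  ["show me darker images", "only landscapes please"],
        "accept":  ["this one works", "great, I'll take it"],
        "abandon": ["never mind", "this isn't helping"],
    }

    def __init__(self, satisfaction_gain=0.3, seed=None):
        self.rng = random.Random(seed)
        self.satisfaction = 0.0
        self.gain = satisfaction_gain

    def respond(self, agent_action):
        # A helpful action raises satisfaction; the next user move is then
        # sampled stochastically, so repeated rollouts give varied dialogues.
        if agent_action == "show_relevant_results":
            self.satisfaction = min(1.0, self.satisfaction + self.gain)
        if self.rng.random() < self.satisfaction:
            kind = "accept"
        elif self.rng.random() < 0.1:
            kind = "abandon"
        else:
            kind = "refine"
        return kind, self.rng.choice(self.RESPONSES[kind])
```

Because the simulator is cheap to query, thousands of episodes can be rolled out to train an A3C or Q-learning agent before any human-in-the-loop refinement.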
Visual Dialogue State Tracking for Question Generation
GuessWhat?! is a visual dialogue task between a guesser and an oracle. The guesser aims to locate an object in an image, known only to the oracle, by asking a sequence of Yes/No questions. Asking proper questions as the dialogue progresses is vital for achieving a successful final guess; as a result, the progress of the dialogue should be properly represented and tracked. Previous models for question generation pay less attention to the representation and tracking of dialogue states, and are therefore prone to asking low-quality questions such as repeated questions. This paper proposes a visual dialogue state tracking (VDST) based method for question generation. A visual dialogue state is defined as the distribution over objects in the image together with representations of those objects. Representations of objects are updated as the distribution over objects changes. An object-difference-based attention is used to decode new questions. The distribution over objects is updated by comparing the question-answer pair with the objects. Experimental results on the GuessWhat?! dataset show that our model significantly outperforms existing methods and achieves new state-of-the-art performance. Notably, our model reduces the rate of repeated questions from more than 50% to 21.9% compared with previous state-of-the-art methods.

Comment: 8 pages, 4 figures; accepted for oral presentation at AAAI-2020
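The abstract's two moving parts are a distribution over candidate objects and an object-difference signal for the question decoder. A minimal numeric sketch of both, assuming simple dot-product compatibility in place of the paper's learned modules, is given below; `update_distribution` and `object_difference` are illustrative stand-ins.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_distribution(pi, obj_feats, qa_feat, answer_is_yes):
    """Re-weight objects by how well each matches the question-answer pair.

    pi: current distribution over objects (the dialogue state).
    obj_feats: one feature vector per candidate object.
    qa_feat: a vector encoding the question-answer pair (assumed given).
    """
    scores = obj_feats @ qa_feat              # compatibility per object
    if not answer_is_yes:
        scores = -scores                      # a "No" pushes mass away
    return softmax(np.log(pi + 1e-12) + scores)  # multiplicative re-weighting

def object_difference(pi, obj_feats):
    """How each object differs from the pi-weighted mean object; a decoder
    attending to these differences is steered toward questions that still
    discriminate, rather than repeating resolved ones."""
    mean = pi @ obj_feats
    return obj_feats - mean
```

After a few question-answer pairs the distribution sharpens onto one object, and the difference vectors of ruled-out objects shrink in influence, which matches the abstract's account of why tracking the state suppresses repeated questions.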