Image Retrieval with Mixed Initiative and Multimodal Feedback
How would you search for a unique, fashionable shoe that a friend wore and
you want to buy, but you didn't take a picture? Existing approaches propose
interactive image search as a promising avenue. However, they either entrust the
user with taking the initiative to provide informative feedback, or give all
control to the system which determines informative questions to ask. Instead,
we propose a mixed-initiative framework where both the user and system can be
active participants, depending on whose initiative will be more beneficial for
obtaining high-quality search results. We develop a reinforcement learning
approach which dynamically decides which of three interaction opportunities to
give to the user: drawing a sketch, providing free-form attribute feedback, or
answering attribute-based questions. By allowing these three options, our
system optimizes both informativeness and exploration, enabling faster image
retrieval. Our method outperforms three baselines on three datasets across
extensive experimental settings.

Comment: In submission to BMVC 201
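The core decision described above, dynamically choosing which of three interaction opportunities to offer, can be sketched as a simple value-based action selector. This is a minimal illustrative sketch only: the state representation, reward signal, and the epsilon-greedy update rule here are assumptions for demonstration, not the paper's actual reinforcement-learning formulation.

```python
import random

# The three interaction opportunities named in the abstract.
ACTIONS = ["sketch", "attribute_feedback", "attribute_question"]

class InteractionSelector:
    """Toy epsilon-greedy selector over the three interaction options.

    Hyperparameters and the reward model are illustrative assumptions,
    not taken from the paper.
    """

    def __init__(self, epsilon=0.1, alpha=0.5, seed=0):
        self.q = {a: 0.0 for a in ACTIONS}  # action-value estimates
        self.epsilon = epsilon              # exploration probability
        self.alpha = alpha                  # learning rate
        self.rng = random.Random(seed)

    def choose(self):
        # Explore occasionally; otherwise exploit the best current estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def update(self, action, reward):
        # Move the estimate toward the observed retrieval-quality reward.
        self.q[action] += self.alpha * (reward - self.q[action])

# Toy simulation: pretend attribute questions yield the largest retrieval
# gains on average (hypothetical numbers, for illustration only).
selector = InteractionSelector()
mean_reward = {"sketch": 0.3, "attribute_feedback": 0.5,
               "attribute_question": 0.8}
for _ in range(200):
    a = selector.choose()
    r = mean_reward[a] + selector.rng.gauss(0, 0.1)
    selector.update(a, r)

best = max(ACTIONS, key=lambda a: selector.q[a])
print(best)
```

After the simulated interactions, the selector's value estimates favor the option that tended to produce the largest simulated gains, mirroring at a very high level how a learned policy could decide whose initiative to invoke at each step.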