We propose a visually grounded speech model that acquires new words and their
visual depictions from just a few word-image example pairs. Given a set of test
images and a spoken query, we ask the model which image depicts the query word.
Previous work has simplified this problem either by using an artificial setting
with digit word-image pairs or by using a large number of examples per class.
We propose an approach that can work on natural word-image pairs but with fewer
examples, i.e. fewer shots. Our approach uses the given word-image
example pairs to mine new unsupervised word-image training pairs from large
collections of unlabelled speech and images. Additionally, we use a
word-to-image attention mechanism to determine word-image similarity. With this
new model, we achieve better performance with fewer shots than any existing
approach.
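As an illustrative sketch of the test-time decision described above (not the paper's exact architecture), the spoken query can be embedded and then attend over per-patch embeddings of each test image, with the highest attention-pooled similarity deciding which image depicts the query word. The function names, embedding dimensions, and cosine-similarity scoring below are assumptions for illustration only.

```python
# Minimal sketch of word-to-image attention scoring for the few-shot test.
# Names, dimensions and the cosine scoring are illustrative assumptions,
# not the paper's exact implementation.
import numpy as np

def attention_similarity(word_emb: np.ndarray, patch_embs: np.ndarray) -> float:
    """Score one (spoken word, image) pair.

    word_emb:   (d,)   embedding of the spoken query word.
    patch_embs: (p, d) embeddings of the image's p spatial patches/regions.
    """
    # Attention weights: how strongly the word attends to each image patch.
    logits = patch_embs @ word_emb                      # (p,)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                            # softmax over patches
    # Attention-pooled image representation conditioned on the word.
    attended = weights @ patch_embs                     # (d,)
    # Cosine similarity between the word and the attended image vector.
    return float(attended @ word_emb /
                 (np.linalg.norm(attended) * np.linalg.norm(word_emb) + 1e-8))

def pick_image(word_emb: np.ndarray, test_images: list[np.ndarray]) -> int:
    """Return the index of the test image most likely to depict the query word."""
    scores = [attention_similarity(word_emb, patches) for patches in test_images]
    return int(np.argmax(scores))

# Toy usage: 3 candidate images, each with 49 patch embeddings of dimension 512.
rng = np.random.default_rng(0)
query = rng.normal(size=512)
images = [rng.normal(size=(49, 512)) for _ in range(3)]
print(pick_image(query, images))
```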