Love Thy Neighbors: Image Annotation by Exploiting Image Metadata
Some images that are difficult to recognize on their own may become more
clear in the context of a neighborhood of related images with similar
social-network metadata. We build on this intuition to improve multilabel image
annotation. Our model uses image metadata nonparametrically to generate
neighborhoods of related images using Jaccard similarities, then uses a deep
neural network to blend visual information from the image and its neighbors.
Prior work typically models image metadata parametrically; in contrast, our
nonparametric treatment allows the model to perform well even when the
vocabulary of metadata changes between training and testing. We perform
comprehensive experiments on the NUS-WIDE dataset, where we show that our model
outperforms state-of-the-art methods for multilabel image annotation even when
our model is forced to generalize to new types of metadata.
Comment: Accepted to ICCV 2015
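To make the nonparametric neighborhood step concrete, below is a minimal sketch that ranks a corpus by Jaccard similarity of metadata tag sets and keeps the top k matches. The names (jaccard, neighborhood, corpus) and the example tags are our illustration, not code or data from the paper.

def jaccard(a, b):
    """Jaccard similarity between two sets of metadata tags."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def neighborhood(query_tags, corpus, k=5):
    """Return the k images whose tag sets best match the query.

    corpus: iterable of (image_id, tag_set) pairs.
    """
    ranked = sorted(corpus, key=lambda item: jaccard(query_tags, item[1]),
                    reverse=True)
    return ranked[:k]

# Hypothetical corpus of socially tagged images.
corpus = [("img1", {"beach", "sunset", "sea"}),
          ("img2", {"city", "night"}),
          ("img3", {"sunset", "sea", "boat"})]
print(neighborhood({"sunset", "beach"}, corpus, k=2))

Because the similarity operates directly on tag sets rather than on a learned embedding of a fixed vocabulary, new tags at test time are handled without retraining, which is the property the abstract highlights.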
A CNN-RNN Framework for Image Annotation from Visual Cues and Social Network Metadata
Images are a common form of visual communication among people. Nevertheless,
image classification can be challenging for ambiguous or uncommon images that
need additional context to be annotated correctly. Metadata accompanying
images on social media are an ideal source of such context, as they can be
used to retrieve neighborhoods of related images that ease the annotation
task. To this end, we blend visual features extracted from neighbors
and their metadata to jointly leverage context and visual cues. Our models use
multiple semantic embeddings to achieve the dual objective of being robust to
vocabulary changes between train and test sets and decoupling the architecture
from the low-level metadata representation. Convolutional and recurrent neural
networks (CNN-RNN) are jointly adopted to infer similarity between neighbors
and query images. We perform comprehensive experiments on the NUS-WIDE dataset,
showing that our models outperform state-of-the-art architectures based on
images and metadata, and narrow both the sensory and semantic gaps to better
annotate images.
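As a rough sketch of how recurrent blending of neighbor information could work, the toy PyTorch module below summarizes a sequence of neighbor semantic embeddings with a GRU and concatenates the summary with the query image's visual feature for multilabel scoring. The class name NeighborBlend, the dimensions, and the single-GRU design are illustrative assumptions, not the paper's exact architecture; 81 labels matches the NUS-WIDE concept set.

import torch
import torch.nn as nn

class NeighborBlend(nn.Module):
    """Toy blend of visual cues and neighbor metadata (illustrative only)."""

    def __init__(self, feat_dim=2048, embed_dim=300, hidden=512, num_labels=81):
        super().__init__()
        # GRU summarizes the sequence of neighbor semantic embeddings.
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        # Linear head maps [query visual feature; neighbor summary] to labels.
        self.classifier = nn.Linear(feat_dim + hidden, num_labels)

    def forward(self, query_feat, neighbor_embeds):
        # query_feat: (B, feat_dim) CNN features of the query image
        # neighbor_embeds: (B, K, embed_dim) embeddings of K retrieved neighbors
        _, h = self.rnn(neighbor_embeds)   # final hidden state: (1, B, hidden)
        context = h.squeeze(0)             # (B, hidden)
        return self.classifier(torch.cat([query_feat, context], dim=1))

model = NeighborBlend()
scores = model(torch.randn(4, 2048), torch.randn(4, 5, 300))
print(scores.shape)  # torch.Size([4, 81]) -> one logit per concept

Feeding neighbor embeddings rather than raw tags is what decouples the architecture from the low-level metadata representation: any embedding of the metadata with the right dimensionality can be swapped in without changing the network.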