Learning Shared Semantic Space with Correlation Alignment for Cross-modal Event Retrieval
In this paper, we propose to learn a shared semantic space with correlation
alignment (S³CA) for multimodal data representations, which aligns
nonlinear correlations of multimodal data distributions in deep neural networks
designed for heterogeneous data. In the context of cross-modal (event)
retrieval, we design a neural network with convolutional layers and
fully-connected layers to extract features for images, including images on
Flickr-like social media. Simultaneously, we exploit a fully-connected neural
network to extract semantic features for texts, including news articles from
news media. In particular, nonlinear correlations of layer activations in the
two neural networks are aligned with correlation alignment during the joint
training of the networks. Furthermore, we project the multimodal data into a
shared semantic space for cross-modal (event) retrieval, where the distances
between heterogeneous data samples can be measured directly. In addition, we
contribute a Wiki-Flickr Event dataset in which the multimodal data samples do
not describe each other in pairs, as in existing paired datasets, but instead
all describe semantic events. Extensive experiments conducted on both
paired and unpaired datasets demonstrate the effectiveness of S³CA, which
outperforms the state-of-the-art methods.
Comment: 22 pages, submitted to ACM Transactions on Multimedia Computing,
Communications and Applications (ACM TOMM)
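The correlation alignment described in the abstract is closely related to the CORAL family of losses, which match the second-order statistics of two activation distributions. Below is a minimal PyTorch sketch of that idea, not the paper's exact S³CA formulation; the function names, batch dimensions, and the cosine-similarity retrieval step are illustrative assumptions.

```python
# Minimal sketch of a CORAL-style correlation-alignment loss between the
# activations of an image branch and a text branch. Illustrative only; this
# is not the authors' exact S^3CA implementation.
import torch
import torch.nn.functional as F

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Align the covariances of two activation batches of shape (batch, d).

    Assumes batch size > 1 so the sample covariance is well defined.
    """
    d = source.size(1)
    # Center each batch over the batch dimension.
    src_c = source - source.mean(dim=0, keepdim=True)
    tgt_c = target - target.mean(dim=0, keepdim=True)
    # Sample covariance matrices, shape (d, d).
    cov_src = src_c.t() @ src_c / (source.size(0) - 1)
    cov_tgt = tgt_c.t() @ tgt_c / (target.size(0) - 1)
    # Squared Frobenius distance between the covariances.
    return ((cov_src - cov_tgt) ** 2).sum() / (4 * d * d)

# Once both branches map into a shared semantic space, cross-modal retrieval
# reduces to a direct distance computation between heterogeneous samples:
def retrieve(query_emb: torch.Tensor, gallery_embs: torch.Tensor, k: int = 5):
    """Return indices of the k most similar gallery items by cosine similarity."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), gallery_embs, dim=1)
    return sims.topk(k).indices
```

In a setup like the one the abstract describes, a loss of this form would be added to the networks' task losses during joint training, so that the image and text activations are aligned while the semantic features are learned.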
MMED: A Multi-domain and Multi-modality Event Dataset
In this work, we construct and release a multi-domain and multi-modality
event dataset (MMED), containing 25,165 textual news articles collected from
hundreds of news media sites (e.g., Yahoo News, Google News, and CNN News) and
76,516 image posts shared on the Flickr social media platform, all annotated according
to 412 real-world events. The dataset is collected to explore the problem of
organizing heterogeneous data contributed by professionals and amateurs in
different data domains, and the problem of transferring event knowledge
obtained from one data domain to heterogeneous data domains, thereby
summarizing data from different contributors. We hope that the release of the MMED
dataset can stimulate innovative research on related challenging problems, such
as event discovery, cross-modal (event) retrieval, and visual question
answering.
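Since the abstract does not specify the released file format, the following sketch only illustrates the multi-domain, multi-modality structure described above; every field name here is a hypothetical assumption, not the dataset's actual schema.

```python
# Hypothetical record layout for one MMED sample, reflecting the structure
# stated in the abstract: two modalities (25,165 news articles and 76,516
# Flickr image posts), two data domains, and 412 annotated real-world events.
from dataclasses import dataclass
from typing import Literal

@dataclass
class MMEDSample:
    sample_id: str
    modality: Literal["text", "image"]            # news article or image post
    domain: Literal["news_media", "social_media"] # professional vs. amateur source
    event_id: int                                 # one of 412 real-world events
    content_path: str                             # path to the article text or image
```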