3,291 research outputs found
Learning Multimodal Word Representation via Dynamic Fusion Methods
Multimodal models have been shown to outperform text-based models at
learning semantic word representations. However, most previous multimodal
models treat the representations from different modalities equally, even
though information from different modalities clearly contributes
differently to the meaning of a word. This motivates us to build a multimodal
model that can dynamically fuse the semantic representations from different
modalities according to the type of word. To that end, we propose three
novel dynamic fusion methods that assign importance weights to each modality,
where the weights are learned under the weak supervision of word association
pairs. Extensive experiments demonstrate that the proposed methods
outperform strong unimodal baselines and state-of-the-art multimodal models.
Comment: To appear in AAAI-18
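The abstract does not spell out the fusion architecture, so the following is a minimal sketch of word-dependent modality weighting, assuming a gated formulation in PyTorch: each modality's embedding is projected into a shared space, a scalar gate is computed per modality, and a softmax over the gates yields the importance weights. All class names and dimensions here are hypothetical, not the paper's actual code.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Fuse per-word embeddings from several modalities using
    word-dependent importance weights (hypothetical sketch)."""

    def __init__(self, dims, fused_dim):
        super().__init__()
        # Project each modality into a shared space.
        self.proj = nn.ModuleList(nn.Linear(d, fused_dim) for d in dims)
        # One scalar gate per modality, computed from the projected vector.
        self.gate = nn.ModuleList(nn.Linear(fused_dim, 1) for _ in dims)

    def forward(self, inputs):
        # inputs: list of (batch, dim_m) tensors, one per modality.
        projected = [p(x) for p, x in zip(self.proj, inputs)]
        scores = torch.cat([g(h) for g, h in zip(self.gate, projected)], dim=-1)
        weights = torch.softmax(scores, dim=-1)   # (batch, n_modalities)
        stacked = torch.stack(projected, dim=1)   # (batch, n_mod, fused_dim)
        # Weighted sum over modalities gives the fused word vector.
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)

# Example: fuse 300-d text and 128-d visual embeddings for 4 words.
fusion = DynamicFusion(dims=[300, 128], fused_dim=256)
fused = fusion([torch.randn(4, 300), torch.randn(4, 128)])
print(fused.shape)  # torch.Size([4, 256])
```

The weak supervision from word association pairs could then train such gates end to end, for instance by fitting the similarity of fused vectors to human association scores; the paper's exact training objective is not given in the abstract.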
Multimodal music information processing and retrieval: survey and future challenges
To improve performance on various music information processing tasks,
recent studies exploit different modalities that capture diverse aspects of
music. Such modalities include audio recordings, symbolic music scores,
mid-level representations, motion and gestural data, video recordings,
editorial or cultural tags, lyrics, and album cover art. This paper critically
reviews the various approaches adopted in Music Information Processing and
Retrieval and highlights how multimodal algorithms can help Music Computing
applications. First, we categorize the related literature based on the
applications they address. Subsequently, we analyze existing information
fusion approaches, and we conclude with a set of challenges that the Music
Information Retrieval and Sound and Music Computing research communities
should focus on in the coming years.
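The survey's analysis of information fusion approaches commonly distinguishes where modalities are combined. As a hedged illustration of the two most common strategies, early fusion (concatenating features before one model) versus late fusion (averaging decisions of per-modality models), here is a small sketch; all arrays are placeholder data and the modality names are illustrative.

```python
import numpy as np

# Hypothetical per-track features from two modalities (audio and lyrics).
audio_feat = np.random.rand(100, 40)   # 100 tracks, 40 audio descriptors
lyric_feat = np.random.rand(100, 300)  # 100 tracks, 300-d lyric embeddings

# Early fusion: concatenate features, then train a single classifier
# on the joint representation.
early = np.concatenate([audio_feat, lyric_feat], axis=1)  # shape (100, 340)

# Late fusion: combine the outputs of independently trained per-modality
# models, e.g., by averaging class posteriors over 10 genres.
p_audio = np.random.rand(100, 10)  # placeholder posteriors, audio model
p_lyric = np.random.rand(100, 10)  # placeholder posteriors, lyrics model
late = 0.5 * p_audio + 0.5 * p_lyric
pred = late.argmax(axis=1)         # fused genre prediction per track
```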