SCAN: Learning Hierarchical Compositional Visual Concepts
The seemingly infinite diversity of the natural world arises from a
relatively small set of coherent rules, such as the laws of physics or
chemistry. We conjecture that these rules give rise to regularities that can be
discovered through primarily unsupervised experiences and represented as
abstract concepts. If such representations are compositional and hierarchical,
they can be recombined into an exponentially large set of new concepts. This
paper describes SCAN (Symbol-Concept Association Network), a new framework for
learning such abstractions in the visual domain. SCAN learns concepts through
fast symbol association, grounding them in disentangled visual primitives that
are discovered in an unsupervised manner. Unlike state-of-the-art multimodal
generative model baselines, our approach requires very few pairings between
symbols and images and makes no assumptions about the form of symbol
representations. Once trained, SCAN is capable of multimodal bi-directional
inference, generating a diverse set of image samples from symbolic descriptions
and vice versa. It also allows for traversal and manipulation of the implicit
hierarchy of visual concepts through symbolic instructions and learnt logical
recombination operations. Such manipulations enable SCAN to break away from its
training data distribution and imagine novel visual concepts through
symbolically instructed recombination of previously learnt concepts.
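To make the grounding objective concrete, here is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a pre-trained disentangled image encoder (e.g. a beta-VAE) supplies a frozen visual posterior (vis_mu, vis_logvar); the module name SymbolEncoder, the KL direction, and the weights beta and lam are illustrative assumptions, and the full SCAN objective also includes a symbol reconstruction term that is omitted here.

```python
import torch
import torch.nn as nn

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims."""
    return 0.5 * (
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    ).sum(dim=-1)

class SymbolEncoder(nn.Module):
    """Maps k-hot symbol vectors to a diagonal Gaussian over the shared latent space."""
    def __init__(self, n_symbols, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_symbols, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, sym):
        h = self.net(sym)
        return self.mu(h), self.logvar(h)

def scan_step(sym_enc, vis_mu, vis_logvar, sym, beta=1.0, lam=10.0):
    """One grounding step: pull the symbol posterior toward the (frozen)
    visual posterior produced by a pre-trained disentangled image encoder."""
    mu_s, logvar_s = sym_enc(sym)
    # Prior term keeps the symbol posterior near N(0, I).
    kl_prior = gaussian_kl(mu_s, logvar_s,
                           torch.zeros_like(mu_s), torch.zeros_like(logvar_s))
    # Grounding term: the symbol posterior should cover the visual posterior.
    kl_ground = gaussian_kl(vis_mu, vis_logvar, mu_s, logvar_s)
    return (beta * kl_prior + lam * kl_ground).mean()
```

Because only the KL terms touch the visual side, very few symbol-image pairings are needed, which matches the paper's claim of fast symbol association.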
Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
Multimodal sentiment analysis is a core research area that studies speaker
sentiment expressed from the language, visual, and acoustic modalities. The
central challenge in multimodal learning involves inferring joint
representations that can process and relate information from these modalities.
However, existing work learns joint representations by requiring all modalities
as input; as a result, the learned representations may be sensitive to noisy
or missing modalities at test time. With the recent success of sequence to
sequence (Seq2Seq) models in machine translation, there is an opportunity to
explore new ways of learning joint representations that may not require all
input modalities at test time. In this paper, we propose a method to learn
robust joint representations by translating between modalities. Our method is
based on the key insight that translation from a source to a target modality
provides a method of learning joint representations using only the source
modality as input. We augment modality translations with a cycle consistency
loss to ensure that our joint representations retain maximal information from
all modalities. Once our translation model is trained with paired multimodal
data, we only need data from the source modality at test time for final
sentiment prediction. This ensures that our model remains robust to
perturbations or missing information in the other modalities. We train our
model with a coupled translation-prediction objective and it achieves new
state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI,
ICT-MMMO, and YouTube. Additional experiments show that our model learns
increasingly discriminative joint representations with more input modalities
while maintaining robustness to missing or perturbed modalities.
Comment: AAAI 2019, code available at https://github.com/hainow/MCT
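The coupled translation-prediction objective can be sketched compactly. The PyTorch-style code below is an illustration under stated assumptions, not the released MCT code: GRU encoder/decoders stand in for the paper's Seq2Seq translators, the source and target sequences are assumed time-aligned so elementwise L1 reconstruction applies, and the weight alpha on the prediction term is a placeholder hyperparameter.

```python
import torch
import torch.nn as nn

class ModalityTranslator(nn.Module):
    """Seq2Seq-style translator between two modality feature sequences.
    A single-layer GRU encoder/decoder is an illustrative stand-in."""
    def __init__(self, src_dim, tgt_dim, hidden=64):
        super().__init__()
        self.enc = nn.GRU(src_dim, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_dim)

    def forward(self, src):
        enc_out, h = self.enc(src)       # joint representation lives in h
        dec_out, _ = self.dec(enc_out, h)
        return self.out(dec_out), h.squeeze(0)

def cyclic_translation_loss(fwd, bwd, src, tgt, predictor, label, alpha=1.0):
    """Coupled translation-prediction objective with a cycle term:
    src -> tgt_hat -> src_hat, so the joint representation must
    retain information about the source modality."""
    tgt_hat, joint = fwd(src)            # forward translation
    src_hat, _ = bwd(tgt_hat)            # cycle back to the source
    l_trans = nn.functional.l1_loss(tgt_hat, tgt)
    l_cycle = nn.functional.l1_loss(src_hat, src)
    l_pred = nn.functional.mse_loss(predictor(joint), label)
    return l_trans + l_cycle + alpha * l_pred
```

At test time only the source modality is needed: calling fwd(src) yields the joint representation, which feeds the sentiment predictor even when the other modalities are missing or perturbed.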
Variational methods for Conditional Multimodal Deep Learning
In this paper, we address the problem of conditional modality learning, whereby one is interested in generating one modality given the other. While it is straightforward to learn a joint distribution over multiple modalities using a deep multimodal architecture, we observe that such models are not very effective at conditional generation. Hence, we address the problem by learning conditional distributions between the modalities. We use variational methods for maximizing the corresponding conditional log-likelihood. The resultant deep model, which we refer to as the conditional multimodal autoencoder (CMMA), forces the latent representation obtained from a single modality alone to be 'close' to the joint representation obtained from multiple modalities. We use the proposed model to generate faces from attributes. We show that the faces generated from attributes using the proposed model are qualitatively and quantitatively more representative of the attributes from which they were generated than those obtained by other deep generative models. We also propose a secondary task, whereby existing faces are modified by editing the corresponding attributes. We observe that the modifications in faces introduced by the proposed model are representative of the corresponding modifications in attributes.
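A minimal sketch of the conditional objective, assuming a simple fully connected architecture (the layer sizes and module names below are placeholders, not the paper's): the variational lower bound on log p(y|x) pairs a reconstruction term with KL(q(z|x,y) || p(z|x)), and this KL is exactly the term that forces the single-modality latent to stay close to the joint representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CMMASketch(nn.Module):
    """Conditional multimodal autoencoder sketch: generate modality y
    (e.g. a face) from modality x (e.g. attributes)."""
    def __init__(self, x_dim, y_dim, z_dim=32, hidden=256):
        super().__init__()
        self.joint_enc = nn.Linear(x_dim + y_dim, 2 * z_dim)   # q(z | x, y)
        self.cond_prior = nn.Linear(x_dim, 2 * z_dim)          # p(z | x)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, y_dim))     # p(y | z)

    def forward(self, x, y):
        mu_q, logvar_q = self.joint_enc(torch.cat([x, y], -1)).chunk(2, -1)
        mu_p, logvar_p = self.cond_prior(x).chunk(2, -1)
        # Reparameterized sample from the joint posterior q(z | x, y).
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        recon = F.mse_loss(self.dec(z), y, reduction="none").sum(-1)
        # KL(q(z|x,y) || p(z|x)): pulls the attribute-only representation
        # toward the joint representation of both modalities.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1.0).sum(-1)
        return (recon + kl).mean()   # negative ELBO on log p(y | x)

    def generate(self, x):
        """Conditional generation from attributes alone via p(z | x)."""
        mu_p, logvar_p = self.cond_prior(x).chunk(2, -1)
        z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
        return self.dec(z)
```

Attribute-driven face editing then amounts to changing entries of x and calling generate(x) again, which mirrors the secondary task described in the abstract.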