BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection
Multimodal representation learning is attracting growing interest within
the deep learning community. While bilinear models provide an appealing
framework for capturing subtle combinations of modalities, their number of
parameters grows quadratically with the input dimensions, making their
practical implementation within classical deep learning pipelines challenging.
In this paper, we introduce BLOCK, a new multimodal fusion model based on the
block-superdiagonal tensor decomposition. It leverages the notion of
block-term ranks, which generalizes the concepts of tensor rank and mode
ranks, both already used for multimodal fusion. This makes it possible to
define new ways of optimizing the trade-off between the expressiveness and
the complexity of the fusion model, and to represent very fine interactions
between modalities while maintaining powerful mono-modal representations. We
demonstrate the practical value of our fusion model by applying BLOCK to two
challenging tasks, Visual Question Answering (VQA) and Visual Relationship
Detection (VRD), for which we design end-to-end learnable architectures that
represent the relevant interactions between modalities. Through extensive
experiments, we show that BLOCK compares favorably with state-of-the-art
multimodal fusion models on both VQA and VRD. Our code is available at
https://github.com/Cadene/block.bootstrap.pytorch
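The fusion described above can be illustrated with a minimal NumPy sketch: each modality is projected, the projections are split into chunks, and a full bilinear interaction is computed only *within* each chunk via a small 3-way core tensor (the diagonal blocks of the block-superdiagonal decomposition). All dimensions, names, and the random cores below are illustrative assumptions, not the authors' PyTorch implementation.

```python
import numpy as np

def block_bilinear(x, y, cores, Wx, Wy):
    """Simplified block-superdiagonal bilinear fusion (sketch).

    x, y   : input vectors from the two modalities
    Wx, Wy : projections into the concatenated per-block input spaces
    cores  : list of small 3-way core tensors, one per diagonal block
    """
    px = Wx @ x
    py = Wy @ y
    n_blocks = len(cores)
    # split each projection into one chunk per block
    xs = np.split(px, n_blocks)
    ys = np.split(py, n_blocks)
    outs = []
    for D, xi, yi in zip(cores, xs, ys):
        # full bilinear interaction restricted to this block:
        # z_k = sum_{i,j} D[i, j, k] * xi[i] * yi[j]
        outs.append(np.einsum('ijk,i,j->k', D, xi, yi))
    # concatenate the per-block outputs into the fused representation
    return np.concatenate(outs)
```

This is where the parameter savings come from: with, say, inputs projected to 12 dimensions each and a 6-dimensional output, a single dense bilinear core would need 12 x 12 x 6 = 864 parameters, whereas three 4 x 4 x 2 blocks need only 96, while still keeping full bilinear expressiveness inside each block.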
Deep Binary Reconstruction for Cross-modal Hashing
With the increasing demand for storing and organizing massive multimodal
data, cross-modal retrieval based on hashing techniques has drawn much
attention. It takes the binary codes of one modality as the query to
retrieve the relevant hashing codes of another modality. However, the
binary constraint makes it difficult to find the optimal cross-modal hashing
function, so most approaches relax the constraint and apply a thresholding
strategy to the real-valued representation instead of directly solving the
original objective. In this paper, we first provide a concrete analysis of
the effectiveness of multimodal networks in preserving inter- and
intra-modal consistency. Based on this analysis, we propose the Deep Binary
Reconstruction (DBRC) network, which can directly learn binary hashing codes
in an unsupervised fashion. Its superiority comes from a simple but
efficient activation function, named Adaptive Tanh (ATanh), which can
adaptively learn the binary codes and be trained via back-propagation.
Extensive experiments on three benchmark datasets demonstrate that DBRC
outperforms several state-of-the-art methods on both image2text and
text2image retrieval tasks.
Comment: 8 pages, 5 figures, accepted by ACM Multimedia 201
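A common way to realize an adaptive, backprop-trainable tanh is to scale its input by a learnable slope parameter: for small slopes the unit stays smooth and differentiable, and as the slope grows the output approaches sign(x), i.e. binary codes in {-1, +1}. The sketch below uses this formulation as an assumption; the exact parameterization of ATanh in the paper may differ.

```python
import numpy as np

def atanh_act(x, beta):
    """Hypothetical adaptive tanh: tanh with a learnable slope beta.

    As beta grows, tanh(beta * x) approaches sign(x), so the
    activations become effectively binary in {-1, +1}.
    """
    return np.tanh(beta * x)

def atanh_grad_beta(x, beta):
    # d/d(beta) tanh(beta * x) = x * (1 - tanh(beta * x)^2).
    # A closed-form gradient like this is what lets the slope itself
    # be learned by ordinary back-propagation.
    t = np.tanh(beta * x)
    return x * (1.0 - t ** 2)
```

With a moderate slope the outputs stay in the open interval (-1, 1), and with a large slope they saturate to the signs of the inputs, which is the binarization behavior the abstract attributes to ATanh.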