CentralNet: a Multilayer Approach for Multimodal Fusion
This paper proposes a novel multimodal fusion approach that aims to produce
the best possible decisions by integrating information from multiple media.
While most past multimodal approaches either project the features of
different modalities into a common space or coordinate the representations of
each modality through constraints, our approach borrows from both views. More
specifically, assuming each modality can be processed by a separate deep
convolutional network that makes decisions independently, we introduce a
central network linking the modality-specific networks. This central network
not only provides a common feature embedding but also regularizes the
modality-specific networks through multi-task learning. The proposed approach
is validated on four different computer vision tasks, on which it
consistently improves the accuracy of existing multimodal fusion approaches.
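
Since the abstract describes the architecture quite concretely, a compact sketch may help. The following is a minimal PyTorch illustration of CentralNet-style fusion for two modalities: a central branch combines its own transformed hidden state with the modality branches' hidden states via learned scalar weights, and all three heads can be trained jointly. Layer sizes, the weighting scheme, and all names are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CentralNetFusion(nn.Module):
    """Minimal sketch of CentralNet-style fusion over two modality branches."""

    def __init__(self, d_in=128, d_hid=64, num_classes=10):
        super().__init__()
        self.mod_a = nn.ModuleList([nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_hid)])
        self.mod_b = nn.ModuleList([nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_hid)])
        self.central = nn.Linear(d_hid, d_hid)       # transforms the central state between layers
        self.alpha = nn.Parameter(torch.ones(2, 3))  # per-layer weights: (central, mod A, mod B)
        self.head_a = nn.Linear(d_hid, num_classes)  # modality A can decide on its own ...
        self.head_b = nn.Linear(d_hid, num_classes)  # ... and so can modality B
        self.head_c = nn.Linear(d_hid, num_classes)  # fused (central) prediction

    def forward(self, xa, xb):
        ha, hb, hc = xa, xb, None
        for i in range(2):
            ha = torch.relu(self.mod_a[i](ha))
            hb = torch.relu(self.mod_b[i](hb))
            fused = self.alpha[i, 1] * ha + self.alpha[i, 2] * hb
            if hc is not None:
                fused = fused + self.alpha[i, 0] * self.central(hc)
            hc = torch.relu(fused)
        # training with a loss on all three outputs yields the multi-task
        # regularization of the modality-specific networks described above
        return self.head_a(ha), self.head_b(hb), self.head_c(hc)

model = CentralNetFusion()
ya, yb, yc = model(torch.randn(4, 128), torch.randn(4, 128))
```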
Multimodal Visual Concept Learning with Weakly Supervised Techniques
Despite the availability of vast amounts of video data accompanied by
descriptive text, it is not always easy to exploit the information contained
in natural language in order to automatically recognize video concepts.
Towards this goal, in this paper we use textual cues as a means of
supervision, introducing two weakly supervised techniques that extend the
Multiple Instance Learning (MIL) framework: Fuzzy Sets Multiple Instance
Learning (FSMIL) and Probabilistic Labels Multiple Instance Learning (PLMIL).
The former encodes the spatio-temporal imprecision of the linguistic
descriptions with fuzzy sets, while the latter models different
interpretations of each description's semantics with probabilistic labels;
both are formulated through convex optimization. In addition, we provide a
novel technique, based on semantic similarity computations, to extract weak
labels in the presence of complex semantics. We evaluate our methods on two
distinct problems, namely face and action recognition, in the challenging and
realistic setting of movies accompanied by their screenplays, as contained in
the COGNIMUSE database. We show that, on both tasks, our method considerably
outperforms a state-of-the-art weakly supervised approach, as well as other
baselines.
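
The paper casts FSMIL and PLMIL as convex optimization problems; the toy sketch below is not that formulation, only a differentiable surrogate illustrating the two ingredients the abstract names: fuzzy temporal memberships for imprecise cues and probabilistic labels for ambiguous ones. The trapezoidal membership shape, the max-pooling aggregation, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def fuzzy_membership(t, start, end, slack=2.0):
    """Trapezoidal fuzzy-set membership: 1 inside [start, end], decaying
    linearly to 0 over `slack` seconds on either side, modelling the
    temporal imprecision of a textual cue (illustrative only)."""
    if start <= t <= end:
        return 1.0
    d = min(abs(t - start), abs(t - end))
    return max(0.0, 1.0 - d / slack)

def bag_loss(instance_logits, memberships, label_prob):
    """PLMIL-flavoured bag loss: pool fuzzily weighted instance scores into
    one bag probability and match it against a probabilistic label."""
    w = torch.tensor(memberships)
    p_inst = torch.sigmoid(instance_logits)  # per-instance concept probability
    p_bag = (w * p_inst).max()               # weighted max-pooling over the bag
    return F.binary_cross_entropy(p_bag, torch.tensor(label_prob))

# a bag of 5 face tracks around a screenplay cue spanning t in [10, 14] s
logits = torch.randn(5, requires_grad=True)
w = [fuzzy_membership(t, 10.0, 14.0) for t in (8.5, 10.2, 11.7, 13.9, 16.5)]
loss = bag_loss(logits, w, label_prob=0.9)   # "this is probably actor X"
loss.backward()
```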
Tencent AVS: A Holistic Ads Video Dataset for Multi-modal Scene Segmentation
Temporal video segmentation and classification have been advanced greatly by
public benchmarks in recent years. However, such research still focuses
mainly on human actions and fails to describe videos in a holistic view. In
addition, previous research tends to pay much attention to visual information
while ignoring the multi-modal nature of videos. To fill this gap, we
construct the Tencent 'Ads Video Segmentation' (TAVS) dataset in the ads
domain to advance multi-modal video analysis. TAVS describes videos from
three independent perspectives, 'presentation form', 'place', and 'style',
and contains rich multi-modal information such as video, audio, and text.
TAVS is organized hierarchically into semantic aspects for comprehensive
temporal video segmentation, with three levels of categories for multi-label
classification, e.g., 'place' - 'working place' - 'office'. TAVS is therefore
distinguished from previous temporal segmentation datasets by its multi-modal
information, holistic view of categories, and hierarchical granularities. It
includes 12,000 videos, 82 classes, 33,900 segments, 121,100 shots, and
168,500 labels. Alongside TAVS, we also present a strong multi-modal video
segmentation baseline coupled with multi-label class prediction. Extensive
experiments are conducted to evaluate our proposed method, as well as
existing representative methods, and to reveal the key challenges of TAVS.
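
To make the annotation scheme concrete, here is a hypothetical record layout for one temporal segment, reflecting the description above: three independent perspectives, each carrying zero or more three-level category paths (multi-label). Field names and the second example path are assumptions, not the released schema; the 'place' path is the example given in the abstract.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Path3 = Tuple[str, str, str]  # (top level, mid level, leaf)

@dataclass
class TAVSSegment:
    """Hypothetical layout for one annotated TAVS segment."""
    video_id: str
    start_s: float
    end_s: float
    presentation_form: List[Path3] = field(default_factory=list)
    place: List[Path3] = field(default_factory=list)
    style: List[Path3] = field(default_factory=list)

seg = TAVSSegment(
    video_id="ad_000123", start_s=4.2, end_s=9.8,
    place=[("place", "working place", "office")],  # example from the abstract
)
```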
Tackling Data Bias in MUSIC-AVQA: Crafting a Balanced Dataset for Unbiased Question-Answering
In recent years, there has been a growing emphasis on the intersection of
audio, vision, and text modalities, driving forward advances in multimodal
research. However, a strong bias in any one modality can lead a model to
neglect the others, compromising its ability to reason effectively across
these diverse modalities and impeding further progress. In this paper, we
meticulously review each question type in the original MUSIC-AVQA dataset,
selecting those with pronounced answer biases. To counter these biases, we
gather complementary videos and questions, ensuring that no answer has a
markedly skewed distribution. In particular, for binary questions, we strive
to keep both answers almost uniformly distributed within each question
category. The result is a new dataset, named MUSIC-AVQA v2.0, which is more
challenging and which we believe can better foster progress on the AVQA task.
Furthermore, we present a novel baseline model that delves deeper into the
audio-visual-text interrelation. On MUSIC-AVQA v2.0, this model surpasses all
existing benchmark methods, improving accuracy by 2% and setting a new
state-of-the-art performance.
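
The bias audit described above is easy to sketch. The snippet below is a minimal illustration of the kind of check the rebalancing implies, not the authors' procedure: for each question type, it reports the share held by the most frequent answer, where a value near 1/(number of answers) is the near-uniform spread the new dataset aims for. The question-type name in the example is hypothetical.

```python
from collections import Counter, defaultdict

def answer_skew(qa_pairs):
    """For each question type in (question_type, answer) pairs, return the
    share of its most frequent answer; values near 1 flag strong bias."""
    by_type = defaultdict(Counter)
    for qtype, answer in qa_pairs:
        by_type[qtype][answer] += 1
    return {q: max(c.values()) / sum(c.values()) for q, c in by_type.items()}

# a binary question type where "yes" dominates 80/20 would be flagged:
pairs = [("is_instrument_playing", "yes")] * 80 + [("is_instrument_playing", "no")] * 20
print(answer_skew(pairs))  # {'is_instrument_playing': 0.8}
```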