Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization
State-of-the-art architectures for untrimmed video Temporal Action
Localization (TAL) have considered only the RGB and Flow modalities, leaving the
information-rich audio modality entirely unexploited. Audio fusion has been
explored for the related but arguably easier problem of trimmed (clip-level)
action recognition. However, TAL poses a unique set of challenges. In this
paper, we propose simple but effective fusion-based approaches for TAL. To the
best of our knowledge, our work is the first to jointly consider audio and
video modalities for supervised TAL. We experimentally show that our schemes
consistently improve performance for state-of-the-art video-only TAL
approaches. Specifically, they help achieve new state-of-the-art performance on
the large-scale benchmark datasets ActivityNet-1.3 (54.34 mAP@0.5) and THUMOS14
(57.18 mAP@0.5). Our experiments include ablations involving multiple fusion
schemes, modality combinations and TAL architectures. Our code, models and
associated data are available at https://github.com/skelemoa/tal-hmo
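As a rough illustration of the fusion idea described above, the sketch below fuses per-snippet video and audio features by concatenation and projection, producing features that could feed a TAL head. The feature dimensions, module name, and single projection layer are illustrative assumptions, not the paper's actual fusion schemes (see the linked repository for those).

```python
import torch
import torch.nn as nn

class AudioVideoFusion(nn.Module):
    """Concatenation-based fusion of per-snippet video and audio features.

    Dimensions and the single projection layer are illustrative assumptions,
    not the fusion schemes from the paper.
    """

    def __init__(self, video_dim=2048, audio_dim=128, fused_dim=2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(video_dim + audio_dim, fused_dim),
            nn.ReLU(),
        )

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, T, video_dim); audio_feats: (batch, T, audio_dim)
        fused = torch.cat([video_feats, audio_feats], dim=-1)
        return self.proj(fused)  # (batch, T, fused_dim), fed to the TAL head


# Example: fuse features for a 100-snippet untrimmed video.
fusion = AudioVideoFusion()
video = torch.randn(1, 100, 2048)
audio = torch.randn(1, 100, 128)
fused = fusion(video, audio)
print(fused.shape)  # torch.Size([1, 100, 2048])
```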
Modality Compensation Network: Cross-Modal Adaptation for Action Recognition
With the prevalence of RGB-D cameras, multi-modal video data have become more
available for human action recognition. One main challenge for this task lies
in how to effectively leverage their complementary information. In this work,
we propose a Modality Compensation Network (MCN) to explore the relationships
of different modalities, and boost the representations for human action
recognition. We regard RGB/optical-flow videos as the source modalities and
skeletons as the auxiliary modality. Our goal is to extract more discriminative
features from the source modalities with the help of the auxiliary modality. Built on deep
Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks,
our model bridges data from the source and auxiliary modalities through a modality
adaptation block to achieve adaptive representation learning, so that the network
learns to compensate for the loss of skeletons at test time and even at
training time. We explore multiple adaptation schemes that narrow the distance
between the source and auxiliary modal distributions at different levels,
according to how the source and auxiliary data are aligned during training.
Notably, skeletons are required only in the training phase; at test time, our
model improves recognition performance using the source data alone.
Experimental results reveal that MCN outperforms state-of-the-art approaches on
four widely-used action recognition benchmarks.
Comment: Accepted by IEEE Trans. on Image Processing, 2020. Project page:
http://39.96.165.147/Projects/MCN_tip2020_ssj/MCN_tip_2020_ssj.htm
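As a loose sketch of the compensation idea, the example below trains a source-modality branch to mimic an auxiliary skeleton branch through an alignment loss, so that skeletons are needed only during training. The layer sizes, the simple L2 alignment loss, and the loss weight are assumptions for illustration and do not reproduce MCN's actual adaptation schemes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAdaptation(nn.Module):
    """Source branch (RGB/flow features) aligned to an auxiliary skeleton branch.

    Layer sizes and the L2 alignment loss are illustrative assumptions, not
    MCN's actual adaptation schemes.
    """

    def __init__(self, source_dim=2048, skeleton_dim=150, hidden_dim=512, num_classes=60):
        super().__init__()
        self.source_encoder = nn.Sequential(nn.Linear(source_dim, hidden_dim), nn.ReLU())
        self.skeleton_encoder = nn.Sequential(nn.Linear(skeleton_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, source_feats, skeleton_feats=None):
        src = self.source_encoder(source_feats)
        logits = self.classifier(src)
        align_loss = src.new_zeros(())
        if skeleton_feats is not None:  # skeletons available only at training time
            aux = self.skeleton_encoder(skeleton_feats)
            align_loss = F.mse_loss(src, aux)
        return logits, align_loss


# Training step: classification loss plus alignment to the skeleton branch.
model = ModalityAdaptation()
source = torch.randn(8, 2048)    # e.g. pooled RGB/flow features (assumed shape)
skeleton = torch.randn(8, 150)   # e.g. flattened joint coordinates (assumed shape)
labels = torch.randint(0, 60, (8,))
logits, align_loss = model(source, skeleton)
loss = F.cross_entropy(logits, labels) + 0.1 * align_loss

# Test step: no skeletons are required.
test_logits, _ = model(torch.randn(8, 2048))
```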