A Light-Weight Multimodal Framework for Improved Environmental Audio Tagging
The lack of strong labels has severely limited the ability of state-of-the-art
fully supervised audio tagging systems to scale to larger datasets. Meanwhile,
audio-visual learning models based on unlabeled videos have been successfully
applied to audio tagging, but they are inevitably resource-hungry and require a
long time to train. In this work, we propose a light-weight, multimodal
framework for environmental audio tagging. The audio branch of the framework is
a convolutional and recurrent neural network (CRNN) based on multiple instance
learning (MIL). It is trained with the audio tracks of a large collection of
weakly labeled YouTube video excerpts; the video branch uses pretrained
state-of-the-art image recognition networks and word embeddings to extract
information from the video track and to map visual objects to sound events.
Experiments on the audio tagging task of the DCASE 2017 challenge show that the
incorporation of video information improves a strong baseline audio tagging
system by 5.3% absolute in terms of F1 score. The entire system can be
trained within 6~hours on a single GPU, and can be easily carried over to other
audio tasks such as speech sentiment analysis.
Comment: 5 pages, 3 figures, Accepted and to appear at ICASSP 2018
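
A minimal sketch of the audio branch described above: a CRNN that produces frame-level sound event probabilities and aggregates them with a multiple instance learning (MIL) pooling step, so the model can be trained from weak, clip-level labels only. This is an illustrative assumption in PyTorch, not the authors' published code; the layer sizes, the 17-class output (DCASE 2017 Task 4), and the linear-softmax pooling choice are assumptions, and the video branch (pretrained image recognition networks plus word embeddings) is not shown.

import torch
import torch.nn as nn

class CRNNTagger(nn.Module):
    """Hypothetical CRNN audio tagger with MIL pooling over time."""
    def __init__(self, n_mels=64, n_classes=17):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                        # pool frequency only
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.gru = nn.GRU(64 * (n_mels // 4), 128,
                          batch_first=True, bidirectional=True)
        self.frame_fc = nn.Linear(256, n_classes)        # frame-level logits

    def forward(self, x):                                # x: (B, 1, frames, mels)
        h = self.conv(x)                                 # (B, C, T, F')
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)   # (B, T, C*F')
        h, _ = self.gru(h)                               # (B, T, 256)
        p_frame = torch.sigmoid(self.frame_fc(h))        # instance probabilities
        # Linear-softmax MIL pooling: frames with higher probability receive
        # higher weight, yielding a clip-level (bag-level) probability that
        # can be trained against weak, clip-level labels.
        p_clip = (p_frame ** 2).sum(dim=1) / p_frame.sum(dim=1).clamp(min=1e-7)
        return p_clip, p_frame
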
An Attempt towards Interpretable Audio-Visual Video Captioning
Automatically generating a natural language sentence to describe the content
of an input video is a very challenging problem. It is an essential multimodal
task in which auditory and visual contents are equally important. Although
audio information has been exploited to improve video captioning in previous
works, it is usually regarded as an additional feature fed into a black box
fusion machine. How are the words in the generated sentences associated with
the auditory and visual modalities? This question has not yet been investigated. In
this paper, we make the first attempt to design an interpretable audio-visual
video captioning network to discover the association between words in sentences
and audio-visual sequences. To achieve this, we propose a multimodal
convolutional neural network-based audio-visual video captioning framework and
introduce a modality-aware module for exploring modality selection during
sentence generation. In addition, we collect new audio captioning and visual
captioning datasets for further exploring the interactions between auditory and
visual modalities for high-level video understanding. Extensive experiments
demonstrate that the modality-aware module makes our model interpretable with
respect to modality selection during sentence generation. Even with the added
interpretability, our video captioning network still achieves performance
comparable to that of recent state-of-the-art methods.
Comment: 11 pages, 4 figures
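
A minimal sketch of what a "modality-aware" fusion step could look like, assuming PyTorch; this is an illustrative assumption, not the paper's published code. At each decoding step the decoder state produces a soft weight over the audio and visual features, and logging those weights per generated word gives the kind of modality attribution the abstract describes. The class name, feature dimensions, and gating formulation are all hypothetical.

import torch
import torch.nn as nn

class ModalityAwareFusion(nn.Module):
    """Hypothetical gate that weighs audio vs. visual features per decoding step."""
    def __init__(self, audio_dim=128, visual_dim=2048, hidden_dim=512):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.gate = nn.Linear(hidden_dim * 3, 2)          # one score per modality

    def forward(self, decoder_state, audio_feat, visual_feat):
        # decoder_state: (B, hidden_dim); audio_feat: (B, audio_dim);
        # visual_feat: (B, visual_dim)
        a = torch.tanh(self.audio_proj(audio_feat))
        v = torch.tanh(self.visual_proj(visual_feat))
        scores = self.gate(torch.cat([decoder_state, a, v], dim=-1))
        weights = torch.softmax(scores, dim=-1)           # (B, 2), interpretable
        fused = weights[:, 0:1] * a + weights[:, 1:2] * v # convex combination
        return fused, weights

The per-step weights returned here could be recorded during caption generation and inspected to see which modality each word relied on most.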