Zero-Shot Video Moment Retrieval from Frozen Vision-Language Models
Accurate video moment retrieval (VMR) requires universal visual-textual
correlations that can handle unknown vocabulary and unseen scenes. However,
the learned correlations are likely either biased, when derived from a limited
amount of moment-text data that is hard to scale up because of prohibitive
annotation costs (fully-supervised), or unreliable, when only video-text
pairwise relationships are available without fine-grained temporal annotations
(weakly-supervised). Recently, vision-language models (VLMs) have demonstrated
a new transfer-learning paradigm that benefits diverse vision tasks through
universal visual-textual correlations derived from large-scale paired
vision-language web data; fine-tuning such models in the target domains has
also been shown to benefit VMR. In this work, we propose a zero-shot method
for adapting generalisable visual-textual priors from an arbitrary VLM to
facilitate moment-text alignment, without the need to access any VMR data. To this
end, we devise a conditional feature refinement module to generate
boundary-aware visual features conditioned on text queries to enable better
moment boundary understanding. Additionally, we design a bottom-up proposal
generation strategy that mitigates the impact of domain discrepancies and
breaks down complex-query retrieval tasks into individual action retrievals,
thereby maximizing the benefits of the VLM. Extensive experiments on three
VMR benchmark datasets demonstrate the notable performance advantages of our
zero-shot algorithm, especially in the novel-word and novel-location
out-of-distribution setups.
Comment: Accepted by WACV 202
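As a loose illustration of the zero-shot idea above, frame embeddings from a frozen VLM can be compared against a text query embedding to rank candidate moments. The sliding-span search below is an illustrative assumption of mine, not the paper's conditional feature refinement or bottom-up proposal generation:

```python
import numpy as np

def score_moments(frame_embs, query_emb, max_len=4):
    """Rank every contiguous frame span by mean cosine similarity to a
    text query embedding; return the best (start, end) span.

    frame_embs: (T, D) per-frame visual embeddings from a frozen VLM
    query_emb:  (D,)  text embedding from the same VLM
    This exhaustive span search is a stand-in for illustration only.
    """
    # Normalise so dot products become cosine similarities.
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    sims = f @ q  # (T,) per-frame similarity to the query

    best_score, best_span = -np.inf, (0, 0)
    T = len(sims)
    for start in range(T):
        for end in range(start + 1, min(start + max_len, T) + 1):
            s = sims[start:end].mean()
            if s > best_score:
                best_score, best_span = s, (start, end)
    return best_span, best_score

# Toy 2-D "embeddings": frames 2-3 point roughly the same way as the query.
frames = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0],
                   [0.1, 1.0], [1.0, 0.0], [0.8, 0.2]])
query = np.array([0.0, 1.0])
span, score = score_moments(frames, query)
print(span)  # → (2, 3): the span covering the frame best aligned with the query
```

Mean-similarity scoring favours the single best-aligned frame here; a real system would add a length prior or boundary-aware features, which is exactly the gap the paper's refinement module targets.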
Audio Event Detection using Weakly Labeled Data
Acoustic event detection is essential for content analysis and description of
multimedia recordings. The majority of current literature on the topic learns
the detectors through fully-supervised techniques employing strongly labeled
data. However, the labels available for the majority of multimedia data are
generally weak and do not provide sufficient detail for such methods to be
employed. In this paper we propose a framework for learning acoustic event
detectors using only weakly labeled data. We first show that audio event
detection using weak labels can be formulated as a multiple instance learning
(MIL) problem. We then present two frameworks for solving the MIL problem,
one based on support vector machines and the other on neural networks. The
proposed methods remove the time-consuming and expensive manual annotation
that fully supervised learning requires. Moreover, they not only detect
events in a recording but also provide the temporal locations of those
events. This yields a complete description of the recording and is notable
because the weakly labeled training data contained no temporal information
in the first place.
Comment: ACM Multimedia 201
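The MIL formulation above can be sketched with a linear instance scorer trained from bag labels only, using max-pooling over segment scores (a common MIL reduction; the helper names and toy data below are mine, not the paper's):

```python
import numpy as np

def train_mil(bags, labels, dim, lr=0.1, epochs=200, seed=0):
    """Train a linear instance scorer under the MIL assumption: a bag
    (recording) is positive iff at least one instance (segment) is.
    The bag score is the max over instance scores; logistic loss is
    applied to the bag label, with the subgradient flowing through the
    argmax instance. A minimal sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    w, b = rng.normal(scale=0.01, size=dim), 0.0
    for _ in range(epochs):
        for X, y in zip(bags, labels):   # X: (n_segments, dim), y in {0, 1}
            scores = X @ w + b
            i = int(np.argmax(scores))   # key instance under max-pooling
            p = 1.0 / (1.0 + np.exp(-scores[i]))
            g = p - y                    # d(logistic loss)/d(bag score)
            w -= lr * g * X[i]
            b -= lr * g
    return w, b

def localize(X, w, b):
    """Index of the segment most likely to contain the event."""
    return int(np.argmax(X @ w + b))

# Toy data: positive recordings hide one event-like segment (direction u).
rng = np.random.default_rng(1)
u = np.ones(4)
def make_bag(positive):
    X = rng.normal(size=(5, 4))
    if positive:
        X[2] = u + 0.1 * rng.normal(size=4)  # event hidden at segment 2
    return X
bags = [make_bag(i % 2 == 0) for i in range(40)]
labels = [1 if i % 2 == 0 else 0 for i in range(40)]
w, b = train_mil(bags, labels, dim=4)
print(bool(w @ u > 0))  # scorer aligned with the event direction → True
```

Calling `localize` on a new recording then returns the highest-scoring segment index, which is how temporal locations fall out of bag-level supervision even though the training labels carried no timing information.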