Temporal Sentence Grounding in Videos: A Survey and Future Directions
Temporal sentence grounding in videos (TSGV), a.k.a. natural language video
localization (NLVL) or video moment retrieval (VMR), aims to retrieve a
temporal moment that semantically corresponds to a language query from an
untrimmed video. Connecting computer vision and natural language, TSGV has
drawn significant attention from researchers in both communities. This survey
attempts to provide a summary of fundamental concepts in TSGV and current
research status, as well as future research directions. As the background, we
present a common structure of functional components in TSGV, in a tutorial
style: from feature extraction from raw video and language query, to answer
prediction of the target moment. Then we review the techniques for multimodal
understanding and interaction, which is the key focus of TSGV for effective
alignment between the two modalities. We construct a taxonomy of TSGV
techniques and elaborate the methods in different categories with their
strengths and weaknesses. Lastly, we discuss issues with the current TSGV
research and share our insights about promising research directions.
Comment: 29 pages, 32 figures, 9 tables
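To make the component structure described above concrete, the following is a
minimal sketch, in PyTorch, of the typical TSGV pipeline the survey outlines:
project pre-extracted video and query features, let the two modalities interact
through cross-modal attention, and predict the start/end of the target moment.
All module choices, dimensions, and names here are illustrative assumptions,
not a specific method from the survey.

    import torch
    import torch.nn as nn

    class ToyTSGV(nn.Module):
        """Toy pipeline: feature projection -> cross-modal interaction -> span prediction."""
        def __init__(self, video_dim=1024, word_dim=300, hidden=256):
            super().__init__()
            self.video_proj = nn.Linear(video_dim, hidden)    # clip-level video features
            self.query_enc = nn.GRU(word_dim, hidden, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
            self.span_head = nn.Linear(hidden, 2)             # start/end logits per clip

        def forward(self, video_feats, query_feats):
            v = self.video_proj(video_feats)                  # (B, T, H)
            _, q = self.query_enc(query_feats)                # (1, B, H) final state
            q = q.transpose(0, 1)                             # (B, 1, H)
            fused, _ = self.cross_attn(v, q, q)               # query-conditioned video
            logits = self.span_head(fused)                    # (B, T, 2)
            return logits[..., 0], logits[..., 1]             # start/end scores per clip

    start_logits, end_logits = ToyTSGV()(torch.randn(2, 64, 1024), torch.randn(2, 12, 300))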
Counterfactual Cross-modality Reasoning for Weakly Supervised Video Moment Localization
Video moment localization aims to retrieve the target segment of an untrimmed
video according to the natural language query. Weakly supervised methods have gained
attention recently, as the precise temporal location of the target segment is
not always available. However, one of the greatest challenges for weakly
supervised methods lies in the mismatch between video and language induced by
the coarse temporal annotations. To refine the
vision-language alignment, recent works contrast the cross-modality
similarities driven by reconstructing masked queries between positive and
negative video proposals. However, the reconstruction may be influenced by the
latent spurious correlation between the unmasked and the masked parts, which
distorts the restoring process and further degrades the efficacy of contrastive
learning since the masked words are not completely reconstructed from the
cross-modality knowledge. In this paper, we discover and mitigate this spurious
correlation through a novel proposed counterfactual cross-modality reasoning
method. Specifically, we first formulate query reconstruction as an aggregated
causal effect of cross-modality and query knowledge. Then by introducing
counterfactual cross-modality knowledge into this aggregation, the spurious
impact of the unmasked part contributing to the reconstruction is explicitly
modeled. Finally, by suppressing the unimodal effect of the masked query, we can
rectify the reconstructions of video proposals to perform reasonable
contrastive learning. Extensive experimental evaluations demonstrate the
effectiveness of our proposed method. The code is available at
https://github.com/sLdZ0306/CCR.
Comment: Accepted by ACM MM 202
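As a rough illustration of the counterfactual subtraction described above (an
assumption about the mechanism, not the authors' released code), the debiased
reconstruction score can be viewed as the total effect of cross-modal plus
query knowledge minus the effect obtained when the video evidence is replaced
by an uninformative counterfactual input, which isolates and removes the
unimodal contribution of the unmasked words:

    import torch

    def debiased_reconstruction(recon_model, video_feats, masked_query, null_video):
        # Total effect: reconstruction scores from video proposal + masked query.
        total = recon_model(video_feats, masked_query)
        # Counterfactual effect: same query, but cross-modal evidence is "imagined
        # away" (e.g., replaced by a null/zero input), leaving only the spurious
        # unimodal contribution of the unmasked words.
        counterfactual = recon_model(null_video, masked_query)
        # The difference is used to contrast positive vs. negative video proposals.
        return total - counterfactual

    toy_model = lambda v, q: v.mean(dim=1) + q               # stand-in reconstruction model
    scores = debiased_reconstruction(toy_model,
                                     torch.randn(2, 16, 8),  # proposal video features
                                     torch.randn(2, 8),      # masked-query features
                                     torch.zeros(2, 16, 8))  # counterfactual video input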
DiffusionVMR: Diffusion Model for Video Moment Retrieval
Video moment retrieval is a fundamental visual-language task that aims to
retrieve target moments from an untrimmed video based on a language query.
Existing methods typically generate numerous proposals manually or via
generative networks in advance as the support set for retrieval, which is not
only inflexible but also time-consuming. Inspired by the success of diffusion
models on object detection, this work aims at reformulating video moment
retrieval as a denoising generation process to get rid of the inflexible and
time-consuming proposal generation. To this end, we propose a novel
proposal-free framework, namely DiffusionVMR, which directly samples random
spans from noise as candidates and introduces denoising learning to ground
target moments. During training, Gaussian noise is added to the real moments,
and the model is trained to learn how to reverse this process. In inference, a
set of time spans is progressively refined from the initial noise to the final
output. Notably, the training and inference of DiffusionVMR are decoupled, and
an arbitrary number of random spans can be used in inference without being
consistent with the training phase. Extensive experiments conducted on three
widely-used benchmarks (i.e., QVHighlights, Charades-STA, and TACoS) demonstrate
the effectiveness of the proposed DiffusionVMR by comparing it with
state-of-the-art methods.
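The denoising formulation above can be sketched as follows; this is a generic
diffusion-over-spans illustration under an assumed normalized (center, width)
span parameterization and assumed module names, not the released DiffusionVMR
code. Training corrupts ground-truth spans with Gaussian noise at a sampled
timestep, while inference starts from an arbitrary number of random spans and
progressively refines them.

    import torch

    T_STEPS = 1000
    betas = torch.linspace(1e-4, 0.02, T_STEPS)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    def corrupt_spans(gt_spans, t):
        """Forward diffusion q(x_t | x_0) on normalized (center, width) spans."""
        a_bar = alphas_cumprod[t].view(-1, 1, 1)               # (B, 1, 1)
        noise = torch.randn_like(gt_spans)
        return a_bar.sqrt() * gt_spans + (1 - a_bar).sqrt() * noise, noise

    def refine(denoiser, video_feats, query_feats, num_spans=10, steps=5):
        """Inference: start from pure noise and progressively refine span proposals;
        num_spans need not match whatever was used during training."""
        spans = torch.randn(video_feats.size(0), num_spans, 2)
        for t in torch.linspace(T_STEPS - 1, 0, steps).long():
            spans = denoiser(spans, t, video_feats, query_feats)  # predict cleaner spans
        return spans.clamp(0, 1)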
Towards Video Anomaly Retrieval from Video Anomaly Detection: New Benchmarks and Model
Video anomaly detection (VAD) has received increasing attention due to its
potential applications. Its currently dominant tasks focus on online detection
of anomalies at the frame level, which can be roughly interpreted as binary or
multi-class event classification. However, such a setup, which relates
complicated anomalous events to single labels, e.g., "vandalism", is
superficial, since single labels are insufficient to
characterize anomalous events. In reality, users tend to search for a specific
video rather than a series of approximate videos. Therefore, retrieving
anomalous events using detailed descriptions is practical and valuable, but
little research has focused on this. In this context, we propose a novel task called Video
Anomaly Retrieval (VAR), which aims to pragmatically retrieve relevant
anomalous videos via cross-modal queries, e.g., language descriptions and
synchronized audio. Unlike conventional video retrieval, where videos are
assumed to be temporally well-trimmed and short, VAR is devised to retrieve
long untrimmed videos which may be partially relevant to the given query. To
achieve this, we present two large-scale VAR benchmarks, UCFCrime-AR and
XDViolence-AR, constructed on top of prevalent anomaly datasets. Meanwhile, we
design a model called Anomaly-Led Alignment Network (ALAN) for VAR. In ALAN, we
propose an anomaly-led sampling to focus on key segments in long untrimmed
videos. Then, we introduce an efficient pretext task to enhance semantic
associations between fine-grained video and text representations. In addition, we
leverage two complementary alignments to further match cross-modal contents.
Experimental results on two benchmarks reveal the challenges of the VAR task and
also demonstrate the advantages of our tailored method.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
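One way to picture the anomaly-led sampling mentioned above is to bias segment
selection in a long untrimmed video by per-segment anomaly scores, so that the
likely-anomalous key segments dominate the retained representation. The
following is a hypothetical sketch of that idea; the scoring source, shapes,
and names are assumptions rather than the ALAN implementation.

    import torch

    def anomaly_led_sample(segment_feats, anomaly_scores, k=16):
        """Keep k segments, sampled with probability proportional to anomaly scores."""
        probs = torch.softmax(anomaly_scores, dim=-1)          # (num_segments,)
        idx = torch.multinomial(probs, num_samples=k, replacement=False)
        idx, _ = idx.sort()                                    # preserve temporal order
        return segment_feats[idx]                              # (k, feat_dim)

    feats = torch.randn(200, 512)    # features of 200 segments from a long untrimmed video
    scores = torch.randn(200)        # e.g., scores from an off-the-shelf anomaly detector
    key_feats = anomaly_led_sample(feats, scores)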
Cross-Sentence Temporal and Semantic Relations in Video Activity Localisation
Video activity localisation has recently attracted increasing attention due to its practical value in automatically localising the most salient visual segments corresponding to their language descriptions (sentences) from untrimmed and unstructured videos. For supervised model training, a temporal annotation of both the start and end time index of each video segment for a sentence (a video moment) must be given. This is not only very expensive but also sensitive to ambiguity and subjective annotation bias, a much harder task than image labelling. In this work, we develop a more accurate weakly-supervised solution by introducing Cross-Sentence Relations Mining (CRM) in video moment proposal generation and matching, when only a paragraph description of activities without per-sentence temporal annotation is available. Specifically, we explore two cross-sentence relational constraints: (1) temporal ordering and (2) semantic consistency among sentences in a paragraph description of video activities. Existing weakly-supervised techniques only consider within-sentence video segment correlations in training, without considering cross-sentence paragraph context. This can be misleading, because individual sentences in isolation may be ambiguously expressed and matched to visually indistinguishable video moment proposals. Experiments on two publicly available activity localisation datasets show the advantages of our approach over state-of-the-art weakly supervised methods, especially when the video activity descriptions become more complex.
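A minimal sketch of the two cross-sentence constraints follows, assuming each
sentence in a paragraph currently has one best proposal given as normalized
(start, end) times plus an associated feature vector; the exact loss forms are
illustrative assumptions, not the CRM objective. Temporal ordering penalizes
proposals that violate the sentence order, and semantic consistency keeps
proposal features close to their sentence embeddings.

    import torch
    import torch.nn.functional as F

    def temporal_ordering_loss(proposals):
        """Consecutive sentences should follow video order: penalize cases where an
        earlier sentence's proposal center falls after the next sentence's center."""
        centers = proposals.mean(dim=1)                        # (num_sents,)
        return torch.relu(centers[:-1] - centers[1:]).mean()

    def semantic_consistency_loss(sent_embs, prop_embs):
        """Proposal features should agree with their sentence semantics (cosine)."""
        return (1 - F.cosine_similarity(sent_embs, prop_embs, dim=-1)).mean()

    props = torch.tensor([[0.10, 0.30], [0.25, 0.55], [0.60, 0.90]])  # (num_sents, 2)
    loss = temporal_ordering_loss(props) + semantic_consistency_loss(
        torch.randn(3, 256), torch.randn(3, 256))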