VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval
Video Moment Retrieval (VMR) is the task of localizing the temporal moment in
an untrimmed video that is specified by a natural language query. For VMR, several methods
that require full supervision for training have been proposed. Unfortunately,
acquiring a large number of training videos with labeled temporal boundaries
for each query is a labor-intensive process. This paper explores methods for
performing VMR in a weakly-supervised manner (wVMR): training is performed
without temporal moment labels but only with the text query that describes a
segment of the video. Existing wVMR methods generate multi-scale proposals
and apply query-guided attention mechanisms to highlight the most relevant
proposal. To leverage the weak supervision, contrastive learning is used,
which predicts higher scores for correct video-query pairs than for incorrect
ones. It has been observed that a large number of candidate proposals, a coarse
query representation, and a one-way attention mechanism lead to blurry attention
maps that limit localization performance. To handle this issue,
Video-Language Alignment Network (VLANet) is proposed that learns sharper
attention by pruning out spurious candidate proposals and applying a
multi-directional attention mechanism with fine-grained query representation.
The Surrogate Proposal Selection module selects a proposal based on the
proximity to the query in the joint embedding space, and thus substantially
reduces candidate proposals which leads to lower computation load and sharper
attention. Next, the Cascaded Cross-modal Attention module considers dense
feature interactions and multi-directional attention flow to learn the
multi-modal alignment. VLANet is trained end-to-end with a contrastive loss
that pulls semantically similar videos and queries together in the joint
embedding space. Experiments show that the method achieves state-of-the-art
performance on the Charades-STA and DiDeMo datasets.
Comment: 16 pages, 6 figures, European Conference on Computer Vision, 2020
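The contrastive objective described above can be sketched as a margin-based ranking loss over matched and mismatched video-query pairs within a batch. The snippet below is a minimal, hypothetical illustration in PyTorch, not VLANet's exact loss; the function name, the margin value, and the use of cosine similarity are all assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_vmr_loss(video_emb, query_emb, margin=0.1):
    """Margin-based contrastive loss over video-query pairs (sketch).

    video_emb, query_emb: (B, D) embeddings of the selected proposal per
    video and its paired query. Matched pairs share a batch index; every
    other pair in the batch serves as a negative.
    """
    v = F.normalize(video_emb, dim=-1)
    q = F.normalize(query_emb, dim=-1)
    sim = v @ q.t()                # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)  # matched-pair scores, shape (B, 1)
    # hinge: each mismatched pair should score at least `margin` below its match
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    loss_v2q = F.relu(margin + sim - pos)[off_diag].mean()
    loss_q2v = F.relu(margin + sim.t() - pos)[off_diag].mean()
    return loss_v2q + loss_q2v
```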
Temporal Sentence Grounding in Videos: A Survey and Future Directions
Temporal sentence grounding in videos (TSGV), a.k.a. natural language video
localization (NLVL) or video moment retrieval (VMR), aims to retrieve a
temporal moment that semantically corresponds to a language query from an
untrimmed video. Connecting computer vision and natural language, TSGV has
drawn significant attention from researchers in both communities. This survey
attempts to provide a summary of fundamental concepts in TSGV and current
research status, as well as future research directions. As background, we
present a common structure of functional components in TSGV, in a tutorial
style: from feature extraction from raw video and language query, to answer
prediction of the target moment. Then we review the techniques for multimodal
understanding and interaction, which is the key focus of TSGV for effective
alignment between the two modalities. We construct a taxonomy of TSGV
techniques and elaborate the methods in different categories with their
strengths and weaknesses. Lastly, we discuss issues with the current TSGV
research and share our insights about promising research directions.
Comment: 29 pages, 32 figures, 9 tables
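The tutorial-style decomposition the survey describes (feature extraction, multimodal interaction, answer prediction) can be summarized as a pipeline skeleton. The sketch below is purely illustrative; every module name is a placeholder for the concrete choices the survey catalogues, not any particular method's API.

```python
import torch.nn as nn

class TSGVPipeline(nn.Module):
    """Skeleton of the common TSGV structure: feature extraction ->
    multimodal interaction -> answer prediction. Submodules stand in
    for concrete choices (e.g. 3D-CNN clip features, word embeddings
    or a language model for the query, attention-based fusion, and a
    proposal-ranking or span-prediction head)."""

    def __init__(self, video_encoder, query_encoder, interaction, predictor):
        super().__init__()
        self.video_encoder = video_encoder  # raw video -> clip features
        self.query_encoder = query_encoder  # tokens -> word/sentence features
        self.interaction = interaction      # cross-modal alignment/fusion
        self.predictor = predictor          # fused features -> (start, end)

    def forward(self, video, query):
        v = self.video_encoder(video)
        q = self.query_encoder(query)
        fused = self.interaction(v, q)
        return self.predictor(fused)        # predicted temporal moment
```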
D3G: Exploring Gaussian Prior for Temporal Sentence Grounding with Glance Annotation
Temporal sentence grounding (TSG) aims to locate a specific moment from an
untrimmed video with a given natural language query. Weakly supervised
methods still have a large performance gap compared to fully supervised
ones, while the latter require laborious timestamp annotations. In
this study, we aim to reduce the annotation cost while keeping performance
competitive with fully supervised methods. To achieve this
goal, we investigate a recently proposed glance-supervised temporal sentence
grounding task, which requires only single frame annotation (referred to as
glance annotation) for each query. Under this setup, we propose a Dynamic
Gaussian prior based Grounding framework with Glance annotation (D3G), which
consists of a Semantic Alignment Group Contrastive Learning module (SA-GCL) and
a Dynamic Gaussian prior Adjustment module (DGA). Specifically, SA-GCL samples
reliable positive moments from a 2D temporal map by jointly leveraging a
Gaussian prior and semantic consistency, which contributes to aligning the
positive sentence-moment pairs in the joint embedding space. Moreover, to
alleviate the annotation bias resulting from glance annotation and model
complex queries consisting of multiple events, we propose the DGA module, which
adjusts the distribution dynamically to approximate the ground truth of target
moments. Extensive experiments on three challenging benchmarks verify the
effectiveness of the proposed D3G. It outperforms the state-of-the-art weakly
supervised methods by a large margin and narrows the performance gap compared
to fully supervised methods. Code is available at
https://github.com/solicucu/D3G.
Comment: ICCV 2023
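The Gaussian-prior idea is straightforward to sketch: candidate moments are weighted by their temporal distance to the single annotated (glance) frame. The snippet below is an assumption-laden illustration; in D3G the width of the distribution is adjusted dynamically by the DGA module, whereas here it is a fixed parameter, and the combination with semantic similarity is only indicated in a comment.

```python
import torch

def gaussian_prior_weights(centers, glance_t, sigma=0.2):
    """Weight candidate moments by a Gaussian centered on the glance frame.

    centers:  (N,) normalized temporal centers of candidate moments,
              e.g. midpoints of valid cells on a 2D temporal map.
    glance_t: scalar in [0, 1], the single annotated frame for the query.
    sigma:    prior width; fixed here for illustration, adjusted
              dynamically in the actual method.
    """
    return torch.exp(-0.5 * ((centers - glance_t) / sigma) ** 2)

# Illustrative use: combine the prior with semantic similarity to sample
# reliable positive moments for group contrastive learning.
# score = gaussian_prior_weights(centers, glance_t) * cosine_sim
# positives = score.topk(k).indices
```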
Counterfactual Cross-modality Reasoning for Weakly Supervised Video Moment Localization
Video moment localization aims to retrieve the target segment of an untrimmed
video according to a natural language query. Weakly supervised methods have
gained attention recently, as the precise temporal location of the target
segment is not always available. However, one of the greatest challenges for
weakly supervised methods lies in the mismatch between video and language
induced by the coarse temporal annotations. To refine the
vision-language alignment, recent works contrast the cross-modality
similarities driven by reconstructing masked queries between positive and
negative video proposals. However, the reconstruction may be influenced by a
latent spurious correlation between the unmasked and masked parts, which
distorts the restoration process and further degrades the efficacy of
contrastive learning, since the masked words are not reconstructed purely from
cross-modality knowledge. In this paper, we discover and mitigate this spurious
correlation through a novel proposed counterfactual cross-modality reasoning
method. Specifically, we first formulate query reconstruction as an aggregated
causal effect of cross-modality and query knowledge. Then by introducing
counterfactual cross-modality knowledge into this aggregation, the spurious
impact of the unmasked part contributing to the reconstruction is explicitly
modeled. Finally, by suppressing the unimodal effect of masked query, we can
rectify the reconstructions of video proposals to perform reasonable
contrastive learning. Extensive experimental evaluations demonstrate the
effectiveness of our proposed method. The code is available at
https://github.com/sLdZ0306/CCR.
Comment: Accepted by ACM MM 2023
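The counterfactual step admits a compact sketch: score the masked-word reconstruction once with the real video proposal (total effect) and once with a counterfactual visual input (unimodal shortcut), then subtract. The snippet below is a hypothetical rendering of that subtraction, not the paper's exact formulation; the choice of counterfactual input (zeroed features) is an assumption.

```python
import torch

def debiased_reconstruction_score(recon_with_video, recon_without_video):
    """Isolate the cross-modal contribution to masked-query reconstruction.

    recon_with_video:    logits for masked words given the video proposal
                         and the unmasked query context (total effect).
    recon_without_video: logits with the visual input replaced by a
                         counterfactual (e.g. zeroed features), leaving
                         only the unmasked-query shortcut.
    Subtracting suppresses the unimodal effect; proposals can then be
    contrasted on the debiased score.
    """
    return recon_with_video - recon_without_video
```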