Recent DETR-based video grounding models learn moment queries to predict moment
timestamps directly, without hand-crafted components such as pre-defined
proposals or non-maximum suppression. However, their input-agnostic moment
queries inevitably overlook the intrinsic temporal structure of a video and
provide only limited positional information. In this paper,
we formulate an event-aware dynamic moment query that enables the model to take
the input-specific content and positional information of the video into
account. To this end, we present two levels of reasoning: 1) event reasoning,
which captures the distinctive event units constituting a given video using a
slot attention mechanism; and 2) moment reasoning, which fuses the moment
queries with a given sentence through a gated fusion transformer layer and
learns interactions between the moment queries and video-sentence
representations to
predict moment timestamps. Extensive experiments demonstrate the effectiveness
and efficiency of the event-aware dynamic moment queries, which outperform
state-of-the-art approaches on several video grounding benchmarks.

Comment: ICCV 2023. Code is available at https://github.com/jinhyunj/EaT
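
Below is a minimal, illustrative sketch of the two ideas described in the abstract, written in PyTorch with hypothetical module names, shapes, and hyperparameters. It is not the authors' released implementation: slot attention distills per-clip video features into a small set of event slots that act as input-dependent moment queries, and a simple sigmoid gate stands in for the paper's gated fusion transformer layer when conditioning those queries on the sentence.

```python
# Sketch only: event-aware dynamic moment queries via slot attention + gated
# sentence fusion. Shapes, names, and the gating form are assumptions, not the
# paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotAttention(nn.Module):
    """Iterative slot attention (Locatello et al., 2020) over video clip features."""

    def __init__(self, num_slots: int, dim: int, iters: int = 3):
        super().__init__()
        self.iters, self.scale = iters, dim ** -0.5
        self.slots_init = nn.Parameter(torch.randn(1, num_slots, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (batch, num_clips, dim) -> event slots: (batch, num_slots, dim)
        b, n, d = video_feats.shape
        inputs = self.norm_in(video_feats)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = self.slots_init.expand(b, -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over slots so the slots compete for clips (distinct events).
            attn = F.softmax(torch.einsum("bsd,bnd->bsn", q, k) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)
            updates = torch.einsum("bsn,bnd->bsd", attn, v)
            slots = self.gru(updates.reshape(-1, d), slots.reshape(-1, d)).view(b, -1, d)
        return slots


class GatedQueryFusion(nn.Module):
    """Gate each event-aware query with a pooled sentence embedding (illustrative only)."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, queries: torch.Tensor, sent_emb: torch.Tensor) -> torch.Tensor:
        # queries: (batch, num_slots, dim), sent_emb: (batch, dim)
        sent = sent_emb.unsqueeze(1).expand_as(queries)
        g = torch.sigmoid(self.gate(torch.cat([queries, sent], dim=-1)))
        return queries + g * self.proj(sent)


if __name__ == "__main__":
    video = torch.randn(2, 75, 256)   # 75 clip features per video (assumed)
    sentence = torch.randn(2, 256)    # pooled sentence embedding (assumed)
    events = SlotAttention(num_slots=10, dim=256)(video)
    moment_queries = GatedQueryFusion(256)(events, sentence)
    print(moment_queries.shape)       # torch.Size([2, 10, 256])
```

The point of the sketch is the contrast with standard DETR-style grounding: the queries handed to the decoder are derived from the specific input video and gated by the sentence, rather than being fixed, input-agnostic learned embeddings.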