Temporal action detection (TAD) aims to detect all action boundaries and
their corresponding categories in an untrimmed video. Because action
boundaries in videos are often unclear, existing methods tend to produce
imprecise boundary predictions. To resolve this issue, we propose a one-stage
framework named TriDet. First, we propose a Trident-head that models each
action boundary via an estimated relative probability distribution around the
boundary.
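In other words, the boundary is read out as the expectation of a predicted
relative distribution over nearby instants. The snippet below is a minimal
sketch of that expectation step, assuming PyTorch and a simple binned
parameterization; `expected_boundary_offset` and the bin layout are
illustrative assumptions, not the paper's exact head:

```python
import torch
import torch.nn.functional as F

def expected_boundary_offset(logits: torch.Tensor) -> torch.Tensor:
    """Sketch of boundary estimation via a relative distribution.

    logits: (T, B) responses for B candidate boundary bins around each
            of T instants (an assumed parameterization, for illustration).
    Returns the expected boundary offset (in bins) per instant.
    """
    probs = F.softmax(logits, dim=-1)  # relative probability around the boundary
    bins = torch.arange(logits.shape[-1], dtype=probs.dtype, device=logits.device)
    return (probs * bins).sum(dim=-1)  # expectation -> soft, differentiable boundary
```

Because the boundary is an expectation rather than an argmax, it remains
differentiable and degrades gracefully when the true boundary is ambiguous.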
Then, we analyze the rank-loss problem (i.e., instant discriminability
deterioration) in transformer-based methods and propose an efficient
scalable-granularity perception (SGP) layer to mitigate this issue.
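For intuition, the sketch below mimics the two ingredients the SGP layer is
built around: an instant-level branch that contrasts each instant against the
video-level average, and a window-level branch that mixes two temporal
granularities. It is a loose approximation under assumed kernel sizes and
gating, not the paper's exact layer:

```python
import torch
import torch.nn as nn

class SGPSketch(nn.Module):
    """Loose sketch of a scalable-granularity perception layer."""

    def __init__(self, dim: int, w: int = 3, k: int = 3):
        super().__init__()
        self.fc = nn.Conv1d(dim, dim, kernel_size=1)
        # Two depthwise convolutions with different temporal granularities.
        self.conv_w = nn.Conv1d(dim, dim, w, padding=w // 2, groups=dim)
        self.conv_kw = nn.Conv1d(dim, dim, k * w, padding=(k * w) // 2, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, T)
        inst = self.fc(x) * x.mean(dim=-1, keepdim=True)  # instant-level: contrast vs. video average
        win = self.conv_w(x) + self.conv_kw(x)            # window-level: two granularities
        return x + inst + win                             # residual, attention-free
```

Note that everything here is convolutional; avoiding the pairwise softmax
mixing of self-attention is what the efficiency claim and the rank-loss
mitigation rest on.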
To further push the limit of instant discriminability in the video backbone,
we leverage the strong representation capability of pretrained large models
and investigate their performance on TAD. Last, considering that
classification requires adequate spatial-temporal context, we design a
decoupled feature pyramid network with separate feature pyramids to
incorporate the rich spatial context from the large model for localization.
Experimental results demonstrate the robustness of TriDet and its
state-of-the-art performance on multiple TAD datasets, including hierarchical
(multilabel) TAD datasets.

Comment: An extended version of the CVPR paper arXiv:2303.07347, submitted to
IJC