Forcing the Whole Video as Background: An Adversarial Learning Strategy for Weakly Temporal Action Localization
With only video-level labels, weakly supervised temporal action localization (WTAL) applies a localization-by-classification paradigm to detect and classify actions in untrimmed videos. Owing to this classification objective, class-specific background snippets are inevitably mis-activated, since they improve the discriminability of the classifier. To alleviate the disturbance of background, existing methods try to enlarge the discrepancy between action and background by modeling background snippets with pseudo snippet-level annotations, which largely rely on hand-crafted assumptions. Distinct from previous works, we present an adversarial learning strategy that breaks the limitation of mining pseudo background snippets. Concretely, a background classification loss forces the whole video to be regarded as background through a background gradient reinforcement strategy, confusing the recognition model. Conversely, the foreground (action) loss guides the model to focus on action snippets under this condition. As a result, competition between the two classification losses drives the model to strengthen its action modeling. Simultaneously, a novel temporal enhancement network is designed to help the model build temporal relations among affinity snippets under the proposed strategy, further improving localization performance. Finally, extensive experiments on THUMOS14 and ActivityNet1.2 demonstrate the effectiveness of the proposed method.

Comment: 9 pages, 5 figures, conference
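
To make the competing-loss idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: a video-level foreground loss is paired with a snippet-level background loss that labels every snippet as background. The classifier layout, top-k pooling, ratio k_ratio, and equal loss weighting are illustrative assumptions; the paper's background gradient reinforcement mechanism is not reproduced.

    # Sketch of the two competing losses; names and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SnippetClassifier(nn.Module):
        """Maps per-snippet features to (C action classes + 1 background) logits."""
        def __init__(self, feat_dim: int, num_classes: int):
            super().__init__()
            self.fc = nn.Linear(feat_dim, num_classes + 1)  # last index = background

        def forward(self, feats):           # feats: (B, T, D)
            return self.fc(feats)           # logits: (B, T, C+1)

    def adversarial_losses(logits, video_labels, k_ratio=0.125):
        """logits: (B, T, C+1) snippet logits; video_labels: (B, C) multi-hot."""
        B, T, _ = logits.shape
        k = max(1, int(T * k_ratio))

        # Foreground (action) loss: top-k temporal pooling per action class,
        # then multi-label classification against the video-level labels.
        action_logits = logits[..., :-1]                        # (B, T, C)
        topk = action_logits.topk(k, dim=1).values.mean(dim=1)  # (B, C)
        fg_loss = F.binary_cross_entropy_with_logits(topk, video_labels.float())

        # Background loss: push EVERY snippet toward the background class,
        # i.e. treat the whole video as background (the adversarial signal).
        bg_class = logits.size(-1) - 1
        bg_targets = torch.full((B, T), bg_class, dtype=torch.long,
                                device=logits.device)
        bg_loss = F.cross_entropy(logits.reshape(B * T, -1), bg_targets.reshape(-1))

        # The two terms pull in opposite directions; their competition is what
        # is meant to sharpen the action/background separation.
        return fg_loss + bg_loss

    # Usage with hypothetical shapes: B=2 videos, T=64 snippets, D=512, C=20.
    feats = torch.randn(2, 64, 512)
    model = SnippetClassifier(512, 20)
    labels = torch.zeros(2, 20); labels[0, 3] = 1; labels[1, 7] = 1
    loss = adversarial_losses(model(feats), labels)
    loss.backward()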
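The abstract does not detail the temporal enhancement network, so the following is only a guess at one plausible reading of "temporal relations among affinity snippets": each snippet feature is refined by aggregating features from similar snippets via a normalized cosine-affinity matrix. The cosine affinity, softmax temperature tau, and residual update are all assumptions.

    # Hypothetical affinity-based temporal enhancement; not the authors' design.
    import torch
    import torch.nn.functional as F

    def temporal_enhance(feats, tau=0.1):
        """feats: (B, T, D) snippet features -> (B, T, D) enhanced features."""
        normed = F.normalize(feats, dim=-1)                    # unit-norm features
        affinity = torch.bmm(normed, normed.transpose(1, 2))   # (B, T, T) cosine affinity
        weights = F.softmax(affinity / tau, dim=-1)            # row-normalized relation weights
        aggregated = torch.bmm(weights, feats)                 # mix features of related snippets
        return feats + aggregated                              # residual update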