Temporal action localization (TAL), which involves recognizing and locating
action instances, is a challenging task in video understanding. Most existing
approaches directly predict action classes and regress offsets to boundaries,
while overlooking the unequal importance of individual frames. In this paper, we
propose an Action Sensitivity Learning framework (ASL) to tackle this task,
which aims to assess the value of each frame and then leverage the generated
action sensitivity to recalibrate the training procedure. We first introduce a
lightweight Action Sensitivity Evaluator to learn action sensitivity at both
the class level and the instance level. The outputs of the two branches are
combined to reweight the gradients of the two sub-tasks, i.e., classification
and boundary regression. Moreover, based on
the action sensitivity of each frame, we design an Action Sensitive Contrastive
Loss to enhance features, where action-aware frames are sampled as positive
pairs and action-irrelevant frames are pushed away (both mechanisms are
sketched below). Extensive experiments on
various action localization benchmarks (i.e., MultiThumos, Charades,
Ego4D-Moment Queries v1.0, Epic-Kitchens 100, Thumos14, and ActivityNet1.3) show
that ASL surpasses the state of the art in terms of average mAP across multiple
types of scenarios, e.g., single-labeled, densely-labeled, and egocentric.
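
To make the two mechanisms concrete, below is a minimal PyTorch sketch. It is
not the authors' implementation: the function names, tensor shapes, and
hyper-parameters (thresh, tau) are illustrative assumptions. The first function
rescales per-frame classification and regression losses (and hence their
gradients) by the combined class-level and instance-level sensitivities; the
second is an InfoNCE-style contrastive loss that treats high-sensitivity frames
as positives and low-sensitivity frames as negatives.

    import torch
    import torch.nn.functional as F

    def sensitivity_weighted_losses(cls_logits, cls_targets, reg_preds,
                                    reg_targets, class_sens, inst_sens):
        # cls_logits: (T, C); cls_targets: (T,) long; reg_preds/reg_targets: (T, 2).
        # class_sens, inst_sens: (T,) per-frame sensitivities, assumed to be
        # the outputs of the class-level and instance-level evaluator branches.
        w = class_sens * inst_sens  # combine the two branches
        cls_loss = F.cross_entropy(cls_logits, cls_targets, reduction='none')
        reg_loss = F.smooth_l1_loss(reg_preds, reg_targets,
                                    reduction='none').mean(-1)
        # Scaling each frame's loss scales its gradient by the same weight.
        return (w * cls_loss).mean(), (w * reg_loss).mean()

    def action_sensitive_contrastive_loss(feats, sens, thresh=0.5, tau=0.1):
        # feats: (T, D) frame features; sens: (T,) sensitivities in [0, 1].
        # High-sensitivity (action-aware) frames form positive pairs; the
        # low-sensitivity (action-irrelevant) frames are pushed away.
        feats = F.normalize(feats, dim=-1)
        pos, neg = feats[sens >= thresh], feats[sens < thresh]
        if pos.size(0) < 2 or neg.size(0) == 0:
            return feats.new_zeros(())  # degenerate case: nothing to contrast
        anchor, positives = pos[0], pos[1:]
        pos_sim = (anchor @ positives.t()) / tau  # similarities to positives
        neg_sim = (anchor @ neg.t()) / tau        # similarities to negatives
        logits = torch.cat([pos_sim.unsqueeze(1),
                            neg_sim.expand(positives.size(0), -1)], dim=1)
        targets = torch.zeros(positives.size(0), dtype=torch.long,
                              device=feats.device)
        return F.cross_entropy(logits, targets)  # InfoNCE: positive at index 0

Using a single high-sensitivity anchor keeps the sketch short; the loss in the
paper may aggregate over many anchors and select positives and negatives
differently.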