VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection
The recent contrastive language-image pre-training (CLIP) model has shown
great success in a wide range of image-level tasks, revealing remarkable
ability for learning powerful visual representations with rich semantics. An
open and worthwhile problem is efficiently adapting such a strong model to the
video domain and designing a robust video anomaly detector. In this work, we
propose VadCLIP, a new paradigm for weakly supervised video anomaly detection
(WSVAD) that leverages the frozen CLIP model directly, without any additional
pre-training or fine-tuning. Unlike current works that directly feed extracted
features into a weakly supervised classifier for frame-level binary
classification, VadCLIP makes full use of the fine-grained associations between
vision and language afforded by CLIP, and adopts a dual-branch design. One
branch simply utilizes visual features for coarse-grained binary
classification, while the other fully exploits the fine-grained language-image
alignment. With the benefit of this dual-branch design, VadCLIP achieves both
coarse-grained and fine-grained video anomaly detection by transferring
pre-trained knowledge from CLIP to the WSVAD task. We conduct extensive experiments
on two commonly-used benchmarks, demonstrating that VadCLIP achieves the best
performance on both coarse-grained and fine-grained WSVAD, surpassing the
state-of-the-art methods by a large margin. Specifically, VadCLIP achieves
84.51% AP and 88.02% AUC on XD-Violence and UCF-Crime, respectively. Code and
features will be released to facilitate future VAD research.
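
To make the dual-branch idea concrete, below is a minimal sketch, not the authors' released implementation: the feature dimension, class count, head layers, and prompt names are illustrative assumptions. It shows how frozen CLIP frame features could feed both a coarse binary anomaly head and a fine-grained branch that aligns frames with CLIP text embeddings of anomaly-category prompts.

```python
# Hypothetical sketch of a dual-branch WSVAD head on frozen CLIP features.
# Not the authors' architecture; dimensions and class names are assumptions.
import torch
import torch.nn as nn

class DualBranchHead(nn.Module):
    def __init__(self, clip_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Coarse branch: per-frame binary anomaly score from visual features.
        self.binary_head = nn.Sequential(
            nn.Linear(clip_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Fine-grained branch: learnable temperature for vision-language logits.
        self.logit_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, frame_feats: torch.Tensor, text_feats: torch.Tensor):
        # frame_feats: (T, clip_dim) frozen CLIP visual features for T frames.
        # text_feats:  (C, clip_dim) frozen CLIP text embeddings for C prompts.
        coarse = torch.sigmoid(self.binary_head(frame_feats)).squeeze(-1)  # (T,)
        v = frame_feats / frame_feats.norm(dim=-1, keepdim=True)
        t = text_feats / text_feats.norm(dim=-1, keepdim=True)
        fine = self.logit_scale.exp() * v @ t.T  # (T, C) alignment logits
        return coarse, fine

# Usage with random stand-ins for the frozen CLIP encoders' outputs:
head = DualBranchHead()
frames = torch.randn(32, 512)   # 32 frames of a video clip
prompts = torch.randn(14, 512)  # e.g., "normal", "fighting", "shooting", ...
coarse_scores, fine_logits = head(frames, prompts)
```

In this sketch the coarse branch yields frame-level anomaly scores, while the fine-grained logits score each frame against category prompts, mirroring the coarse/fine split described in the abstract; only the lightweight head would be trained, with both CLIP encoders kept frozen.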