Weakly-supervised Micro- and Macro-expression Spotting Based on Multi-level Consistency
Most micro- and macro-expression spotting methods in untrimmed videos suffer
from the burden of video-wise collection and frame-wise annotation.
Weakly-supervised expression spotting (WES) based on video-level labels can
potentially mitigate the complexity of frame-level annotation while achieving
fine-grained frame-level spotting. However, we argue that existing
weakly-supervised methods, which are built on multiple instance learning (MIL),
suffer from inter-modality, inter-sample, and inter-task gaps; the inter-sample
gap arises primarily from differences in sample distribution and duration. Therefore, we
propose a novel and simple WES framework, MC-WES, built on multi-consistency
collaborative mechanisms -- modal-level saliency, video-level distribution,
label-level duration, and segment-level feature consistency strategies -- that
achieves fine frame-level spotting from video-level labels alone while
alleviating the above gaps and incorporating prior knowledge. The modal-level saliency
consistency strategy focuses on capturing key correlations between raw images
and optical flow. The video-level distribution consistency strategy leverages
differences in the sparsity of temporal distributions. The label-level duration
consistency strategy exploits differences in the duration of facial muscle movements.
The segment-level feature consistency strategy emphasizes that features under
the same labels maintain similarity. Experimental results on three challenging
datasets -- CAS(ME), CAS(ME), and SAMM-LV -- demonstrate that MC-WES is
comparable to state-of-the-art fully-supervised methods
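To make the segment-level feature consistency idea concrete, here is a minimal sketch, not the paper's actual loss: a hypothetical `feature_consistency_loss` that penalizes dissimilarity, measured as mean pairwise cosine distance, among segment features sharing the same label. The function name, feature shapes, and the choice of cosine distance are all illustrative assumptions.

```python
import numpy as np

def feature_consistency_loss(features, labels):
    """Illustrative sketch (not the paper's implementation): segments that
    share a label should have similar features, enforced here as the mean
    pairwise cosine distance within each label group."""
    feats = np.asarray(features, dtype=float)
    # L2-normalize each segment feature so dot products are cosine similarities
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    loss, pairs = 0.0, 0
    for lab in set(labels):
        idx = [i for i, l in enumerate(labels) if l == lab]
        # accumulate cosine distance over all same-label segment pairs
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                loss += 1.0 - float(feats[idx[a]] @ feats[idx[b]])
                pairs += 1
    return loss / pairs if pairs else 0.0
```

In this toy form, identical same-label features yield zero loss, while orthogonal same-label features are penalized; segments with different labels contribute nothing.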