AU-Guided Unsupervised Domain Adaptive Facial Expression Recognition
Domain diversities, including inconsistent annotations and varied image
collection conditions, inevitably exist among different facial expression
recognition (FER) datasets, posing an evident challenge for adapting a
FER model trained on one dataset to another. Recent works mainly focus on
domain-invariant deep feature learning with adversarial mechanisms,
ignoring the sibling facial action unit (AU) detection task, which has made
great progress. Since AUs objectively determine facial expressions, this
paper proposes an AU-guided unsupervised Domain Adaptive FER (AdaFER) framework
to relieve the annotation bias between different FER datasets. In AdaFER, we
first leverage an advanced model for AU detection on both the source and target
domains. We then compare the AU results to perform AU-guided annotating: target
faces that share the same AUs as source faces inherit the labels from the
source domain. Meanwhile, to learn domain-invariant compact features, we
employ AU-guided triplet training, which randomly samples
anchor-positive-negative triplets across both domains according to their AUs.
We conduct
extensive experiments on several popular benchmarks and show that AdaFER
achieves state-of-the-art results on all of them.

Comment: This is a very simple CD-FER framework.