
    Expression Empowered ResiDen Network for Facial Action Unit Detection

    The paper explores the topic of Facial Action Unit (FAU) detection in the wild. In particular, we are interested in answering the following questions: (1) how useful are residual connections across dense blocks for face analysis? (2) how useful is the information from a network trained for categorical Facial Expression Recognition (FER) for the task of FAU detection? The proposed network (ResiDen) exploits dense blocks along with residual connections and uses auxiliary information from a FER network. The experiments, performed on the EmotionNet and DISFA datasets, show the usefulness of facial expression information for AU detection. The proposed network achieves state-of-the-art results on both databases, and analysis of the results under a cross-database protocol shows the effectiveness of the network.
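    The abstract describes dense blocks combined with residual connections across them but gives no implementation details. The sketch below is a hypothetical PyTorch rendering of that idea only; the class names, growth rate, layer count, and the 1x1 transition used to match channel counts are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a "residual-dense" block: a standard dense block
# (each layer concatenates its output with its input) wrapped in a residual
# connection. All hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Concatenate the new feature maps with everything seen so far.
        return torch.cat([x, self.conv(x)], dim=1)

class ResiDenBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.dense = nn.Sequential(*layers)
        # 1x1 conv maps the concatenated features back to in_channels so
        # they can be added to the block input (the residual path).
        self.transition = nn.Conv2d(channels, in_channels, kernel_size=1)

    def forward(self, x):
        return x + self.transition(self.dense(x))
```

    In a full model, features from a pretrained FER network would be fused with the output of such blocks as the auxiliary information the abstract mentions; how that fusion is done is not specified here.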

    Occlusion-Adaptive Deep Network for Robust Facial Expression Recognition

    Recognizing the expressions of partially occluded faces is a challenging computer vision problem. Previous expression recognition methods either overlooked this issue or resolved it using extreme assumptions. Motivated by the fact that the human visual system is adept at ignoring occlusions and focusing on non-occluded facial areas, we propose a landmark-guided attention branch to find and discard corrupted features from occluded regions so that they are not used for recognition. An attention map is first generated to indicate whether a specific facial part is occluded and to guide our model to attend to non-occluded regions. To further improve robustness, we propose a facial region branch that partitions the feature maps into non-overlapping facial blocks and tasks each block with predicting the expression independently. This results in more diverse and discriminative features, enabling the expression recognition system to recover even when the face is partially occluded. Owing to the synergistic effects of the two branches, our occlusion-adaptive deep network significantly outperforms state-of-the-art methods on two challenging in-the-wild benchmark datasets and three real-world occluded expression datasets.
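    To make the two-branch idea concrete, here is a minimal PyTorch sketch under stated assumptions: an attention branch that down-weights occluded regions of a backbone feature map, and a region branch that splits the feature map into non-overlapping blocks, each predicting the expression independently. The module shapes, the 7-class output, the 2x2 grid, and the way the attention map is produced are assumptions; in particular, the landmark guidance described in the abstract is simplified to a learned per-location score here.

```python
# Hypothetical sketch of an occlusion-adaptive head with an attention
# branch and a facial region branch. Shapes and hyperparameters are
# illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class OcclusionAdaptiveHead(nn.Module):
    def __init__(self, channels=512, num_classes=7, grid=2):
        super().__init__()
        self.grid = grid
        # Attention branch: predicts a per-location "non-occluded" score.
        self.attention = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.global_fc = nn.Linear(channels, num_classes)
        # Region branch: one classifier per non-overlapping facial block.
        self.region_fc = nn.ModuleList(
            [nn.Linear(channels, num_classes) for _ in range(grid * grid)]
        )

    def forward(self, feat):                      # feat: (B, C, H, W)
        # Attention-weighted global prediction.
        attn = self.attention(feat)               # (B, 1, H, W)
        pooled = (feat * attn).mean(dim=(2, 3))   # (B, C)
        global_logits = self.global_fc(pooled)

        # Independent predictions from non-overlapping blocks.
        b, c, h, w = feat.shape
        gh, gw = h // self.grid, w // self.grid
        region_logits = []
        for i in range(self.grid):
            for j in range(self.grid):
                block = feat[:, :, i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                region_logits.append(
                    self.region_fc[i * self.grid + j](block.mean(dim=(2, 3)))
                )
        # Combine the averaged per-block predictions with the global one.
        return global_logits + torch.stack(region_logits).mean(dim=0)
```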