Facial micro-expressions grand challenge 2018 summary
Abstract
This paper summarises the Facial Micro-Expression Grand Challenge (MEGC 2018), held in conjunction with the 13th IEEE Conference on Automatic Face and Gesture Recognition (FG) 2018. In this workshop, we aim to stimulate new ideas and techniques for facial micro-expression analysis by proposing a new cross-database challenge. Two state-of-the-art datasets, CASME II and SAMM, are used to validate the performance of existing and new algorithms. The challenge also advocates the recognition of micro-expressions based on AU-centric objective classes rather than emotional classes. We present a summary and analysis of the baseline results using LBP-TOP, HOOF and 3DHOG, together with results from the challenge submissions.
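As an illustration of the LBP-TOP baseline named above, the sketch below computes Local Binary Pattern histograms on the three orthogonal planes (XY, XT, YT) of a video volume and concatenates them. The function names, the plain 8-neighbour LBP variant, and the single-block histogram are simplifying assumptions, not the exact configuration used in the challenge baselines (which typically use block-based histograms).

```python
import numpy as np

def lbp_image(plane):
    """8-neighbour LBP codes for one 2-D plane (borders skipped)."""
    c = plane[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = plane[1+dy:plane.shape[0]-1+dy, 1+dx:plane.shape[1]-1+dx]
        code |= (shifted >= c).astype(np.uint8) << bit
    return code

def lbp_top(video):
    """Concatenated 256-bin LBP histograms over the XY, XT and YT planes."""
    T, H, W = video.shape
    hists = []
    for planes in (
        [video[t] for t in range(T)],          # XY planes (appearance)
        [video[:, y, :] for y in range(H)],    # XT planes (horizontal motion)
        [video[:, :, x] for x in range(W)],    # YT planes (vertical motion)
    ):
        codes = np.concatenate([lbp_image(p).ravel() for p in planes])
        hist = np.bincount(codes, minlength=256).astype(float)
        hists.append(hist / hist.sum())
    return np.concatenate(hists)  # 3 x 256 = 768-dim descriptor

clip = np.random.randint(0, 256, size=(30, 64, 64), dtype=np.uint8)
print(lbp_top(clip).shape)  # (768,)
```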
MEGC 2019: the second facial micro-expressions grand challenge
Abstract
Automatic facial micro-expression (ME) analysis is a growing field of research that has gained much attention in the last five years. With many recent works testing on limited data, there is a need to spur better approaches that are both robust and effective. This paper summarises the 2nd Facial Micro-Expression Grand Challenge (MEGC 2019), held in conjunction with the 14th IEEE Conference on Automatic Face and Gesture Recognition (FG) 2019. In this workshop, we proposed challenges for two ME tasks, spotting and recognition, with the aim of encouraging rigorous evaluation and the development of new, robust techniques that can accommodate data captured across a variety of settings. In this paper, we outline the evaluation protocols for the two challenge tasks and the datasets involved, and present an analysis of the best-performing works from the participating teams, together with a summary of results. Finally, we highlight some possible future directions.
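The recognition task in this challenge is commonly scored with class-balanced metrics, unweighted F1 (UF1) and unweighted average recall (UAR), which average per-class scores so that rare ME classes count as much as frequent ones. The sketch below is a minimal, illustrative implementation of these two metrics; the function and variable names are assumptions, and fold-wise aggregation under leave-one-subject-out evaluation is omitted for brevity.

```python
import numpy as np

def uf1_uar(y_true, y_pred, n_classes):
    """Unweighted F1 and unweighted average recall over all classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
        recalls.append(tp / (tp + fn) if (tp + fn) else 0.0)
    return np.mean(f1s), np.mean(recalls)  # UF1, UAR

uf1, uar = uf1_uar([0, 0, 1, 2, 2], [0, 1, 1, 2, 0], n_classes=3)
print(f"UF1={uf1:.3f}  UAR={uar:.3f}")
```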
MEGC 2020: the third facial micro-expression grand challenge
Abstract
Automatic facial micro-expression analysis has attracted a lot of attention in the last five years. Compared to the advances made in micro-expression recognition, the task of micro-expression spotting from long videos is in urgent need of more effective methods. This paper summarises the 3rd Facial Micro-Expression Grand Challenge (MEGC 2020), held in conjunction with the 15th IEEE Conference on Automatic Face and Gesture Recognition (FG) 2020. In this workshop, we propose a new challenge of spotting both macro- and micro-expressions from long videos, to spur the community to develop new techniques for micro-expression spotting and to extend facial micro-expression analysis to more complex real-world scenarios where micro-expressions are likely to be intertwined with normal expressions. In this paper, we outline the evaluation protocols for the challenge task and describe the datasets involved. We then summarise the methods from the accepted challenge papers, present a comparison and analysis of results, and discuss future directions.
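Spotting challenges of this kind typically score a proposed (onset, offset) interval as a true positive when its Intersection-over-Union (IoU) with an unmatched ground-truth interval reaches a threshold (0.5 is the commonly used value), and report an F1-score over all videos. The sketch below illustrates that idea; treat the greedy matching and exact protocol details as assumptions rather than the official MEGC 2020 evaluation code.

```python
def iou(a, b):
    """IoU of two inclusive frame intervals given as (onset, offset)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotting_f1(proposals, ground_truth, thresh=0.5):
    """F1 with greedy one-to-one matching at the given IoU threshold."""
    matched, tp = set(), 0
    for p in proposals:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(p, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(proposals) - tp, len(ground_truth) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

print(spotting_f1([(10, 40), (100, 120)], [(12, 38), (200, 230)]))  # 0.5
```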
Why is the winner the best?
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study of all 80 competitions conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses of comprehensive descriptions of the submitted algorithms, linked to their ranks and the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: reflecting the metrics in the method design and focusing on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on the open research questions revealed by this work.