Flow-Attention-based Spatio-Temporal Aggregation Network for 3D Mask Detection
Anti-spoofing detection has become a necessity for face recognition systems
due to the security threat posed by spoofing attacks. Despite great success
against traditional attacks, most deep-learning-based methods perform poorly
against 3D masks, which closely imitate real faces in appearance and
structure: focusing only on the spatial domain with single-frame input, these
methods suffer from insufficient generalizability. This has been mitigated by
the recent introduction of a biomedical technique called rPPG (remote
photoplethysmography). However, rPPG-based methods are sensitive to noise
interference and require at least one second (> 25 frames) of observation
time, which incurs high computational overhead. To address these challenges, we
propose a novel 3D mask detection framework, called FASTEN
(Flow-Attention-based Spatio-Temporal aggrEgation Network). We tailor the
network to focus on fine-grained details in large movements, which eliminates
redundant spatio-temporal feature interference and quickly captures the
splicing traces of 3D masks from fewer frames. Our proposed network contains
three key modules: 1) a facial optical flow network to obtain non-RGB
inter-frame flow information; 2) flow attention to assign different
significance to each frame; 3) spatio-temporal aggregation to aggregate
high-level spatial features and temporal transition features. In extensive
experiments, FASTEN requires only five frames of input and outperforms eight
competitors in both intra-dataset and cross-dataset evaluations across
multiple detection metrics. Moreover, FASTEN has been deployed on real-world
mobile devices for practical 3D mask detection.
Comment: 13 pages, 5 figures. Accepted to NeurIPS 202
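The frame-weighting idea behind the flow attention and aggregation modules can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: per-frame inter-frame flow magnitudes are turned into softmax weights, and the per-frame feature vectors are aggregated as a weighted sum, so frames with larger motion contribute more.

```python
import math

def flow_attention_aggregate(frame_features, flow_magnitudes):
    """Toy sketch of flow attention: weight each frame's feature vector
    by a softmax over its inter-frame flow magnitude, then sum the
    weighted vectors into one aggregated representation.
    NOTE: hypothetical simplification of FASTEN's modules."""
    # Numerically stable softmax: frames with larger motion get more weight.
    m = max(flow_magnitudes)
    exps = [math.exp(f - m) for f in flow_magnitudes]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of per-frame feature vectors.
    dim = len(frame_features[0])
    return [sum(w * feat[d] for w, feat in zip(weights, frame_features))
            for d in range(dim)]

# Five frames (matching FASTEN's five-frame input) with toy 3-d features;
# the last frame carries the largest motion and thus dominates the output.
feats = [[1.0, 0.0, 0.0]] * 4 + [[0.0, 1.0, 0.0]]
flows = [0.1, 0.1, 0.1, 0.1, 2.0]
agg = flow_attention_aggregate(feats, flows)
```

Since the weights sum to one, the aggregated vector stays on the scale of the input features while emphasising high-motion frames.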
Explainable and Interpretable Face Presentation Attack Detection Methods
Decision support systems based on machine learning (ML) techniques are excelling in most artificial intelligence (AI) fields, outperforming other AI methods as well as humans. However, challenges remain that do not favour the dominance of AI in some applications. This proposal focuses on a critical one: the lack of transparency and explainability, which reduces the trust in and accountability of an AI system. The fact that most AI methods still operate as complex black boxes keeps the inner processes that sustain their predictions unattainable. Awareness of these observations fosters the need to regulate many sensitive domains where AI has been applied, in order to interpret, explain and audit the reliability of ML-based systems.
Although modern-day biometric recognition (BR) systems are already benefiting from the performance gains achieved with AI (which can account for and learn subtle changes in the person to be authenticated, or statistical mismatches between samples), the field is still in the dark ages of black-box models, without reaping the benefits of the XAI field. This work will focus on studying AI explainability in the field of biometrics, in particular use cases in BR such as verification/identification of individuals and liveness detection (LD) (a.k.a. anti-spoofing).
The main goals of this work are: i) to become acquainted with the state of the art in explainability, biometric recognition and PAD methods; ii) to develop an experimental work xxxxx
Tasks 1st semester
(1) Study of the state of the art: bibliography review on the state of the art for presentation attack detection
(2) Get acquainted with the group's previous work on the topic
(3) Data preparation and data pre-processing
(4) Define the experimental protocol, including performance metrics
(5) Perform baseline experiments
(6) Write the monograph
Tasks 2nd semester
(1) Update on the state of the art
(2) Data preparation and data pre-processing
(3) Propose and implement a methodology for interpretability in biometrics
(4) Evaluation of the performance and comparison with baseline and state of the art approaches
(5) Dissertation writing
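One simple, model-agnostic starting point for the interpretability methodology in task (3) above is occlusion sensitivity: slide a patch over the input, mask it out, and measure how much the classifier's score drops. The sketch below is a hypothetical illustration (names and the toy classifier are invented for the example), not the methodology the proposal will ultimately adopt:

```python
def occlusion_map(image, score_fn, patch=2, fill=0.0):
    """Occlusion-sensitivity heatmap: replace each patch of the image
    with a constant and record the resulting drop in the model's
    score. Large drops mark regions the model relies on.
    `score_fn` is any function mapping a 2-D image to a scalar score.
    NOTE: illustrative sketch, not the proposal's final method."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in image]
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = fill
            drop = base - score_fn(occluded)
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    heat[yy][xx] = drop
    return heat

# Toy "liveness classifier" that only looks at the top-left quadrant.
def toy_score(img):
    return sum(img[y][x] for y in range(2) for x in range(2)) / 4.0

img = [[1.0] * 4 for _ in range(4)]
heat = occlusion_map(img, toy_score, patch=2)
# Only occluding the top-left patch changes this toy model's score.
```

Because it treats the model as a black box, the same procedure applies unchanged to any baseline or state-of-the-art PAD classifier being compared in task (4).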
Main bibliographic references: (*)
[Doshi17] B. Kim and F. Doshi-Velez, "Interpretable machine learning: The fuss, the concrete and the questions," 2017.
[Mol19] C. Molnar, Interpretable Machine Learning, 2019.
[Sei18] C. Seibold, W. Samek, A. Hilsmann, and P. Eisert, "Accurate and robust neural networks for security related applications exampled by face morphing attacks," arXiv preprint arXiv:1806.04265, 2018.
[Seq20] A. F. Sequeira, J. T. Pinto, W. Silva, T. Gonçalves, and J. S. Cardoso, "Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?," 8th IWBF, 2020.
[Wilson18] W. Silva, K. Fernandes, M. J. Cardoso, and J. S. Cardoso, "Towards complementary explanations using deep neural networks," in Understanding and Interpreting Machine Learning in MICA, Springer, 2018.
[Wilson19] W. Silva, K. Fernandes, and J. S. Cardoso, "How to produce complementary explanations using an Ensemble Model," in IJCNN, 2019.
[Wilson19A] W. Silva, M. J. Cardoso, and J. S. Cardoso, "Image captioning as a proxy for Explainable Decisions," in Understanding and Interpreting Machine Learning in MICA, 2019 (Submitted).