Static and Dynamic Fusion for Multi-modal Cross-ethnicity Face Anti-spoofing
Although both deep learning and handcrafted methods are widely used, the
dynamic information in videos and the effect of cross-ethnicity are rarely
considered in face anti-spoofing. In this work, we propose a static-dynamic
fusion mechanism for multi-modal face anti-spoofing. Motivated by the motion
divergences between real and fake faces, we feed the dynamic image computed by
rank pooling, together with static information, into a convolutional neural
network (CNN) for each modality (i.e., RGB, depth, and infrared (IR)). Then, we develop a
partially shared fusion method to learn complementary information from multiple
modalities. Furthermore, in order to study the generalization capability of the
proposal in terms of cross-ethnicity attacks and unknown spoofs, we introduce
the largest public cross-ethnicity Face Anti-spoofing (CASIA-CeFA) dataset,
covering 3 ethnicities, 3 modalities, 1607 subjects, and 2D plus 3D attack
types. Experiments demonstrate that the proposed method achieves
state-of-the-art results on CASIA-CeFA, CASIA-SURF, OULU-NPU, and SiW.
Comment: 10 pages, 9 figures, conference
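The dynamic image used above is produced by rank pooling over the frames of a clip. A minimal numpy sketch of the widely used closed-form approximation (approximate rank pooling, which weights frame t of a T-frame clip by alpha_t = 2t - T - 1, so late frames contribute positively and early frames negatively) might look like the following; the function name and the toy clip are illustrative, not taken from the paper:

```python
import numpy as np

def approximate_rank_pooling(frames):
    """Collapse a video clip into a single 'dynamic image'.

    Uses the closed-form approximation of rank pooling:
    frame t (1-indexed) gets weight alpha_t = 2t - T - 1,
    so the result encodes the temporal evolution of
    appearance rather than any single frame.
    """
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2.0 * t - T - 1.0          # shape (T,), sums to 0
    # weighted sum over the time axis -> one (H, W) image
    return np.tensordot(alpha, frames, axes=(0, 0))

# toy clip: 4 frames of 2x2 "pixels" brightening over time
clip = np.stack([np.full((2, 2), v) for v in [1.0, 2.0, 3.0, 4.0]])
dyn = approximate_rank_pooling(clip)
```

Because the weights sum to zero, a perfectly static clip pools to an all-zero dynamic image, which is one intuition for why this representation highlights motion cues that separate real faces from replayed ones.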
Creating Artificial Modalities to Solve RGB Liveness
Special cameras that provide useful features for face anti-spoofing are
desirable, but not always an option. In this work we propose a method to
utilize the difference in dynamic appearance between bona fide and spoof
samples by creating artificial modalities from RGB videos. We introduce two
types of artificial transforms, rank pooling and optical flow, combined in an
end-to-end pipeline for spoof detection. We demonstrate that using intermediate
representations that contain fewer identity and fine-grained features increases
model robustness to unseen attacks as well as to unseen ethnicities. The
proposed method achieves state-of-the-art results on the largest cross-ethnicity face
anti-spoofing dataset, CASIA-SURF CeFA (RGB).
Comment: CVPRW2020
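The idea of "artificial modalities" is to derive motion-centric maps from a plain RGB clip and feed them to separate network branches. A rough numpy sketch, assuming grayscale frames of shape (T, H, W), with a rank-pooled dynamic image as one channel and a simple temporal-difference magnitude map standing in for optical flow (the paper uses real optical flow; the stand-in and all names here are illustrative assumptions):

```python
import numpy as np

def temporal_motion_map(frames):
    """Crude optical-flow stand-in: per-pixel mean absolute
    temporal difference, which highlights moving regions."""
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).mean(axis=0)

def artificial_modalities(frames):
    """Turn a (T, H, W) grayscale clip into two artificial
    'modality' maps: an approximate rank-pooled dynamic image
    and a motion-magnitude map, stacked as (2, H, W)."""
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    alpha = 2.0 * np.arange(1, T + 1) - T - 1.0
    dynamic = np.tensordot(alpha, frames, axes=(0, 0))
    motion = temporal_motion_map(frames)
    return np.stack([dynamic, motion])

# toy clip: a single pixel flashes on in the middle frame
clip = np.zeros((3, 2, 2))
clip[1, 0, 0] = 1.0
mods = artificial_modalities(clip)
```

Both maps discard most identity texture and keep temporal structure, which is the property the abstract credits for better generalization to unseen attacks and ethnicities.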
Deep convolutional neural networks for face and iris presentation attack detection: Survey and case study
Biometric presentation attack detection is gaining increasing attention.
Users of mobile devices find it more convenient to unlock their smart
applications with finger, face or iris recognition instead of passwords. In
this paper, we survey the approaches presented in the recent literature to
detect face and iris presentation attacks. Specifically, we investigate the
effectiveness of fine-tuning very deep convolutional neural networks for the
task of face and iris anti-spoofing. We compare two different fine-tuning
approaches on six publicly available benchmark datasets. Results show the
effectiveness of these deep models in learning discriminative features that can
tell apart real from fake biometric images with a very low error rate.
Cross-dataset evaluation on face PAD showed better generalization than the
state of the art. We also performed cross-dataset testing on iris PAD datasets
in terms of equal error rate, which had not been reported in the literature before. Additionally,
we propose the use of a single deep network trained to detect both face and
iris attacks. We observed no accuracy degradation compared to networks trained
separately on a single biometric. Finally, we analyzed the features learned by
the network, in correlation with image frequency components, to justify its
prediction decisions.
Comment: A preprint of a paper accepted by the IET Biometrics journal and subject to Institution of Engineering and Technology Copyright
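The frequency-component analysis mentioned in the last abstract can be illustrated with a small numpy sketch: split an image's 2-D FFT magnitude spectrum into low- and high-frequency bands and compare their energy. The specific function, cutoff, and radial-band scheme below are assumptions for illustration, not the paper's exact analysis:

```python
import numpy as np

def band_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial
    frequency cutoff. Recaptured (printed/replayed) spoof
    images often show a different high-frequency profile
    than live captures."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial distance from the spectrum center
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    high = spec[r > cutoff].sum()
    return high / spec.sum()

# a constant image has all energy at DC (ratio 0), while
# noise spreads energy across the spectrum (ratio > 0)
flat_ratio = band_energy_ratio(np.ones((8, 8)))
noise_ratio = band_energy_ratio(np.random.default_rng(0).random((8, 8)))
```

Correlating such band statistics with a network's per-image scores is one simple way to check whether its decisions track frequency cues rather than identity content.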