20 research outputs found
Presentation Attack detection using Wavelet Transform and Deep Residual Neural Net
Biometric authentication is becoming more prevalent in secure
authentication systems. However, biometric systems can be deceived by
impostors in several ways. Print attacks, mask attacks, and replay attacks
all fall under the presentation attack category. Biometric images,
especially of the iris and face, are vulnerable to different presentation
attacks. This research applies deep learning approaches to mitigate
presentation attacks in a biometric access control system. Our
contribution in this paper is two-fold: First, we applied the wavelet transform
to extract features from the biometric images. Second, we modified a deep
residual neural net and applied it to spoof datasets to detect
presentation attacks. We applied the proposed approach to biometric spoof
datasets, namely the ATVS, CASIA two class, and CASIA cropped image sets.
The datasets used in this research contain images captured in both
controlled and uncontrolled environments, with different resolutions and
sizes. We obtained a best accuracy of 93% on the ATVS Iris datasets. For the
CASIA two class and CASIA cropped datasets, we achieved test accuracies of 91%
and 82%, respectively.
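The wavelet feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single-level 2D Haar transform (the abstract does not specify the wavelet family or decomposition depth) and even image dimensions. The resulting sub-band images are the kind of features that could then be fed to a residual network.

```python
import numpy as np

def haar_dwt2(image):
    """Single-level 2D Haar wavelet transform (illustrative sketch).

    Returns the four sub-bands (LL, LH, HL, HH): LL is a half-resolution
    approximation of the image; LH, HL, and HH carry horizontal, vertical,
    and diagonal detail energy. Assumes even height and width.
    """
    # Transform along columns: average/difference pairs of adjacent rows.
    lo = (image[0::2, :] + image[1::2, :]) / 2.0
    hi = (image[0::2, :] - image[1::2, :]) / 2.0
    # Transform along rows: average/difference pairs of adjacent columns.
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

With this normalization each LL coefficient is the mean of a 2x2 block, so the LL band preserves the image's overall intensity while the detail bands expose the high-frequency texture that spoof artifacts (print dots, screen moiré) tend to disturb.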
On the Robustness of Face Recognition Algorithms Against Attacks and Bias
Face recognition algorithms have demonstrated very high recognition
performance, suggesting suitability for real-world applications. Despite these
enhanced accuracies, the robustness of these algorithms against attacks and bias
has been challenged. This paper summarizes different ways in which the
robustness of a face recognition algorithm is challenged, which can severely
affect its intended working. Different types of attacks such as physical
presentation attacks, disguise/makeup, digital adversarial attacks, and
morphing/tampering using GANs have been discussed. We also present a discussion
on the effect of bias on face recognition models and showcase that factors such
as age and gender variations affect the performance of modern algorithms. The
paper also presents the potential reasons for these challenges and some of the
future research directions for increasing the robustness of face recognition
models. Comment: Accepted in the Senior Member Track, AAAI202
What you can't see can help you -- extended-range imaging for 3D-mask presentation attack detection
We show how imagery in the NIR and LWIR bandwidths can be used to detect 2D as well as mask-based presentation attacks on face-recognition systems.