
    Building Trustworthy AI for Biometrics
    In the recent past, face recognition and Eye Authentication (EA) have been widely used for biometric authentication, especially in mission-critical applications such as surveillance, security, and border patrol. Since the introduction of Deep Convolutional Neural Networks (DCNNs), the accuracy of face recognition and eye authentication algorithms has increased significantly, and this improvement has led to their use in a growing number of applications. However, these networks exhibit bias with respect to sensitive attributes (such as gender and skin tone) and are also susceptible to privacy leakage and spoofing attacks by malicious agents. Therefore, in this dissertation, we investigate the trustworthiness of DCNN-based models used in biometric authentication and propose techniques to improve it. In the context of face-based authentication, (i) we present an approach for evaluating the reliability of deep features in performing face verification; we term this reliability measure 'iconicity'. (ii) We study the implicit encoding of sensitive-attribute information in face recognition features extracted from different layers of a previously trained network. (iii) We present an adversarial approach to reduce the implicit encoding of sensitive attributes in features extracted from a pre-trained network, which helps reduce the gender and skin-tone bias exhibited by such features. (iv) We also propose a non-adversarial, distillation-based approach to mitigate bias while maintaining reasonable face verification accuracy. For eye authentication, (v) we present a distillation-based approach to make eye authentication networks resilient to presentation (spoof) attacks. Finally, since two of our proposed methods use vanilla knowledge distillation, (vi) we present an attention-based mechanism to improve knowledge transfer in a typical distillation step.
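
    Since items (iv)-(vi) build on vanilla knowledge distillation, the following is a minimal, illustrative sketch of the standard distillation loss (soft-target KL term plus hard-label cross-entropy), not the dissertation's specific formulation; the module names, temperature, and weighting below are assumptions for illustration only.

    ```python
    # Minimal sketch of vanilla knowledge distillation (Hinton et al.).
    # Teacher/student modules, temperature T, and alpha are illustrative choices,
    # not taken from the dissertation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Blend soft-target matching against the teacher with hard-label cross-entropy."""
        # Soften both distributions with temperature T; the KL term transfers the
        # teacher's output distribution ("dark knowledge") to the student.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Toy usage: small linear classifiers stand in for teacher and student networks.
    teacher = nn.Linear(128, 10)
    student = nn.Linear(128, 10)
    x = torch.randn(8, 128)
    y = torch.randint(0, 10, (8,))
    with torch.no_grad():
        t_logits = teacher(x)  # teacher is frozen during distillation
    loss = distillation_loss(student(x), t_logits, y)
    loss.backward()
    ```

    The attention-based mechanism in (vi) would modify how intermediate representations are matched during this transfer step, rather than the loss skeleton shown above.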