
    On Generative Adversarial Network Based Synthetic Iris Presentation Attack And Its Detection

    The human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national identification projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various presentation attacks. In this thesis, a novel iris presentation attack using deep learning based synthetically generated iris images is presented. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, a new framework, named iDCGAN, is proposed for creating realistic-appearing synthetic iris images. An in-depth analysis is performed using quality score distributions of real and synthetically generated iris images to understand the effectiveness of the proposed approach. We also demonstrate that synthetically generated iris images can be used to attack existing iris recognition systems. Since synthetically generated iris images can be effectively deployed in iris presentation attacks, it is important to develop accurate iris presentation attack detection algorithms that can distinguish such synthetic iris images from real iris images. For this purpose, a novel structural and textural feature-based iris presentation attack detection framework (DESIST) is proposed. The key emphasis of DESIST is on developing a unified framework for detecting a medley of iris presentation attacks, including synthetic iris. Experimental evaluations showcase the efficacy of the proposed DESIST framework in detecting synthetic iris presentation attacks.
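    A minimal sketch of the kind of deep convolutional GAN generator the abstract refers to, written in PyTorch. This is only an illustrative approximation: the layer sizes, latent dimension, output resolution, and the absence of the iris-quality feedback used by iDCGAN are all assumptions, not the thesis's exact architecture.

```python
# Illustrative DCGAN-style generator for 64x64 grayscale (iris-like) images.
# Not the actual iDCGAN; hyperparameters and layout are assumed for the sketch.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, channels=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(inplace=True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(inplace=True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(inplace=True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(inplace=True),
            # 32x32 -> 64x64 single-channel image in [-1, 1]
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Usage: sample a batch of synthetic images from random latent vectors.
z = torch.randn(16, 100, 1, 1)
fake_irises = Generator()(z)   # shape: (16, 1, 64, 64)
```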

    Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models

    Advances in deep neural networks (DNNs) have shown tremendous promise in the medical domain. However, the deep learning tools that are helping the domain can also be used against it. Given the prevalence of fraud in the healthcare domain, it is important to consider the adversarial use of DNNs in manipulating sensitive data that is crucial to patient healthcare. In this work, we present the design and implementation of a DNN-based image translation attack on biomedical imagery. More specifically, we propose Jekyll, a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition. The potential for fraudulent claims based on such generated 'fake' medical images is significant, and we demonstrate successful attacks on both X-ray and retinal fundus image modalities. We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes. Lastly, we also investigate defensive measures based on machine learning to detect images generated by Jekyll. (Published in the proceedings of the 5th European Symposium on Security and Privacy, EuroS&P '20.)
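    A hedged sketch of one training step for a GAN-based image-to-image translation model, in the spirit of the attack the abstract describes. The real Jekyll framework is a neural style transfer pipeline whose exact networks and losses are not given in the abstract; the paired pix2pix-style adversarial plus L1 objective below, and the names `G`, `D`, `translation_step`, and `lambda_l1`, are assumptions used purely for illustration.

```python
import torch
import torch.nn.functional as F

def translation_step(G, D, opt_G, opt_D, src, target, lambda_l1=100.0):
    """src: input image batch; target: paired images showing the
    attacker-chosen condition (pairing assumed for this sketch)."""
    # --- discriminator update: real pairs vs. generated pairs ---
    fake = G(src)
    d_real = D(torch.cat([src, target], dim=1))
    d_fake = D(torch.cat([src, fake.detach()], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- generator update: fool D and stay close to the target appearance ---
    d_fake = D(torch.cat([src, fake], dim=1))
    loss_G = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) +
              lambda_l1 * F.l1_loss(fake, target))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```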

    Automatic Kinship Verification in Unconstrained Faces using Deep Learning

    Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. Identifying kinship relations has also garnered interest due to several potential applications in security and surveillance and in organizing and tagging the enormous number of videos being uploaded on the Internet. This dissertation makes a five-fold contribution: first, a study is conducted to gain insight into the kinship verification process used by humans; in addition, two separate deep learning based methods are proposed to solve kinship verification in images and videos. Other contributions of this research include interlinking face verification with kinship verification and the creation of two kinship databases to facilitate research in this field. The WVU Kinship Database is created, consisting of multiple images per subject to facilitate kinship verification research. Next, a kinship video (KIVI) database of more than 500 individuals with variations due to illumination, pose, occlusion, ethnicity, and expression is collected for this research. It comprises a total of 355 true kin video pairs with over 250,000 still frames.
    In this dissertation, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that provide kinship cues. The visual stimuli presented to the participants assess their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender, age, and the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d′, and perceptual information entropy.
    Next, utilizing the information obtained from the human study, a hierarchical Kinship Verification via Representation Learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as the output of the learned model, and a multi-layer neural network is utilized to verify kinship accurately. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU Kinship Database and on four existing benchmark datasets.
    Additionally, we propose a new deep learning framework for kinship verification in unconstrained videos using a novel Supervised Mixed Norm regularization Autoencoder (SMNAE). This new autoencoder formulation introduces class-specific sparsity in the weight matrix. The proposed three-stage SMNAE-based kinship verification framework utilizes the learned spatio-temporal representation of the video frames for verifying kinship in a pair of videos. The effectiveness of the proposed framework is demonstrated on the KIVI database and six existing kinship databases. On the KIVI database, SMNAE yields a video-based kinship verification accuracy of 83.18%, which is at least 3.2% better than existing algorithms. The algorithm is also evaluated on six publicly available kinship databases and compared with the best reported results. It is observed that the proposed SMNAE consistently yields the best results on all the databases.
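    A hedged sketch of an autoencoder objective with an L2,1 mixed-norm penalty on the encoder weight matrix, as a rough illustration of sparsity via mixed-norm regularization. The exact SMNAE formulation (how classes map to weight groups, the supervision term, the three-stage pipeline) is not given in the abstract, so the tiny model, the penalty placement, and names such as `TinyAutoencoder` and `smn_loss` are assumptions.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        return self.dec(torch.sigmoid(self.enc(x)))

def l21_norm(W):
    # L2,1 mixed norm: sum of row-wise L2 norms, which pushes
    # entire rows (groups) of weights toward zero.
    return W.norm(dim=1).sum()

def smn_loss(model, x, lam=1e-3):
    # Reconstruction error plus the mixed-norm penalty on the encoder weights.
    recon = model(x)
    return nn.functional.mse_loss(recon, x) + lam * l21_norm(model.enc.weight)
```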
    Finally, we discuss the connections between face verification and kinship verification research. We explore the area of self-kinship, which corresponds to age-invariant face recognition. Further, kinship information is used as a soft biometric modality to boost the performance of face verification via product-of-likelihood-ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification. By addressing the problem of limited samples per kinship dataset, introducing real-world variations through unconstrained databases, and designing two deep learning frameworks, this dissertation improves the understanding of kinship verification in humans and the performance of automated systems. The algorithms proposed in this research have been shown to outperform existing algorithms across six different kinship databases and, to date, achieve the best reported results in this field.
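    A hedged toy sketch of product-of-likelihood-ratio fusion, the general idea behind combining a face verification score with a kinship score used as a soft biometric. The dissertation's actual density estimates, thresholds, and SVM-based alternative are not reproduced here; the Gaussian score models and the helper names `fit_density` and `fused_likelihood_ratio` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def fit_density(genuine_scores, impostor_scores):
    # Fit simple Gaussian score densities on a labeled validation set.
    return (norm(np.mean(genuine_scores), np.std(genuine_scores)),
            norm(np.mean(impostor_scores), np.std(impostor_scores)))

def fused_likelihood_ratio(face_score, kin_score, face_dens, kin_dens):
    (g_f, i_f), (g_k, i_k) = face_dens, kin_dens
    lr_face = g_f.pdf(face_score) / i_f.pdf(face_score)
    lr_kin = g_k.pdf(kin_score) / i_k.pdf(kin_score)
    # Product rule: accept the pair if the combined ratio exceeds a
    # threshold chosen on validation data.
    return lr_face * lr_kin
```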