A multi-biometric iris recognition system based on a deep learning approach
Multimodal biometric systems have been widely
applied in many real-world applications due to their ability to
deal with a number of significant limitations of unimodal
biometric systems, including sensitivity to noise, population
coverage, intra-class variability, non-universality, and
vulnerability to spoofing. In this paper, an efficient and
real-time multimodal biometric system is proposed based
on building deep learning representations for images of
both the right and left irises of a person, and fusing the
results obtained using a ranking-level fusion method. The
proposed deep learning system, called IrisConvNet, combines
a Convolutional Neural Network (CNN) with a Softmax classifier
to extract discriminative features from the input image, which
represents the localized iris region, without any prior domain
knowledge, and then classifies it into one of N
classes. In this work, a discriminative CNN training scheme
based on a combination of back-propagation algorithm and
mini-batch AdaGrad optimization method is proposed for
weights updating and learning rate adaptation, respectively.
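The weight update just described, back-propagated gradients scaled by per-parameter AdaGrad accumulators, can be sketched as follows; the function name and hyperparameter values are illustrative and not taken from the paper:

```python
import numpy as np

def adagrad_update(w, grad, cache, lr=0.01, eps=1e-8):
    """One mini-batch AdaGrad step: accumulate squared gradients per
    parameter and scale the learning rate by their running root."""
    cache += grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

Because each parameter divides by its own accumulated gradient magnitude, frequently updated weights receive smaller effective learning rates, which is the adaptation the abstract attributes to AdaGrad.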
In addition, other training strategies (e.g., the dropout
method and data augmentation) are also proposed to evaluate
different CNN architectures. The performance of the proposed
system is tested on three public datasets collected
under different conditions: SDUMLA-HMT, CASIA-Iris-
V3 Interval and IITD iris databases. The results obtained
from the proposed system outperform other state-of-the-art
approaches (e.g., Wavelet transform, Scattering transform,
Local Binary Pattern and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases
and a recognition time of less than one second per person.
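The ranking-level fusion of the left- and right-iris results can be illustrated with a Borda-count-style rank sum, one common ranking-level fusion rule; the abstract does not specify the exact rule used, so this sketch is an assumption:

```python
def rank_fusion(ranks_left, ranks_right):
    """Fuse two per-class rank dictionaries (1 = best match) by summing
    ranks; the class with the lowest combined rank is the decision."""
    total = {c: ranks_left[c] + ranks_right[c] for c in ranks_left}
    return min(total, key=total.get)
```

For example, a class ranked first by both iris matchers gets a combined rank of 2 and wins over any class that either matcher places lower.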
Biometric interoperability across training, enrollment, and testing for face authentication
The paper focuses on personally identifiable information (PII) and interoperability. The emphasis is on quantitative performance analysis and validation for uncontrolled operational settings and image quality, and variable demographics and gallery composition. Biometric face authentication involves three distinct operational stages, those of face space learning (training), gallery enrollment, and testing (querying). The authentication tasks considered here are face identification and verification. Performance evaluation involves k-fold cross-validation using both PCA and LDA for face space representation. Our basic findings indicate that (a) training to learn the face space is less important than the quality of images during enrollment and testing; (b) exclusion of first eigenvectors in defining the face space improves performance particularly for the PCA face space and lesser quality data; (c) the size of the subject gallery affects performance; and (d) it does not make much difference if the face space is derived from biometric data coming from the same dataset source as that used for enrollment and testing. Possible solutions to enhance overall performance and cope with adversarial (impostor) behavior during mass screening include non-inductive learning settings, e.g., transduction and transfer learning, using both labeled and unlabeled examples. © 2012 IEEE
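Finding (b), excluding the first eigenvectors when defining the PCA face space, can be sketched as below; the function names and the SVD-based construction are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pca_face_space(X, n_components, drop_first=0):
    """Build a PCA face space from row-vector images X, optionally
    dropping the first few eigenvectors (often dominated by
    illumination rather than identity)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the orthonormal principal directions (eigenfaces).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[drop_first:drop_first + n_components]
    return mean, basis

def project(x, mean, basis):
    """Project a face image onto the retained eigenface basis."""
    return basis @ (x - mean)
```

Enrollment and query images are projected with the same mean and basis, so dropping leading eigenvectors changes only which directions survive, not the matching pipeline itself.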