3 research outputs found

    Deep Neural Network and Data Augmentation Methodology for off-axis iris segmentation in wearable headsets

    A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and to train a low-complexity deep neural network. Despite its low complexity, the resulting network achieves a high level of accuracy in iris region segmentation for challenging off-axis eye patches. Interestingly, this network also achieves high performance on regular, frontal iris region segmentation, comparing favorably with state-of-the-art techniques of significantly higher complexity. Owing to its lower complexity, the network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.
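    The abstract does not detail which transforms the augmentation methodology uses; the sketch below shows one plausible way to synthesize off-axis eye patches from frontal ones by warping the image and its iris mask with a rotation-induced homography. The function name, focal length, and angle parameters are illustrative assumptions, not the paper's pipeline.

```python
# Hypothetical sketch: simulate an off-axis view of a frontal eye patch by
# applying the homography H = K R K^-1 induced by a pure camera rotation.
# The ground-truth iris mask is warped with the same homography so that
# image/mask pairs stay aligned for training a segmentation network.
import numpy as np
import cv2

def simulate_off_axis(eye_patch, iris_mask, yaw_deg=30.0, pitch_deg=0.0, f=500.0):
    """Warp a frontal eye patch (and its mask) to mimic an off-axis viewpoint."""
    h, w = eye_patch.shape[:2]
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    # Rotations about the vertical (yaw) and horizontal (pitch) axes.
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [ 0,           1, 0          ],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0,             0            ],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    R = Rx @ Ry
    # Simple pinhole intrinsics centred on the patch (f is an assumed focal length).
    K = np.array([[f, 0, w / 2.0],
                  [0, f, h / 2.0],
                  [0, 0, 1.0]])
    H = K @ R @ np.linalg.inv(K)
    warped = cv2.warpPerspective(eye_patch, H, (w, h), flags=cv2.INTER_LINEAR)
    warped_mask = cv2.warpPerspective(iris_mask, H, (w, h), flags=cv2.INTER_NEAREST)
    return warped, warped_mask
```

    Sampling the yaw and pitch angles over a range and warping both the image and its ground-truth mask would yield off-axis training pairs without additional annotation effort.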

    A framework for biometric recognition using non-ideal iris and face

    Off-angle iris images are often captured in a non-cooperative environment. The distortion of the iris or pupil can degrade the segmentation quality as well as the data extracted thereafter. Moreover, an iris captured at an off-angle of more than 30° can have non-recoverable features, since the boundary cannot be properly localized; this typically limits the discriminant ability of the biometric features. Limitations also arise from noisy data caused by image burst, background error, or camera pixel noise. To address these issues, the aim of this study is to develop a framework that: (1) improves non-circular boundary localization, (2) compensates for lost features, and (3) detects and minimizes errors caused by noisy data. The non-circular boundary issue is addressed through a combination of geometric calibration and direct least-squares ellipse fitting, which geometrically restores, adjusts, and rescales the distorted circular shape for ellipse fitting. Further improvement comes from an extraction method that combines a Haar wavelet and a neural network to transform the iris features into wavelet coefficients representative of the relevant iris data. The non-recoverable features problem is resolved by a proposed Weighted Score Level Fusion that integrates face and iris biometrics; this enhancement provides additional discriminative information to increase the authentication accuracy rate. As for the noisy data issues, modified Reed-Solomon codes with error-correction capability are proposed to decrease intra-class variations by eliminating the differences between enrollment and verification templates. The key contribution of this research is a new unified framework for a high-performance multimodal biometric recognition system. The framework has been tested on the WVU, UBIRIS v.2, UTMIFM, and ORL datasets, achieving more than 99.8% accuracy and outperforming other existing methods.
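    The abstract names Weighted Score Level Fusion of iris and face scores but does not give the normalisation or weighting rule; the minimal sketch below assumes min-max normalised similarity scores combined with fixed modality weights. All function names, weights, and thresholds are hypothetical illustrations, not the thesis's values.

```python
# Hypothetical sketch of weighted score-level fusion of iris and face matchers.
import numpy as np

def min_max_normalise(scores, lo, hi):
    """Map raw matcher scores into [0, 1] using known score bounds lo/hi."""
    return np.clip((np.asarray(scores, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

def fused_score(iris_score, face_score, w_iris=0.6, w_face=0.4):
    """Weighted sum of normalised scores; the weights here are illustrative."""
    assert abs(w_iris + w_face - 1.0) < 1e-9
    return w_iris * iris_score + w_face * face_score

# Example decision: accept if the fused score clears a threshold chosen on a
# validation set (e.g. at a target false accept rate).
iris_s = min_max_normalise([0.72], lo=0.0, hi=1.0)[0]
face_s = min_max_normalise([0.55], lo=0.0, hi=1.0)[0]
accept = fused_score(iris_s, face_s) >= 0.6
```

    The appeal of fusing at the score level is that each modality can keep its own matcher; when off-angle capture makes iris features non-recoverable, the face score still contributes discriminative information to the final decision.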

    Recognition of Nonideal Iris Images Using Shape Guided Approach and Game Theory

    Most state-of-the-art iris recognition algorithms claim very high recognition accuracy in a strictly controlled environment. However, their recognition accuracy decreases significantly when the acquired images are affected by different noise factors, including motion blur, camera diffusion, head movement, gaze direction, camera angle, reflections, contrast, luminosity, eyelid and eyelash occlusions, and problems due to pupil contraction and dilation. The main objective of this thesis is to develop a nonideal iris recognition system using active contour methods, Genetic Algorithms (GAs), a shape-guided model, Adaptive Asymmetrical Support Vector Machines (AASVMs), and Game Theory (GT). The proposed iris recognition method is divided into two phases: (1) cooperative iris recognition, and (2) noncooperative iris recognition. While most state-of-the-art iris recognition algorithms have focused on the preprocessing of iris images, important new directions have recently been identified in iris biometrics research, including optimal feature selection and iris pattern classification. In the first phase, we propose an iris recognition scheme based on GAs and asymmetrical SVMs. Instead of using the whole iris region, we elicit the iris information between the collarette and the pupil boundary to suppress the effects of eyelid and eyelash occlusions and to minimize the matching error. In the second phase, we process nonideal iris images that are captured in unconstrained situations and affected by several nonideal factors. The proposed noncooperative iris recognition method is further divided into three approaches. In the first approach of the second phase, we apply active contour-based curve evolution to segment the inner and outer boundaries accurately from nonideal iris images; the proposed active contour-based approaches show reasonable performance even when the iris/sclera boundary is blurred. In the second approach, we describe a new iris segmentation scheme that uses GT to elicit the iris/pupil boundary from a nonideal iris image. We apply a parallel game-theoretic decision-making procedure, modifying Chakraborty and Duncan's algorithm to form a unified approach that is robust to noise and poor localization and is less affected by a weak iris/sclera boundary. Finally, to further improve the segmentation performance, we propose a variational model that localizes the iris region belonging to a given shape space using an active contour method, a geometric shape prior, and the Mumford-Shah functional. The verification and identification performance of the proposed scheme is validated on four challenging nonideal iris datasets, namely the ICE 2005, the UBIRIS Version 1, the CASIA Version 3 Interval, and the WVU Nonideal, plus a non-homogeneous combined dataset. Across several sets of experiments, the proposed approach achieves a Genuine Accept Rate (GAR) of 97.34% on the combined dataset at a fixed False Accept Rate (FAR) of 0.001%, with an Equal Error Rate (EER) of 0.81%. The highest Correct Recognition Rate (CRR) obtained by the proposed iris recognition system is 97.39%.
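    The thesis's evaluation code is not shown in the abstract; the sketch below illustrates how verification metrics of the kind reported above (GAR at a fixed FAR, and the EER) are conventionally computed from genuine and impostor score distributions. The function names and the quantile-based thresholding are assumptions for illustration only.

```python
# Hypothetical sketch of standard verification metrics from match scores.
import numpy as np

def gar_at_far(genuine, impostor, target_far=1e-5):
    """Genuine Accept Rate at the threshold giving the target FAR (0.001% = 1e-5)."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    # Threshold exceeded by only target_far of impostor comparisons.
    thr = np.quantile(impostor, 1.0 - target_far)
    return float(np.mean(genuine >= thr))

def equal_error_rate(genuine, impostor):
    """EER: operating point where false accept and false reject rates are equal."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    idx = int(np.argmin(np.abs(far - frr)))
    return float((far[idx] + frr[idx]) / 2.0), float(thresholds[idx])
```

    Genuine scores come from comparisons of samples of the same subject and impostor scores from cross-subject comparisons; sweeping the decision threshold over these two distributions yields the GAR/FAR trade-off and the EER.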