
    FPGA IMPLEMENTATION OF RED ALGORITHM FOR HIGH SPEED PUPIL ISOLATION

    Iris recognition is an automated method of biometric identification that applies mathematical pattern-recognition techniques to video images of an individual’s irises, whose complex random patterns are unique and can be captured from some distance. Modern iris recognition algorithms are computationally intensive, yet are designed for traditional sequential processing elements such as a personal computer. A parallel processing alternative using a Field Programmable Gate Array (FPGA) offers an opportunity to speed up iris recognition. Within this project, iris template generation with directional filtering, a computationally expensive yet parallelizable portion of a modern iris recognition algorithm, is parallelized on an FPGA system. An algorithm that is both accurate and fast, in a hardware design that is small and transportable, is crucial to the implementation of this tool. As part of an ongoing effort to meet these criteria, this method improves one stage of an iris recognition algorithm, namely pupil isolation, and achieves a significant speed-up by implementing this portion of the algorithm on an FPGA
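The abstract does not spell out the RED algorithm itself, but pupil isolation is typically based on the observation that the pupil is the darkest, roughly circular region of the eye image. A minimal software sketch of that idea (threshold, then estimate centre and radius; all values here are illustrative, not from the paper):

```python
import numpy as np

def isolate_pupil(gray, threshold=40):
    """Binarize the image and return the centroid and radius estimate
    of the darkest blob, assumed to be the pupil."""
    mask = gray < threshold            # pupil pixels are the darkest
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()      # centroid of the dark region
    r = np.sqrt(mask.sum() / np.pi)    # radius assuming a circular blob
    return cx, cy, r

# Synthetic 100x100 eye image: bright background, dark pupil of radius 10
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(xx - 50)**2 + (yy - 60)**2 <= 10**2] = 10

cx, cy, r = isolate_pupil(img)
```

On an FPGA the per-pixel threshold test and the accumulations for the centroid are exactly the kind of independent operations that parallelize well.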

    Occluded iris classification and segmentation using self-customized artificial intelligence models and iterative randomized Hough transform

    A fast and accurate iris recognition system is presented for noisy iris images, mainly those degraded by eye occlusion and specular reflection. The proposed system adopts self-customized support vector machine (SVM) and convolutional neural network (CNN) classification models, built from iris-texture GLCM and automated deep-feature datasets extracted exclusively for each subject. The image processing techniques were optimized, both in segmenting the iris region using the iterative randomized Hough transform (IRHT) and in the classification stage, where only a few significant features, selected by singular value decomposition (SVD) analysis, are used to test whether the moving-window matrix is iris or non-iris. Iris segment matching is optimized by first extracting the largest axis-parallel rectangle inscribed in the classified occluded-iris binary image; its corresponding iris region is cross-correlated with the same subject’s iris reference image to obtain the most correlated iris segments in the two eye images. Finally, the iris-code Hamming distance of the two most correlated segments is calculated to identify the subject’s unique iris pattern with high accuracy, security, and reliability
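The final matching step, the fractional Hamming distance between two binary iris codes, is standard across iris systems and can be sketched directly. Occluded bits are excluded via validity masks; the code sizes and flip counts below are illustrative:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unoccluded) in both masks."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0                        # nothing comparable
    return np.sum((code_a ^ code_b) & valid) / valid.sum()

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 2048, dtype=np.uint8)
mask = np.ones(2048, dtype=np.uint8)

same = hamming_distance(code, code, mask, mask)      # identical codes
noisy = code.copy()
noisy[:205] ^= 1                                     # flip ~10% of bits
diff = hamming_distance(code, noisy, mask, mask)
```

Distances near 0 indicate the same iris; two independent irises give distances near 0.5, which is why a threshold between the two separates genuine and impostor comparisons.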

    An Improved Iris Segmentation Technique Using Circular Hough Transform

    It is quite easy to spoof an automated iris recognition system using a fake iris such as a paper print or an artificial lens. The False Rejection Rate (FRR) and False Acceptance Rate (FAR) of a given approach can result from noise introduced in the segmentation process. Little attention has been paid to modified systems in which a more accurate segmentation process is applied to an already efficient algorithm, thereby increasing the overall reliability and accuracy of iris recognition. This work proposes an improvement to the existing wavelet packet decomposition approach to iris recognition, which has a Correct Classification Rate (CCR) of 98.375%. It involves replacing the segmentation technique used in that implementation, the integro-differential operator (John Daugman’s model), with the Hough transform (Wildes’ model). The two segmentation techniques are compared extensively to determine which better supports the wavelet packet decomposition. The integro-differential approach to segmentation achieved an accuracy of 91.39%, while the Hough transform approach achieved 93.06%. This result indicates that integrating the Hough transform into any open-source iris recognition module can offer as much as a 1.67% improvement in accuracy due to the improved preprocessing stage. The improved iris segmentation technique using the Hough transform has an overall CCR of 100%
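The circular Hough transform at the heart of Wildes-style segmentation votes, for each edge pixel, over the set of circle centres consistent with it. A self-contained sketch for a single known radius (real segmenters also sweep over radii; the geometry here is synthetic):

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Vote in an accumulator for circle centres of a fixed radius.
    Returns the (cx, cy) cell with the most votes."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (x, y) in edge_points:
        # Each edge point votes for every centre at distance `radius`
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    iy, ix = np.unravel_index(acc.argmax(), acc.shape)
    return ix, iy

# Edge points sampled from a circle of radius 20 centred at (50, 40)
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = [(50 + 20 * np.cos(a), 40 + 20 * np.sin(a)) for a in angles]
cx, cy = hough_circle(pts, radius=20, shape=(100, 100))  # recovers centre near (50, 40)
```

Votes from all edge points intersect at the true centre, which is why the method tolerates partial occlusion of the iris boundary better than operators that need a full contour.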

    Combining multiple Iris matchers using advanced fusion techniques to enhance Iris matching performance

    M.Phil. (Electrical and Electronic Engineering). The enormous increase in technological advancement and the need to secure information effectively have led to the development and implementation of iris image acquisition technologies for automated iris recognition systems. The iris biometric is gaining popularity and is becoming a reliable and robust modality for future biometric security. Its applications extend to biometric security areas such as national ID cards, banking systems such as ATMs, e-commerce, and biometric passports, though not to forensic investigations. Iris recognition has gained valuable attention in biometric research due to the uniqueness of its textures and its high recognition rates in high-security biometric applications. Identity verification for individuals becomes a challenging task when it has to be automated with high accuracy and robustness against spoofing attacks and repudiation. Current recognition systems are highly affected by noise resulting from segmentation failure, and these noise factors increase biometric error rates such as the FAR and the FRR. This dissertation reports an investigation of score-level fusion methods that can enhance iris matching performance. The fusion methods implemented in this project include the simple sum rule, weighted sum rule fusion, minimum score, and an adaptive weighted sum rule. The proposed approach uses adaptive fusion that maps feature quality scores to the matcher. The fused scores were generated from four different iris matchers, namely the NHD matcher, the WED matcher, the WHD matcher and the POC matcher. To ensure homogeneity of matching scores before fusion, raw scores were normalized using the tanh-estimators method, which is efficient and robust against outliers. The results were tested against two publicly available databases, CASIA and UBIRIS, using two statistical and biometric system measurements, namely the AUC and the EER.
The results of these two measures give an AUC of 99.36% for CASIA left images, 99.18% for CASIA right images, and 99.59% for the UBIRIS database, and an Equal Error Rate (EER) of 0.041 for CASIA left images, 0.087 for CASIA right images, and 0.038 for UBIRIS images
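The two core operations named in the abstract, tanh-estimator normalization followed by weighted-sum fusion, can be sketched compactly. Note that the full tanh-estimators method derives its location and scale from Hampel estimators of the genuine-score distribution; plain mean and standard deviation are substituted here for brevity, and the matcher scores are made up:

```python
import numpy as np

def tanh_normalize(scores, mu, sigma):
    """tanh-estimator normalization: maps raw scores into (0, 1) and is
    robust against outliers. mu/sigma would normally come from Hampel
    estimators of the genuine scores; mean/std is a simplification."""
    return 0.5 * (np.tanh(0.01 * (scores - mu) / sigma) + 1.0)

def weighted_sum_fusion(score_lists, weights):
    """Fuse normalized scores from several matchers by weighted sum."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # weights normalized to sum to 1
    return sum(w * s for w, s in zip(weights, score_lists))

# Scores from two hypothetical matchers for three probe images
m1 = np.array([0.12, 0.45, 0.38])
m2 = np.array([0.20, 0.52, 0.31])
n1 = tanh_normalize(m1, m1.mean(), m1.std())
n2 = tanh_normalize(m2, m2.mean(), m2.std())
fused = weighted_sum_fusion([n1, n2], weights=[0.6, 0.4])
```

Normalizing first matters because raw NHD, WED, WHD and POC scores live on different ranges; fusing them unnormalized would let one matcher dominate the sum.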

    Biometrics-as-a-Service: A Framework to Promote Innovative Biometric Recognition in the Cloud

    Biometric recognition, or simply biometrics, is the use of biological attributes such as face, fingerprints or iris in order to recognize an individual in an automated manner. A key application of biometrics is authentication; i.e., using said biological attributes to provide access by verifying the claimed identity of an individual. This paper presents a framework for Biometrics-as-a-Service (BaaS) that performs biometric matching operations in the cloud, while relying on simple and ubiquitous consumer devices such as smartphones. Further, the framework promotes innovation by providing interfaces for a plurality of software developers to upload their matching algorithms to the cloud. When a biometric authentication request is submitted, the system uses a criterion to automatically select an appropriate matching algorithm. Every time a particular algorithm is selected, the corresponding developer is rendered a micropayment. This creates an innovative and competitive ecosystem that benefits both software developers and consumers. As a case study, we have implemented the following: (a) an ocular recognition system using a mobile web interface providing user access to a biometric authentication service, and (b) a Linux-based virtual machine environment used by software developers for algorithm development and submission
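The request flow the paper describes (register matchers, select one per request, credit the developer a micropayment) can be sketched as a toy broker. All names and the selection rule here are illustrative, not the paper's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class BaaSBroker:
    """Toy sketch of the cloud-side broker: developers register matchers,
    the broker picks one per authentication request and credits a
    micropayment to that matcher's developer."""
    matchers: dict = field(default_factory=dict)   # name -> (developer, fn)
    ledger: dict = field(default_factory=dict)     # developer -> credits

    def register(self, name, developer, match_fn):
        self.matchers[name] = (developer, match_fn)

    def authenticate(self, probe, gallery,
                     select=lambda names: sorted(names)[0]):
        name = select(self.matchers)               # selection criterion
        developer, fn = self.matchers[name]
        self.ledger[developer] = self.ledger.get(developer, 0) + 1
        return fn(probe, gallery)

broker = BaaSBroker()
# A stand-in matcher; a real one would compare iris codes or ocular features
broker.register("hamming", "dev_a", lambda p, g: p == g)
ok = broker.authenticate("iris_code_42", "iris_code_42")
```

The per-selection ledger update is what makes algorithm quality directly monetizable: a matcher that the selection criterion favors earns more micropayments.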

    MULTISCALE EDGE DETECTION USING WAVELET MAXIMA FOR IRIS LOCALIZATION

    Automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications and is regarded as the most reliable and accurate biometric identification system available. Common problems include variations in lighting, poor image quality, and noise and interference caused by eyelashes, while the feature extraction and classification steps rely heavily on the rich textural details of the iris to provide a unique digital signature for an individual. As a result, the stability and integrity of a system depend on effective localization of the iris to generate the iris-code. A new localization method is presented in this paper to address these problems. Multiscale edge detection using wavelet maxima is employed as a preprocessing technique that detects a precise and effective edge for localization and greatly reduces the search space for the Hough transform, thus improving the overall performance. A linear Hough transform is used to isolate the eyelids, and adaptive thresholding to isolate the eyelashes. A large number of experiments on the CASIA iris database demonstrate the validity and effectiveness of the proposed approach
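The wavelet-maxima idea is that true edges produce gradient maxima that persist across smoothing scales, while noise does not. A simplified 1-D sketch, substituting a box filter for the wavelet's smoothing kernel (the paper's actual filter bank is not specified here):

```python
import numpy as np

def smooth(signal, scale):
    """Box-smooth a 1-D signal; a stand-in for the wavelet scaling filter."""
    kernel = np.ones(scale) / scale
    return np.convolve(signal, kernel, mode="same")

def multiscale_maxima(signal, scales=(2, 4, 8)):
    """Edges are where the modulus of the smoothed signal's derivative
    peaks; true edges persist as maxima across all scales."""
    return [int(np.abs(np.gradient(smooth(signal, s))).argmax())
            for s in scales]

# A step edge at index 50: every scale should localize it nearby,
# and only positions where maxima agree across scales are kept as edges
row = np.concatenate([np.zeros(50), np.ones(50)])
edges = multiscale_maxima(row)
```

Restricting the circular Hough transform to these persistent maxima is what shrinks its search space: votes are cast only from pixels that survive at every scale.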

    Development of Robust Iris Localization and Impairment Pruning Schemes

    The iris is the sphincter with a flowery pattern around the pupil in the eye region. The high randomness of this pattern makes the iris unique to each individual, and scientists have identified the iris as a candidate for automated machine recognition of a person's identity. The morphogenesis of the iris is completed while the baby is in the mother's womb; hence the iris pattern does not change throughout a person's life, making the iris one of the most reliable biometric traits. Localization of the iris is the first step in an iris biometric recognition system. Matching performance depends on the accuracy of localization, because mislocalization would cause subsequent phases of the biometric system to malfunction. The first part of the thesis investigates the choke points of existing localization approaches and proposes a method of deriving an adaptive binarization threshold for pupil detection. The thesis also modifies the conventional integro-differential operator for iris detection, proposing a version that uses a Canny edge map. The other part of the thesis examines the pros and cons of conventional global and local feature matching techniques for the iris. A review of related work on matching techniques shows that local features such as the Scale Invariant Feature Transform (SIFT) give satisfactory recognition accuracy for good-quality images, but performance degrades when the images are occluded or captured non-cooperatively. Because SIFT matches keypoints on the basis of 128-D local descriptors, it sometimes falsely pairs keypoints from different portions of two iris images, creating the need to filter or prune faulty SIFT pairs. The thesis proposes two methods of filtering impairments (faulty pairs) based on the spatial information of the keypoints.
The two proposed pruning algorithms (Angular Filtering and Scale Filtering) are applied separately and in combination to provide a complete comparative analysis of the matching results
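The pruning idea can be sketched generically: genuine keypoint pairs between two images of the same iris share a consistent scale ratio and orientation difference, so pairs that are outliers on either quantity are likely impairments. The median-consensus rule and tolerances below are illustrative, not the thesis's exact formulation:

```python
import numpy as np

def prune_matches(matches, scale_tol=0.25, angle_tol=20.0):
    """Keep only keypoint pairs whose scale ratio and orientation
    difference agree with the medians over all pairs. Faulty pairs
    linking unrelated iris regions tend to be outliers on both.
    Each match is (scale_a, scale_b, angle_a, angle_b)."""
    ratios = np.array([a_s / b_s for (a_s, b_s, _, _) in matches])
    dangles = np.array([(a_o - b_o) % 360 for (_, _, a_o, b_o) in matches])
    med_r, med_a = np.median(ratios), np.median(dangles)
    return [m for m, r, d in zip(matches, ratios, dangles)
            if abs(r - med_r) <= scale_tol and abs(d - med_a) <= angle_tol]

# Four consistent pairs plus one impairment with a wild scale and angle
matches = [(2.0, 2.0, 10, 5), (1.9, 2.0, 12, 6), (2.1, 2.0, 11, 4),
           (2.0, 1.9, 9, 5), (6.0, 1.0, 200, 5)]
good = prune_matches(matches)
```

The scale check and the angle check correspond to Scale Filtering and Angular Filtering respectively; applying both in union, as the thesis does, removes pairs that either test flags.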

    Micro scalar patterning for printing ultra fine solid lines in flexographic printing process

    This research studies ultra-fine solid-line printing using a Micro-flexographic machine, which combines flexography with the micro-contact printing technique. Flexography is a well-known, high-speed roll-to-roll printing technique that can create graphics and electronic devices on a variety of substrates. Micro-contact printing is a low-cost technique usually used for micro- to nano-scale images, especially fine solid-line structures. Graphene is a nanomaterial that can be used as a printing ink, typically in producing micro- to nano-scale electronic devices, and lanthanum is a rare-earth metal with potential in the printing industry. The combination of the two printing techniques, known as Micro-flexographic printing, has successfully produced the smallest fine solid-line width and gap: the new technique can print fine solid lines below 10 μm on biaxially oriented polypropylene (BOPP) substrate using graphene as the printing ink, and has successfully printed fine solid lines 2.6 μm wide. This study also elaborates on the imprint lithography process for achieving micro- down to nano-scale fine solid-line structures below 10 μm. In addition, the lanthanum target was successfully printed on a variety of substrates with good surface adhesion. This research demonstrates ultra-fine solid-line printing capability for applications in printed electronics, graphics and biomedicine