8 research outputs found
A Fast and Accurate Iris Localization Technique for Healthcare Security System
In healthcare systems, a high security level is required to protect extremely sensitive patient records. The goal is to provide secure access to the right records at the right time with high patient privacy. As the most accurate biometric system, iris recognition can play a significant role in healthcare applications for accurate patient identification. This paper addresses the cornerstone of building a fast and robust iris recognition system for healthcare applications: iris localization. Iris localization is an essential step for efficient iris recognition systems. The presence of extraneous features such as eyelashes, eyelids, the pupil, and reflection spots makes correct iris localization challenging. In this paper, an efficient and automatic method is presented for localizing the inner and outer iris boundaries. The inner (pupil) boundary is detected after eliminating specular reflections using a combination of thresholding and morphological operations. Then, the outer iris boundary is detected using a modified Circular Hough transform. An efficient preprocessing procedure is proposed to enhance the iris boundary by applying a 2D Gaussian filter and histogram equalization. In addition, the pupil's parameters (e.g., radius and center coordinates) are employed to reduce the search time of the Hough transform by discarding unnecessary edge points within the iris region. Finally, a robust and fast eyelid detection algorithm is developed which employs an anisotropic diffusion filter with the Radon transform to fit the upper and lower eyelid boundaries. The performance of the proposed method is tested on two databases: CASIA Version 1.0 and the SDUMLA-HMT iris database. The experimental results demonstrate the efficiency of the proposed method. Moreover, a comparative study with other established methods is also carried out.
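The thresholding-plus-morphology step for pupil detection described in the abstract can be sketched in miniature. This is not the paper's implementation: the function names, the 3x3 structuring element, the threshold value, and the tiny synthetic image in the usage below are all illustrative assumptions.

```python
# Hypothetical sketch: locate a dark pupil blob by intensity thresholding,
# clean it with a morphological opening (erosion then dilation), and
# estimate the pupil's centre and an equivalent radius from the blob.
import math

def threshold(img, t):
    """Binary mask: 1 where the pixel is darker than t (pupil candidate)."""
    return [[1 if v < t else 0 for v in row] for row in img]

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(0 <= y + dy < h and 0 <= x + dx < w
                                and mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def pupil_params(img, t=60):
    """Opening removes small specular spots; the surviving blob's centroid
    and equivalent radius (area = pi * r^2) approximate the pupil."""
    mask = dilate(erode(threshold(img, t)))
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return cx, cy, math.sqrt(len(pts) / math.pi)
```

In the paper's pipeline the recovered centre and radius would then bound the Circular Hough transform's search for the outer boundary; here they are simply returned.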
The impact of collarette region-based convolutional neural network for iris recognition
Iris recognition is a biometric technique that reliably and quickly recognizes a person by their iris, based on unique biological characteristics. The iris has an exceptional structure and provides very rich feature spaces, such as freckles, stripes, coronas, and the zigzag collarette area; these features explain the growing interest in it for biometric recognition. This paper proposes an improved iris recognition method for person identification based on Convolutional Neural Networks (CNN), with an improved recognition rate driven by a contribution on recognition of the zigzag collarette area, the region surrounding the pupil. Our work is in the field of biometrics, especially iris recognition; the recognition rate using the full circle of the zigzag collarette was compared with the rate obtained using only the lower semicircle of the zigzag collarette. Classification of the collarette is based on the AlexNet model to learn this feature; the combination of collarette and CNN allows a noiseless and more targeted characterization, as well as automatic extraction of the lower semicircle of the collarette region. Finally, an SVM model is trained for classification using grayscale eye images taken from the CASIA-Iris-V4 database. The experimental results show that our contribution is the most accurate, because the CNN can effectively extract image features with higher classification accuracy, and because our new method, which uses the lower semicircle of the collarette region, achieved the highest recognition accuracy compared with earlier methods that use the full circle of the collarette region.
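The geometric selection the abstract describes, keeping only the lower semicircle of the annular collarette band around the pupil, can be sketched as a mask. The centre, radii, and image size are invented example values; the paper's actual CNN-based extraction is not reproduced here.

```python
# Hypothetical sketch: build a binary mask selecting the lower half of an
# annulus around the pupil centre (the "lower semicircle of the collarette").
def lower_collarette_mask(width, height, cx, cy, r_pupil, r_coll):
    """1 where the pixel lies in the annulus r_pupil < d <= r_coll and
    below the horizontal line through the centre (image y grows downwards,
    so 'lower' means y > cy)."""
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if r_pupil ** 2 < d2 <= r_coll ** 2 and y > cy:
                mask[y][x] = 1
    return mask
```

The masked pixels would then be cropped and fed to the CNN; the mask itself is the only part sketched here.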
Personal Authentication System Based Iris Recognition with Digital Signature Technology
Authentication based on biometrics is used to prevent physical access to high-security institutions. Recently, due to the rapid rise of information system technologies, biometrics are also being used in applications for accessing databases and commercial workflow systems. These applications need to implement measures to counter security threats, and many developers are exploring novel authentication techniques to prevent such attacks. However, the most difficult problem is how to protect biometric data while maintaining the practical performance of identity verification systems. This paper presents a biometrics-based personal authentication system in which a smart card, a Public Key Infrastructure (PKI), and iris verification technologies are combined. A Raspberry Pi 4 Model B+ with an IR camera is used as the core hardware component. Following that idea, an image processing pipeline for feature extraction and recognition was implemented in Python using the OpenCV, Keras, and scikit-learn libraries. The implemented system gives an accuracy of 97% and 100% for the left and right (NTU) iris datasets, respectively, after training. Person verification based on the iris feature is then performed to verify the claimed identity and examine the system's authentication. The times for key generation, signature, and verification are 5.17 s, 0.288 s, and 0.056 s, respectively, for the NTU iris dataset. This work offers a realistic architecture for implementing identity-based cryptography with biometrics using the RSA algorithm.
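The sign-then-verify flow behind the reported signature and verification timings can be illustrated with textbook RSA. The abstract does not specify key sizes, padding, or libraries, so everything below is a deliberately insecure toy stand-in (tiny primes, no padding), not the paper's implementation.

```python
# Hypothetical textbook-RSA sketch: hash a biometric payload, sign the
# digest with the private exponent, verify with the public exponent.
import hashlib

# Toy key pair: n = p*q with p=61, q=53; public exponent e=17;
# private exponent d = e^-1 mod (p-1)(q-1).
P, Q, E = 61, 53, 17
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))  # modular inverse (Python 3.8+)

def sign(message: bytes) -> int:
    """Hash the message, then raise the digest to the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(h, D, N)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public exponent and compare."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == h
```

A real deployment would use a vetted library with proper key sizes and padding (e.g. RSA-PSS); this sketch only shows where the hash and the two exponentiations sit in the flow.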
Evaluation of the Parameters Involved in the Iris Recognition System
Biometric recognition is an automatic identification method based on unique features or characteristics possessed by human beings, and iris recognition has proved to be one of the most reliable biometric methods available, owing to the accuracy provided by the iris's unique epigenetic patterns. The main steps in any iris recognition system are image acquisition, iris segmentation, iris normalization, feature extraction, and feature matching. The EER (Equal Error Rate) is considered the best metric for evaluating an iris recognition system. In this paper, different parameters have been thoroughly tested and evaluated over the CASIA-IrisV1 database to obtain an improved parameter set: the scaling factor used to speed up the CHT (Circle Hough Transform), the sigma for Gaussian blurring during edge detection, the radius for weak-edge suppression in the edge detector used during segmentation, the gamma correction factor, the central wavelength for convolution with the Log-Gabor filter, and the ratio of sigma to central frequency during feature extraction. This paper demonstrates how these parameters should be set to obtain an optimized iris recognition system.
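The EER metric used for evaluation above can be computed by sweeping a decision threshold over genuine and impostor score lists and taking the operating point where the false accept and false reject rates cross. The score lists in the test are fabricated examples; higher scores mean a better match in this sketch.

```python
# Sketch of Equal Error Rate computation over two score distributions.
def equal_error_rate(genuine, impostor):
    """Approximate EER: the rate at the threshold where FAR ~= FRR."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With perfectly separated scores the EER is 0; overlapping distributions push it up, which is why it summarizes a whole ROC curve in one number.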
A multi-biometric iris recognition system based on a deep learning approach
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient, real-time multimodal biometric system is proposed based on building deep learning representations for images of both the right and left irises of a person, and fusing the results obtained using a ranking-level fusion method. The trained deep learning system is called IrisConvNet; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from the input image, which represents the localized iris region, without any domain knowledge, and then classify it into one of N classes. In this work, a discriminative CNN training scheme based on a combination of the back-propagation algorithm and the mini-batch AdaGrad optimization method is proposed for weight updating and learning rate adaptation, respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are also employed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval, and IITD iris databases. The results obtained from the proposed system outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern, and PCA) by achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
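The per-parameter learning-rate adaptation that AdaGrad provides in the training scheme above can be shown on a toy problem. The actual IrisConvNet training code is not given in the abstract, so the loss, learning rate, and step count below are illustrative assumptions, not the paper's hyper-parameters.

```python
# Minimal AdaGrad sketch: accumulate squared gradients per parameter and
# divide each update by the square root of that running sum, so frequently
# large gradient directions get smaller effective learning rates.
import math

def adagrad_step(w, grad, cache, lr=0.5, eps=1e-8):
    """One AdaGrad update over a parameter vector."""
    new_cache = [c + g * g for c, g in zip(cache, grad)]
    new_w = [wi - lr * g / (math.sqrt(c) + eps)
             for wi, g, c in zip(w, grad, new_cache)]
    return new_w, new_cache

# Toy quadratic loss f(w) = w0^2 + 10*w1^2, gradient [2*w0, 20*w1]:
# the poorly scaled second coordinate is exactly where the per-parameter
# normalization helps.
w, cache = [3.0, 2.0], [0.0, 0.0]
for _ in range(200):
    grad = [2 * w[0], 20 * w[1]]
    w, cache = adagrad_step(w, grad, cache)
```

In the paper this rule would update CNN weights from back-propagated gradients over mini-batches; the toy loss stands in for that here.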
Novel medical imaging technologies for processing epithelium and endothelium layers in corneal confocal images. Developing automated segmentation and quantification algorithms for processing sub-basal epithelium nerves and endothelial cells for early diagnosis of diabetic neuropathy in corneal confocal microscope images
Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. An accurate analysis of the corneal epithelium nerve structures and the corneal endothelial cells can assist early diagnosis of this disease and other corneal diseases, which can lead to visual impairment and ultimately to blindness. In this thesis, fully automated segmentation and quantification algorithms for processing and analysing sub-basal epithelium nerves and endothelial cells are proposed for early diagnosis of diabetic neuropathy in Corneal Confocal Microscopy (CCM) images. Firstly, a fully automatic nerve segmentation system for corneal confocal microscope images is proposed. The performance of the proposed system is evaluated against manually traced images, with a prototype execution time of 13 seconds. Secondly, an automatic corneal nerve registration system is proposed; its main aim is to produce a new, informative corneal image that contains both structural and functional information. Thirdly, an automated real-time system, termed the Corneal Endothelium Analysis System (CEAS), is developed and applied to the segmentation of endothelial cells in images of the human cornea obtained by in vivo CCM. The performance of the proposed CEAS system was tested against manually traced images, with an execution time of only 6 seconds per image. Finally, the results obtained from all the proposed approaches have been evaluated and validated by an expert advisory board from two institutes: the Division of Medicine, Weill Cornell Medicine-Qatar, Doha, Qatar, and the Manchester Royal Eye Hospital, Centre for Endocrinology and Diabetes, UK.
Bioelectrical User Authentication
There has been tremendous growth in mobile devices, including mobile phones, tablets, etc., in recent years. The use of mobile phones is increasingly prevalent due to their growing functionality and capacity. Most mobile phones available now are smartphones with good processing capability, hence their deployment for processing large volumes of information. The information contained in these smartphones needs to be protected against unauthorised persons getting hold of personal data. To verify a legitimate user before granting access to the phone's information, the user authentication mechanism should be robust enough to meet present security challenges. The present approach to user authentication is cumbersome and fails to consider the human factor: the point-of-entry mechanism is intrusive and forces users to authenticate every time, irrespective of the time interval. The use of biometrics has been identified as a more reliable method for implementing transparent and non-intrusive user authentication. Transparent authentication using biometrics provides the opportunity for more convenient and secure authentication than secret-knowledge or token-based approaches. The ability to apply biometrics in a transparent manner improves authentication security by providing a reliable way to authenticate smartphone users. As such, research is required to investigate new modalities that would easily operate within the constraints of a continuous and transparent authentication system. This thesis explores the use of bioelectrical signals and contextual information for a non-intrusive approach to authenticating the user of a mobile device. From the fusion of bioelectrical signals and context-awareness information, three algorithms were created to discriminate subjects, with overall Equal Error Rates (EER) of 3.4%, 2.04%, and 0.27%, respectively.
Based on the analysis of the multi-algorithm implementation, a novel architecture is proposed that uses a multi-algorithm biometric authentication system to authenticate the user of a smartphone. The framework is designed to be continuous and transparent, with the application of advanced intelligence to further improve the authentication result. The proposed framework removes the inconvenience of memorising passwords or passphrases, carrying a token, or capturing a biometric sample in an intrusive manner. The framework is evaluated through simulation with the application of a voting scheme. The simulation of the voting scheme using majority voting improved the performance of the combined algorithm (security level 2) to an FRR of 22% and FAR of 0%, the Active algorithm (security level 2) to an FRR of 14.33% and FAR of 0%, and the Non-active algorithm (security level 3) to an FRR of 10.33% and FAR of 0%.
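The majority-voting fusion used in the simulation above can be sketched in a few lines: each algorithm emits an accept/reject decision and the fused decision is the strict majority. The tie-handling choice here (reject on a tie) is an assumption consistent with the reported FAR of 0%, not a detail taken from the thesis.

```python
# Sketch of majority-vote decision fusion across several matchers.
from collections import Counter

def majority_vote(decisions):
    """Fuse boolean accept decisions; require a strict majority of True.
    Ties therefore reject (fail-closed), which pushes FAR towards 0 at
    the cost of a higher FRR."""
    votes = Counter(decisions)
    return votes[True] > len(decisions) / 2
```

This fail-closed behaviour mirrors the reported trade-off: zero false accepts, with the false reject rate absorbing the cost.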
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on the combination of the face and the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the fractal dimension. Secondly, a novel framework based on merging the advantages of local handcrafted feature descriptors with deep learning approaches is proposed, the Multimodal Deep Face Recognition (MDFR) framework, to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system is employed, termed IrisConvNet, whose architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and the SDUMLA-HMT multimodal dataset. The results obtained have demonstrated the superiority of the proposed systems compared to previous works, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity.
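Rank-level fusion of the face and the two iris matchers, as the thesis describes, can be illustrated with a Borda count, one common rank-level scheme. The thesis's exact fusion rule is not given in this abstract, and the candidate labels and rankings in the test are invented examples.

```python
# Illustrative Borda-count rank-level fusion: each matcher returns a
# best-first ranking of candidate identities; a candidate earns
# (n - position) points per list, and the fused order is by total score.
def borda_fusion(rankings):
    """Fuse several best-first candidate rankings into one ranking."""
    n = max(len(r) for r in rankings)
    scores = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

The fused Rank-1 identity is simply the first element of the returned list, which is the quantity the reported Rank-1 identification rates measure.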