Multi-modal palm-print and hand-vein biometric recognition at sensor level fusion
When authenticating a person from biometric traits, most systems rely on a single modality (e.g. fingerprint or palm print) and perform any fusion at higher levels. Rather than fusing at those higher levels, this research recommends combining two biometric features at the sensor level. Because the data acquired from the images is sampled at varying spacings, a Log-Gabor filter is used to extract features and thereby recognize the pattern. Using the two fused modalities, the proposed system attained greater accuracy. Principal component analysis (PCA) was applied to reduce the dimensionality of the data. To obtain the best performance, fusion at the sensor level was evaluated with different classifiers, including K-nearest neighbors (K-NN) and support vector machines (SVMs). The system acquires palm prints and palm veins from sensors and combines them into consolidated images that occupy less disk space; the memory required to store such images is reduced, and the amount of memory is determined by the number of modalities fused.
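The pipeline described above (sensor-level fusion of the two modality images, PCA for dimensionality reduction, then a nearest-neighbor classifier) can be sketched roughly as follows. The pixel-averaging fusion rule, the component count, and all parameters here are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def sensor_level_fuse(palm, vein):
    """Combine a palm-print image and a vein image into one consolidated
    image (here by pixel averaging), halving the storage per sample."""
    return (palm.astype(float) + vein.astype(float)) / 2.0

def pca_fit(X, k):
    """Fit PCA via SVD of the mean-centered data; return mean and top-k axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, components):
    """Project samples onto the principal components."""
    return (X - mu) @ components.T

def knn_predict(train_feats, train_labels, query, k=1):
    """Classify a query feature vector by majority vote of its k neighbors."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    idx = np.argsort(dists)[:k]
    vals, counts = np.unique(train_labels[idx], return_counts=True)
    return vals[np.argmax(counts)]
```

An SVM could be substituted for `knn_predict` without changing the fusion or PCA stages, which is the comparison the abstract describes.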
A framework for biometric recognition using non-ideal iris and face
Off-angle iris images are often captured in non-cooperative environments. Distortion of the iris or pupil can degrade segmentation quality as well as the data extracted thereafter. Moreover, an iris captured at an off-angle of more than 30° can have non-recoverable features, since the boundary cannot be properly localized; this limits the discriminative ability of the biometric features. Further limitations come from noisy data arising from image burst, background error, or camera pixel noise. To address these issues, this study develops a framework that: (1) improves non-circular boundary localization, (2) recovers lost features, and (3) detects and minimizes errors caused by noisy data. The non-circular boundary issue is addressed through a combination of geometric calibration and direct least-squares ellipse fitting, which geometrically restores, adjusts, and scales the distorted circular shape to an ellipse. Further improvement comes from an extraction method that combines a Haar wavelet with a neural network to transform the iris features into wavelet coefficients representative of the relevant iris data. The non-recoverable features problem is resolved by a proposed weighted score-level fusion that integrates face and iris biometrics; this enhancement provides extra distinctive information to increase the authentication accuracy rate. For the noisy data issue, a modified Reed-Solomon code with error-correction capability is proposed to decrease intra-class variation by eliminating the differences between enrollment and verification templates. The key contribution of this research is a new unified framework for a high-performance multimodal biometric recognition system. The framework has been tested on the WVU, UBIRIS v.2, UTMIFM, and ORL datasets, achieving more than 99.8% accuracy compared to other existing methods.
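The weighted score-level fusion step that integrates the face and iris matchers can be sketched as follows. The min-max normalization and the example weights (0.4 face, 0.6 iris) are assumptions for illustration, not the thesis's tuned values:

```python
def minmax_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_score_fusion(face_scores, iris_scores, w_face=0.4, w_iris=0.6):
    """Fuse per-candidate match scores from two matchers by a weighted sum;
    the candidate with the highest fused score is accepted."""
    face = minmax_normalize(face_scores)
    iris = minmax_normalize(iris_scores)
    return [w_face * f + w_iris * i for f, i in zip(face, iris)]
```

Giving the iris the larger weight reflects the abstract's premise that the face supplies supplementary information when iris features are non-recoverable; in practice the weights would be chosen on a validation set.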
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which merges the advantages of the Curvelet transform with the fractal dimension. Secondly, a novel framework, the Multimodal Deep Face Recognition (MDFR) framework, is proposed, merging the advantages of local handcrafted feature descriptors with deep learning approaches to address face recognition in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
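The Softmax stage of IrisConvNet can be illustrated in isolation: it turns the network's raw per-identity scores (logits) into a probability distribution, and the identity with the highest probability is reported. This is a generic, numerically stable softmax sketch, not the thesis's trained network:

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    m = max(logits)                          # subtract the max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_identity(logits):
    """Return the index of the most probable identity."""
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__)
```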
Finally, the performance of the unimodal and multimodal systems was evaluated through extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and on the SDUMLA-HMT multimodal dataset. The results demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity.
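Rank-level fusion of the three matchers (face, left iris, right iris), as mentioned in the abstract, can be sketched with a simple Borda count; the thesis may well use a different rank-fusion rule, so treat this as a generic illustration:

```python
def borda_fusion(rankings):
    """Combine several ranked candidate lists (best candidate first) via
    Borda count: each matcher awards n points to its top candidate, n-1 to
    the next, and so on; candidates are re-ranked by total points."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, if the face matcher ranks the gallery identities one way and the two iris matchers another, the fused ranking favors the identity that is consistently placed near the top across all three.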
Higher Committee for Education Development in Ira