8 research outputs found
A multimodal deep learning framework using local feature representations for face recognition
The most recent face recognition systems are
mainly dependent on feature representations obtained using
either local handcrafted descriptors, such as local binary patterns
(LBP), or use a deep learning approach, such as deep
belief network (DBN). However, the former usually suffers
from the wide variations in face images, while the latter
usually discards the local facial features, which are proven
to be important for face recognition. In this paper, a novel
framework based on merging the advantages of the local
handcrafted feature descriptors with the DBN is proposed to
address the face recognition problem in unconstrained conditions.
Firstly, a novel multimodal local feature extraction
approach, based on merging the advantages of the Curvelet
transform with the Fractal dimension, is proposed and termed
the Curvelet–Fractal approach. The main motivation of this
approach is that the Curvelet transform, a new anisotropic and
multidirectional transform, can efficiently represent the main
structure of the face (e.g., edges and curves), while the Fractal
dimension is one of the most powerful texture descriptors
for face images. Secondly, a novel framework is proposed,
termed the multimodal deep face recognition (MDFR) framework,
to learn additional feature representations by training a DBN on top
of the local feature representations instead of the pixel-intensity
representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary
to those acquired by the Curvelet–Fractal approach.
Finally, the performance of the proposed approaches has
been evaluated by conducting a number of extensive experiments
on four large-scale face datasets: the SDUMLA-HMT,
FERET, CAS-PEAL-R1, and LFW databases. The results
obtained show that the proposed approaches outperform other
state-of-the-art approaches (e.g., LBP, DBN, WPCA),
achieving new state-of-the-art results on all the employed
datasets.
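As an illustrative sketch of the central idea (a deep model trained on local descriptors rather than raw pixels), the following snippet pairs uniform-LBP histograms with scikit-learn's BernoulliRBM. The LBP descriptor and the single RBM layer are simple stand-ins for the paper's Curvelet-Fractal features and full DBN; all parameter choices are assumptions, not the authors' implementation.

```python
# Sketch: train an RBM-based model on local descriptors instead of raw pixels
# (stand-in for the paper's DBN-on-Curvelet-Fractal idea).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform-LBP histogram as a simple local texture descriptor."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()  # normalise to [0, 1] for the Bernoulli RBM

def train_identifier(X, y):
    """X: (n_faces, H, W) grayscale crops; y: (n_faces,) identity labels."""
    features = np.stack([lbp_histogram(img) for img in X])
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    return model.fit(features, y)
```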
Personal Authentication System Based Iris Recognition with Digital Signature Technology
Authentication based on biometrics is used to prevent physical access to high-security institutions. Recently, owing to the rapid rise of information system technologies, biometrics are also being used in applications for accessing databases and commercial workflow systems. These applications need to implement measures to counter security threats, and many developers are exploring novel authentication techniques to prevent such attacks. However, the most difficult problem is how to protect biometric data while maintaining the practical performance of identity verification systems. This paper presents a biometrics-based personal authentication system in which a smart card, a Public Key Infrastructure (PKI), and iris verification technologies are combined. A Raspberry Pi 4 Model B+ with an IR camera is used as the core of the hardware. On top of this, an image processing pipeline for feature extraction and recognition was designed and implemented with the OpenCV/Python, Keras, and scikit-learn libraries. After training, the implemented system achieves accuracies of 97% and 100% on the left and right (NTU) iris datasets, respectively. Person verification based on the iris features is then performed to verify the claimed identity and examine the system's authentication. Key generation, signing, and verification take 5.17 s, 0.288 s, and 0.056 s, respectively, on the NTU iris dataset. This work offers a realistic architecture for implementing identity-based cryptography with biometrics using the RSA algorithm.
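A minimal sketch of the digital signature side of such a system is given below, using Python's cryptography library to RSA-sign and verify a serialized iris template. The template bytes and key parameters are placeholders, not the paper's implementation.

```python
# Sketch: RSA-sign an iris feature template and verify it, as in a
# PKI-backed biometric authentication flow (illustrative only).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Enrolment: generate the user's key pair (held on the smart card in the paper).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The extracted iris feature vector, serialised to bytes (placeholder values).
iris_template = bytes([12, 200, 37, 89, 145, 3, 250, 66])

# Sign the template so a verifier can check its integrity and origin.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(iris_template, pss, hashes.SHA256())

# Verification raises InvalidSignature if template or signature was tampered with.
try:
    public_key.verify(signature, iris_template, pss, hashes.SHA256())
    print("identity verified")
except InvalidSignature:
    print("verification failed")
```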
Deep Regularized Discriminative Network
The traditional linear discriminant analysis (LDA) approach discards eigenvalues that are very small or equal to zero, yet the eigenvectors corresponding to zero eigenvalues are often important dimensions for discriminant analysis. We propose an objective function that utilizes both the principal and the null-space eigenvalues and simultaneously inherits the class-separability information onto its latent-space representation. The idea is to build a convolutional neural network (CNN), perform the regularized discriminant analysis on top of it, and train the whole model in an end-to-end fashion. Backpropagation is performed with a suitable optimizer to update the parameters so that the whole CNN minimizes the within-class variance and maximizes the total variance, which is suitable for both multi-class and binary classification problems. Experimental results on four databases for multiple computer vision classification tasks show the efficacy of our proposed approach compared to other popular methods.
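A minimal PyTorch sketch of such an objective, assuming a batch of CNN embeddings z and integer labels y, is shown below; the scatter-ratio formulation and the epsilon regularizer are illustrative choices, not the paper's exact objective.

```python
import torch

def regularized_discriminant_loss(z, y, eps=1e-6):
    """Minimize within-class scatter relative to total scatter of embeddings.

    z: (N, d) batch of CNN embeddings; y: (N,) integer class labels.
    """
    total_scatter = ((z - z.mean(dim=0, keepdim=True)) ** 2).sum()
    within_scatter = torch.zeros((), device=z.device)
    for c in torch.unique(y):
        zc = z[y == c]
        within_scatter = within_scatter + ((zc - zc.mean(dim=0, keepdim=True)) ** 2).sum()
    # The small eps keeps the ratio well-defined, loosely analogous to retaining
    # near-zero eigenvalue directions instead of discarding them as classical LDA does.
    return within_scatter / (total_scatter + eps)
```

Calling backward() on this loss trains the CNN end-to-end with any standard optimizer, and since the objective only groups embeddings by label, it applies unchanged to binary and multi-class problems.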
Fundus-DeepNet: Multi-Label Deep Learning Classification System for Enhanced Detection of Multiple Ocular Diseases through Data Fusion of Fundus Images
Detecting multiple ocular diseases in fundus images is crucial in ophthalmic diagnosis. This study introduces the Fundus-DeepNet system, an automated multi-label deep learning classification system designed to identify multiple ocular diseases by integrating feature representations from pairs of fundus images (e.g., left and right eyes). The study begins with a comprehensive image pre-processing procedure, including circular border cropping, image resizing, contrast enhancement, noise removal, and data augmentation. Subsequently, discriminative deep feature representations are extracted using multiple deep learning blocks, namely the High-Resolution Network (HRNet) and an Attention Block, which serve as feature descriptors. An SENet Block is then applied to further enhance the quality and robustness of the feature representations from a pair of fundus images, ultimately consolidating them into a single feature representation. Finally, a classification model known as a Discriminative Restricted Boltzmann Machine (DRBM) is employed; by incorporating a Softmax layer, the DRBM generates a probability distribution over eight different ocular diseases. Extensive experiments were conducted on the challenging Ophthalmic Image Analysis-Ocular Disease Intelligent Recognition (OIA-ODIR) dataset, comprising diverse fundus images depicting eight different ocular diseases. The Fundus-DeepNet system achieved F1-scores, Kappa scores, AUC, and final scores of 88.56%, 88.92%, 99.76%, and 92.41% on the off-site test set, and 89.13%, 88.98%, 99.86%, and 92.66% on the on-site test set. In summary, the Fundus-DeepNet system accurately detects multiple ocular diseases, offering a promising solution for early diagnosis and treatment in ophthalmology. Funding: European Union under the REFRESH – Research Excellence for Region Sustainability and High-tech Industries project, number CZ.10.03.01/00/22_003/0000048, via the Operational Program Just Transition; and the Ministry of Education, Youth, and Sports of the Czech Republic - Technical University of Ostrava, Czechia, under Grants SP2023/039 and SP2023/042.
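The fusion stage can be pictured with the PyTorch sketch below: pooled deep features from the left and right eyes are concatenated, reweighted with an SENet-style gate, and scored by an eight-output multi-label head. The sigmoid classifier is a simple stand-in for the paper's DRBM, and the feature dimension is an assumption.

```python
import torch
import torch.nn as nn

class SEGate(nn.Module):
    """SENet-style gate: learn per-channel weights and rescale the features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):  # x: (N, C) pooled deep features
        return x * self.fc(x)

class FundusFusionHead(nn.Module):
    """Fuse left/right-eye feature vectors into one representation,
    then score eight ocular diseases (multi-label)."""
    def __init__(self, feat_dim=2048, n_diseases=8):
        super().__init__()
        self.gate = SEGate(2 * feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, n_diseases)

    def forward(self, f_left, f_right):
        fused = self.gate(torch.cat([f_left, f_right], dim=1))
        return torch.sigmoid(self.classifier(fused))  # per-disease probabilities
```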
Novel medical imaging technologies for processing epithelium and endothelium layers in corneal confocal images. Developing automated segmentation and quantification algorithms for processing sub-basal epithelium nerves and endothelial cells for early diagnosis of diabetic neuropathy in corneal confocal microscope images
Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes that can affect the cornea. An accurate analysis of the corneal epithelium nerve structures and the corneal endothelial cells can assist early diagnosis of this disease and of other corneal diseases, which can lead to visual impairment and then to blindness. In this thesis, fully automated segmentation and quantification algorithms for processing and analysing sub-basal epithelium nerves and endothelial cells are proposed for early diagnosis of diabetic neuropathy in Corneal Confocal Microscopy (CCM) images. Firstly, a fully automatic nerve segmentation system for corneal confocal microscope images is proposed; its performance is evaluated against manually traced images, with a prototype execution time of 13 seconds. Secondly, an automatic corneal nerve registration system is proposed, whose main aim is to produce a new informative corneal image that contains both structural and functional information. Thirdly, an automated real-time system, termed the Corneal Endothelium Analysis System (CEAS), is developed and applied to the segmentation of endothelial cells in images of the human cornea obtained by in vivo CCM; its performance was tested against manually traced images with an execution time of only 6 seconds per image. Finally, the results obtained from all the proposed approaches have been evaluated and validated by an expert advisory board from two institutes: the Division of Medicine, Weill Cornell Medicine-Qatar, Doha, Qatar, and the Manchester Royal Eye Hospital, Centre for Endocrinology and Diabetes, UK.
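As a rough illustration of a nerve segmentation and quantification pipeline of this kind, the sketch below chains CLAHE contrast enhancement, Frangi vesselness filtering, thresholding, and skeletonization; the threshold heuristic and the length estimate are assumptions, not the thesis's algorithms.

```python
import cv2
import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

def segment_and_quantify_nerves(gray, um_per_pixel=1.0):
    """Segment sub-basal nerves in a CCM image (uint8 grayscale) and
    estimate total fibre length."""
    # 1. Local contrast enhancement to make faint nerves visible.
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    # 2. Frangi vesselness highlights thin curvilinear (nerve-like) structures.
    vesselness = frangi(enhanced.astype(float) / 255.0)
    # 3. Threshold the vesselness map and thin to one-pixel-wide centrelines.
    mask = vesselness > vesselness.mean() + 2 * vesselness.std()
    skeleton = skeletonize(mask)
    # 4. Crude quantification: skeleton pixel count as a proxy for nerve length.
    nerve_length_um = skeleton.sum() * um_per_pixel
    return skeleton, nerve_length_um
```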
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework based on merging the advantages of local handcrafted feature descriptors with deep learning approaches, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and on the SDUMLA-HMT multimodal dataset. The results demonstrate the superiority of the proposed systems over previous works, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity. Funding: Higher Committee for Education Development in Iraq.
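A minimal sketch of score-level fusion across the three matchers (face, left iris, right iris), assuming min-max normalization and illustrative weights, is given below; it is not the thesis's exact fusion rule.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(face_scores, left_iris_scores, right_iris_scores,
                weights=(0.4, 0.3, 0.3)):
    """Weighted-sum score-level fusion across the three matchers.

    Each input holds one match score per enrolled identity; the predicted
    identity is the one with the highest fused score.
    """
    fused = (weights[0] * min_max_normalize(face_scores)
             + weights[1] * min_max_normalize(left_iris_scores)
             + weights[2] * min_max_normalize(right_iris_scores))
    return int(np.argmax(fused)), fused
```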