
    A multimodal deep learning framework using local feature representations for face recognition

    The most recent face recognition systems mainly depend on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework that merges the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to learn feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that the representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated in extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained with the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
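    The pipeline described above (local Curvelet–Fractal features extracted first, a deep generative model trained on top of them rather than on raw pixels) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Curvelet transform is approximated with a 2-D wavelet decomposition from PyWavelets, the Fractal dimension with a simple box-counting estimate, and the DBN stage with a single BernoulliRBM from scikit-learn. All function names are hypothetical.

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler


def box_counting_dimension(img, threshold=0.5):
    """Rough box-counting (fractal) dimension of a grayscale face image."""
    binary = img > threshold * img.max()
    sizes = 2 ** np.arange(1, int(np.log2(min(binary.shape))))
    counts = []
    for size in sizes:
        h = binary.shape[0] // size * size
        w = binary.shape[1] // size * size
        blocks = binary[:h, :w].reshape(h // size, size, w // size, size)
        # Number of boxes of side `size` containing at least one foreground pixel.
        counts.append(max(int(blocks.any(axis=(1, 3)).sum()), 1))
    # Slope of log(count) against log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope


def local_features(face):
    """Structure (wavelet, standing in for Curvelet) plus texture (fractal) descriptor."""
    cA, (cH, cV, cD) = pywt.dwt2(face, "db2")
    fd = box_counting_dimension(face)
    return np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel(), [fd]])


def build_mdfr_like_model():
    """Generative layer (RBM stand-in for the DBN) fit on local features, not pixels."""
    return Pipeline([
        ("scale", MinMaxScaler()),   # BernoulliRBM expects inputs in [0, 1]
        ("rbm", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage sketch (face_images is an iterable of 2-D grayscale arrays):
# X = np.stack([local_features(f) for f in face_images])
# model = build_mdfr_like_model().fit(X, labels)
```

    The point of the sketch is the ordering: the generative stage is trained on the local descriptors rather than on pixel intensities, which is the core idea of the MDFR framework.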

    Personal Authentication System Based Iris Recognition with Digital Signature Technology

    Authentication based on biometrics is used to prevent unauthorized physical access to high-security institutions. Recently, owing to the rapid rise of information system technologies, biometrics are also being used in applications for accessing databases and commercial workflow systems. These applications need to implement measures to counter security threats, and many developers are exploring and developing novel authentication techniques to prevent such attacks. However, the most difficult problem is how to protect stored biometric data while maintaining the practical performance of identity verification systems. This paper presents a biometrics-based personal authentication system in which a smart card, a Public Key Infrastructure (PKI), and iris verification technologies are combined. A Raspberry Pi 4 Model B+ with an IR camera is used as the core of the hardware. An image processing pipeline for feature extraction and recognition was implemented for this application in Python using the OpenCV, Keras, and scikit-learn libraries. After training, the implemented system achieves accuracies of 97% and 100% on the left and right NTU iris datasets, respectively. Person verification based on the iris features is then performed to confirm the claimed identity and to examine the system's authentication. For the NTU iris dataset, the key generation, signing, and verification times are 5.17 s, 0.288 s, and 0.056 s, respectively. This work offers a realistic architecture for implementing identity-based cryptography with biometrics using the RSA algorithm.
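    The digital signature step described above can be sketched as follows: an extracted iris template is serialized, signed with an RSA private key, and later verified with the matching public key. The choice of the `cryptography` package, the PSS padding, and the helper names are assumptions for illustration; the paper does not specify its RSA implementation.

```python
import numpy as np
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding


def generate_keypair():
    """Generate an RSA key pair (the step the paper times at roughly 5.17 s)."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return private_key, private_key.public_key()


def sign_template(private_key, template: np.ndarray) -> bytes:
    """Sign the serialized iris template so it can be bound to a claimed identity."""
    payload = template.astype(np.float32).tobytes()
    return private_key.sign(
        payload,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )


def verify_template(public_key, template: np.ndarray, signature: bytes) -> bool:
    """Return True if the signature matches the presented template."""
    payload = template.astype(np.float32).tobytes()
    try:
        public_key.verify(
            signature,
            payload,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False
```

    In this scheme only the signature and public key need to be exchanged at verification time, so the template itself never has to leave the enrolment device unsigned.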

    Deep Regularized Discriminative Network

    The traditional linear discriminant analysis (LDA) approach discards eigenvalues that are very small or equal to zero, yet the eigenvectors corresponding to zero eigenvalues are quite often important dimensions for discriminant analysis. We propose an objective function that utilizes both the principal and the null-space eigenvalues and simultaneously transfers the class-separability information onto its latent-space representation. The idea is to build a convolutional neural network (CNN), perform the regularized discriminant analysis on top of it, and train the whole model in an end-to-end fashion. Backpropagation with a suitable optimizer updates the parameters so that the CNN minimizes the within-class variance and maximizes the total-class variance, which is suitable for both multi-class and binary classification problems. Experimental results on four databases for multiple computer vision classification tasks show the efficacy of our proposed approach compared with other popular methods.
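    A minimal sketch of the kind of objective described above, written in PyTorch (the paper does not name its framework): a loss on the CNN's latent features that shrinks the within-class variance while keeping the total variance of the batch large, so it can be added to the usual classification loss and trained end-to-end. The function name and the ratio form of the loss are assumptions for illustration, not the authors' exact formulation.

```python
import torch


def discriminant_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """Ratio of within-class variance to total variance for a batch of embeddings.

    embeddings: (N, D) latent vectors from the CNN backbone.
    labels:     (N,) integer class labels.
    Minimizing this ratio pulls same-class samples together while keeping the
    overall spread of the latent space large.
    """
    total_mean = embeddings.mean(dim=0, keepdim=True)
    total_var = ((embeddings - total_mean) ** 2).sum(dim=1).mean()

    within_var = embeddings.new_zeros(())
    for c in labels.unique():
        class_emb = embeddings[labels == c]
        class_mean = class_emb.mean(dim=0, keepdim=True)
        within_var = within_var + ((class_emb - class_mean) ** 2).sum()
    within_var = within_var / embeddings.shape[0]

    return within_var / (total_var + eps)

# Hypothetical end-to-end use with a classification head:
# loss = torch.nn.functional.cross_entropy(logits, labels) \
#        + lam * discriminant_loss(features, labels)
```

    Because the loss is differentiable in the embeddings, backpropagation through the CNN shapes the latent space directly, rather than applying LDA as a fixed post-processing step.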