
    Mouth Image Based Person Authentication Using DWLSTM and GRU

    Recently, several classification methods have been introduced for mouth-based biometric authentication. The results of previous investigations into mouth prints are insufficient and yield poor authentication performance. This is mainly due to the difficulties that accompany any analysis of the mouth: mouths are very flexible and pliable, and successive mouth-print impressions, even those obtained from the same person, may differ significantly from one another. Existing machine learning methods may not achieve high performance, and only a few deep learning methods are available for mouth biometric authentication, even though deep learning based mouth biometric authentication gives better results than conventional machine learning methods. The proposed mouth-based biometric authentication (MBBA) system is rigorously examined with real-world data and with the challenges that could be expected for a mouth-based solution deployed on a mobile device. The proposed system has three major steps: (1) database collection, (2) creating a model for authentication, and (3) performance evaluation. The database is collected from the Annamalai University deep learning laboratory and consists of 5000 video frames belonging to 10 persons. The person authentication model is created using a divergence weight long short term memory (DWLSTM) network and a gated recurrent unit (GRU) to capture the temporal relationships in the mouth images of a person. The existing and proposed methods are implemented in Anaconda with Jupyter Notebook. Finally, the results of the proposed model are compared against existing methods such as the support vector machine (SVM) and the probabilistic neural network (PNN) with respect to metrics such as precision, recall, F1-score, and accuracy of mouth-based person authentication.
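    To make the modelling step concrete, the sketch below shows how a recurrent LSTM + GRU stack for sequence-based person authentication could look in Keras. It is only an illustration: the abstract does not describe the divergence-weighting scheme of the DWLSTM, so a plain LSTM layer stands in for it, and the sequence length, per-frame feature size, and layer widths are assumptions; only the number of persons (10) comes from the abstract.

# Hedged sketch of the recurrent authentication model described above,
# assuming pre-extracted per-frame mouth features. A plain LSTM layer stands
# in for the paper's divergence weight LSTM (DWLSTM), whose weighting scheme
# is not described in the abstract.
from tensorflow.keras import layers, models

NUM_PERSONS = 10     # from the abstract: 10 persons in the collected database
SEQ_LEN = 20         # assumed number of consecutive mouth frames per sample
FEATURE_DIM = 128    # assumed size of the per-frame feature vector

def build_mouth_auth_model():
    """LSTM + GRU stack that maps a mouth-frame sequence to a person identity."""
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, FEATURE_DIM)),
        layers.LSTM(64, return_sequences=True),   # placeholder for DWLSTM
        layers.GRU(32),                           # captures temporal relationships
        layers.Dense(NUM_PERSONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mouth_auth_model()
model.summary()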

    Real-world human gender classification from oral region using convolutional neural network

    Gender classification is an important biometric task that has been widely studied in the literature. The face is the most studied modality for human gender classification, and the task has also been investigated on individual face components such as the irises, ears, and the periocular region. In this paper, we investigate gender classification based on the oral region. In the proposed approach, we adopt a convolutional neural network. For experimentation, we extracted the region of interest from the FFHQ faces dataset using the RetinaFace algorithm. We achieved acceptable results, surpassing those that use the mouth as a modality or facial sub-region in geometric approaches. The obtained results also underline the importance of the oral region as a facial part that is lost in the Covid-19 context when people wear facial masks. We suggest that adapting existing facial data analysis solutions developed for the whole face is indispensable to maintaining their robustness.
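    As an illustration of the pipeline described above, the sketch below crops the oral region using RetinaFace mouth landmarks and defines a small CNN for binary gender classification. The retina-face pip package, the crop margin, the patch size, and the network layout are assumptions for illustration and are not taken from the paper.

# Hedged sketch: oral-region cropping with RetinaFace landmarks followed by a
# small CNN gender classifier. Crop size, margin, and architecture are assumed.
import cv2
from retinaface import RetinaFace          # pip install retina-face (assumed package)
from tensorflow.keras import layers, models

def crop_oral_region(image_path, out_size=64, margin=20):
    """Detect the face and crop a square patch centred on the mouth landmarks."""
    img = cv2.imread(image_path)
    faces = RetinaFace.detect_faces(image_path)
    face = faces["face_1"]                 # assume one face per FFHQ image
    lm = face["landmarks"]
    (x1, y1), (x2, y2) = lm["mouth_left"], lm["mouth_right"]
    cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
    half = int(abs(x2 - x1) / 2) + margin  # half-width of the square crop
    patch = img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    return cv2.resize(patch, (out_size, out_size))

def build_gender_cnn(out_size=64):
    """Small CNN mapping an oral patch to a binary gender prediction."""
    return models.Sequential([
        layers.Input(shape=(out_size, out_size, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])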