23 research outputs found

    A Reminiscence of "Mastermind": Iris/Periocular Biometrics by "In-Set" CNN Iterative Analysis

    Convolutional neural networks (CNNs) have emerged as the most popular classification models in biometrics research. Under the discriminative paradigm of pattern recognition, CNNs are typically used in one of two ways: 1) verification mode ("are these samples from the same person?"), where pairs of images are provided to the network to distinguish between genuine and impostor instances; and 2) identification mode ("whom is this sample from?"), where appropriate feature representations that map images to identities are learned. This paper postulates a novel mode for using CNNs in biometric identification, by learning models that answer the question "is the query's identity among this set?". The insight is reminiscent of the classical Mastermind game: by iteratively analysing the network responses when multiple random samples of k gallery elements are compared to the query, we obtain weakly correlated matching scores that, altogether, provide solid cues to infer the most likely identity. In this setting, identification is regarded as a variable selection and regularization problem, with sparse linear regression techniques used to infer the matching probability with respect to each gallery identity. As its main strength, this strategy is highly robust to outlier matching scores, which are known to be a primary error source in biometric recognition. Our experiments were carried out on the full versions of two well-known datasets, one near-infrared iris set (CASIA-IrisV4-Thousand) and one visible-wavelength periocular set (UBIRIS.v2), and confirm that recognition performance is solidly boosted by the proposed algorithm compared to the traditional working modes of CNNs in biometrics.
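    The "in-set" idea above can be sketched as a toy simulation, assuming a hypothetical network whose response is high when the query's identity lies in the sampled gallery subset; the sparse regression step is implemented here as a plain non-negative ISTA (iterative soft-thresholding) solver, since the abstract does not name a specific solver. All sizes and noise levels are illustrative, not the paper's settings.

    ```python
    import random

    random.seed(7)

    N, K, T = 8, 3, 200   # gallery size, subset size k, number of random subsets
    TRUE_ID = 3           # hypothetical ground-truth identity of the query

    # Simulate the CNN's "in-set" responses: ~1 when the query's identity is
    # inside the sampled gallery subset, ~0 otherwise, plus matching noise.
    A, y = [], []
    for _ in range(T):
        subset = random.sample(range(N), K)
        A.append([1.0 if j in subset else 0.0 for j in range(N)])
        y.append((1.0 if TRUE_ID in subset else 0.0) + random.gauss(0.0, 0.1))

    # Sparse, L1-regularized, non-negative linear regression via ISTA:
    # beta[j] estimates how strongly gallery identity j explains the scores.
    beta, step, lam = [0.0] * N, 0.1, 0.01
    for _ in range(500):
        grad = [0.0] * N
        for row, target in zip(A, y):
            resid = sum(r * b for r, b in zip(row, beta)) - target
            for j in range(N):
                grad[j] += row[j] * resid / T
        for j in range(N):
            z = beta[j] - step * grad[j]
            beta[j] = max(z - step * lam, 0.0)   # soft-threshold, clipped at 0

    predicted = max(range(N), key=lambda j: beta[j])
    ```

    Because every subset either does or does not contain the true identity, the response vector is (up to noise) exactly one column of the design matrix, so the sparse solution concentrates its weight on that single identity; outlier scores affect only a few of the many weakly correlated rows.
    
    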

    Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models

    This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information; for this reason, knowledge distillation (KD) was previously proposed to carry this knowledge from a large model (the teacher) into a small model (the student). Conventional KD optimizes the student output to be similar to the teacher output (commonly the classification output). In biometrics, however, comparison (verification) and storage operations are conducted on biometric templates extracted from pre-classification layers. In this work, we propose a novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model. We demonstrate our approach on intra- and cross-device periocular verification. Our results demonstrate the superiority of the proposed approach over a network trained without KD and networks trained with conventional (vanilla) KD. For example, the targeted small model achieved an equal error rate (EER) of 22.2% on cross-device verification without KD. The same model achieved an EER of 21.9% with conventional KD, and only 14.7% EER when using our proposed template-driven KD.
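    The core of template-driven KD, distilling on the pre-classification embedding rather than the classification output, can be illustrated with a minimal sketch. The "networks" here are toy linear embeddings and the distillation loss is a plain mean-squared error between teacher and student templates; the real models, loss weighting, and training setup in the paper are not specified here and everything below is an illustrative assumption.

    ```python
    import random

    random.seed(0)

    D_IN, D_T = 4, 3          # toy input and template dimensionalities
    LR, STEPS = 0.1, 500

    # Hypothetical frozen teacher: a fixed linear map x -> template.
    W_teacher = [[random.uniform(-1, 1) for _ in range(D_IN)] for _ in range(D_T)]
    # Student, trained only to reproduce the teacher's templates.
    W_student = [[0.0] * D_IN for _ in range(D_T)]

    def embed(W, x):
        """Linear 'network': template = W @ x."""
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]

    data = [[random.uniform(-1, 1) for _ in range(D_IN)] for _ in range(64)]

    def template_loss():
        """Mean squared template distance over the toy training set."""
        total = 0.0
        for x in data:
            t, s = embed(W_teacher, x), embed(W_student, x)
            total += sum((si - ti) ** 2 for si, ti in zip(s, t)) / D_T
        return total / len(data)

    loss_before = template_loss()
    for _ in range(STEPS):
        for x in data:
            t, s = embed(W_teacher, x), embed(W_student, x)
            for i in range(D_T):
                g = 2.0 * (s[i] - t[i]) / D_T          # dL/ds_i for this sample
                for j in range(D_IN):
                    W_student[i][j] -= LR * g * x[j] / len(data)
    loss_after = template_loss()
    ```

    The design choice mirrors the abstract: because verification compares templates, not class scores, the student is supervised directly in template space, so its embeddings stay comparable to the teacher's.
    
    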

    DeepMetricEye: Metric Depth Estimation in Periocular VR Imagery

    Despite the enhanced realism and immersion provided by VR headsets, users frequently encounter adverse effects such as digital eye strain (DES), dry eye, and potential long-term visual impairment due to excessive eye stimulation from VR displays and pressure from the mask. Recent VR headsets are increasingly equipped with eye-oriented monocular cameras to segment ocular feature maps. Yet, to compute the incident light stimulus and observe periocular condition alterations, it is imperative to transform these relative measurements into metric dimensions. To bridge this gap, we propose a lightweight framework, derived from a re-optimised U-Net 3+ deep learning backbone, that estimates measurable periocular depth maps. Compatible with any VR headset equipped with an eye-oriented monocular camera, our method reconstructs three-dimensional periocular regions, providing a metric basis for related light stimulus calculation protocols and medical guidelines. To navigate the complexities of data collection, we introduce a Dynamic Periocular Data Generation (DPDG) environment based on UE MetaHuman, which synthesises thousands of training images from a small quantity of human facial scan data. Evaluated on a sample of 36 participants, our method exhibited notable efficacy in the periocular global precision evaluation experiment and in the pupil diameter measurement experiment.
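    The relative-to-metric gap the abstract describes can be made concrete with a generic scale-anchoring sketch: a monocular depth map is correct only up to an unknown global scale, which one known metric distance resolves. The anchor point and distance below are entirely hypothetical; the paper's actual calibration protocol is not described in the abstract.

    ```python
    # Hypothetical relative (unitless) depth map from a monocular eye camera,
    # shown as a tiny 2-D grid; values are correct only up to a global scale.
    relative_depth = [
        [0.8, 0.9, 1.0],
        [0.7, 0.85, 0.95],
    ]

    # Assumed metric anchor: one landmark pixel whose true camera distance is
    # known (e.g. from headset geometry) -- illustrative values only.
    anchor_rc, anchor_metric_mm = (0, 2), 24.0

    # One known distance fixes the scale for the entire map.
    scale = anchor_metric_mm / relative_depth[anchor_rc[0]][anchor_rc[1]]
    metric_depth_mm = [[v * scale for v in row] for row in relative_depth]
    ```
    
    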

    Skin Lesion Classification Based on Convolutional Neural Networks

    Melanoma causes the majority of skin cancer deaths, and its incidence in the population has increased over the past 30 years; it kills around 9,320 people in the US every year. Melanoma can often be found early, when it is most likely to be cured. Medical diagnosis using digital imaging with machine learning methods has become popular because of its ability to recognize patterns in digital images, and accurate image diagnosis allows the disease to be treated at an early stage. This paper proposes a simulation for the early detection of skin cancer that can help dermatologists distinguish melanomas from other pigmented lesions on the skin. Previous researchers have developed systems using machine learning algorithms to classify skin lesions from dermoscopy images of human skin. In this study, we propose a Convolutional Neural Network (CNN) model. CNNs are very efficient for image processing because their feature extractors can be optimized and applied at each image position. The classification of skin lesions into benign nevi and melanoma by the CNN model produces high accuracy: the area under the receiver operating characteristic (ROC) curve (AUC) is 92.59%, sensitivity is 89.47%, specificity is 100.0%, precision is 100% and the F1 score is 94.44%.
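    The evaluation metrics quoted above follow directly from the binary confusion counts. A minimal sketch of how they are computed (with made-up example predictions, not the paper's data, where 1 = melanoma and 0 = benign nevus):

    ```python
    def binary_metrics(y_true, y_pred):
        """Sensitivity, specificity, precision and F1 from binary labels."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        sensitivity = tp / (tp + fn)       # recall on melanomas
        specificity = tn / (tn + fp)       # recall on benign nevi
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return sensitivity, specificity, precision, f1

    # Illustrative case: 2 of 3 melanomas caught, no benign lesion misclassified.
    sens, spec, prec, f1 = binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
    ```

    A perfect specificity and precision with sub-perfect sensitivity, as in the paper's figures, means every melanoma call was correct but some melanomas were missed.
    
    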