
    FaceQnet: Quality Assessment for Face Recognition based on Deep Learning

    In this paper we develop a quality assessment approach for face recognition based on deep learning. The method consists of a Convolutional Neural Network, FaceQnet, that is used to predict the suitability of a specific input image for face recognition purposes. FaceQnet is trained using the VGGFace2 database. We employ the BioLab-ICAO framework for labeling the VGGFace2 images with quality information related to their ICAO compliance level. The ground-truth quality labels are obtained using FaceNet to generate comparison scores. We employ the ground-truth data to fine-tune a ResNet-based CNN, making it capable of returning a numerical quality measure for each input image. Finally, we verify whether the FaceQnet scores are suitable to predict the expected performance when employing a specific image for face recognition with a COTS face recognition system. Several conclusions can be drawn from this work, most notably: 1) we managed to employ an existing ICAO compliance framework and a pretrained CNN to automatically label data with quality information, 2) we trained FaceQnet for quality estimation by fine-tuning a pre-trained face recognition network (ResNet-50), and 3) we have shown that the predictions from FaceQnet are highly correlated with the face recognition accuracy of a state-of-the-art commercial system not used during development. FaceQnet is publicly available on GitHub.
    Comment: Preprint version of a paper accepted at ICB 2019
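    Below is a hedged sketch of the kind of architecture the abstract describes: a pretrained ResNet-50 backbone fine-tuned with a small regression head that emits one quality score per face image. The head size, the ImageNet stand-in weights, and the training settings are illustrative assumptions, not the authors' exact FaceQnet configuration.

```python
# Hypothetical FaceQnet-style setup: a pretrained ResNet-50 backbone with a
# small regression head outputting one quality score in [0, 1] per image.
# Layer sizes and weights are illustrative, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_quality_net(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.ResNet50(
        include_top=False,
        weights="imagenet",  # stand-in for face-recognition pretraining
        input_shape=input_shape,
        pooling="avg")
    x = layers.Dense(32, activation="relu")(backbone.output)
    score = layers.Dense(1, activation="sigmoid", name="quality")(x)
    return models.Model(backbone.input, score)

model = build_quality_net()
# Regression against ground-truth quality labels (the paper derives these
# from FaceNet comparison scores and ICAO-compliance labeling).
model.compile(optimizer="adam", loss="mse")
model.summary()
```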

    Forensic Face Recognition: A Survey

    Apart from a few papers that focus on the forensic aspects of automatic face recognition, little has been published on the topic, in contrast to the extensive literature on developing new techniques and methodologies for biometric face recognition. In this report, we first review forensic facial identification, which is the forensic experts' procedure for manual facial comparison. We then review notable works in the domain of forensic face recognition. Some of these papers describe general trends in forensics [1] and guidelines for manual forensic facial comparison and the training of face examiners who will be required to verify the outcome of an automatic forensic face recognition system [2]. Others propose theoretical frameworks for the application of face recognition technology in forensics [3] and for automatic forensic facial comparison [4, 5]. The Bayesian framework is discussed in detail, and we elaborate on how it can be adapted to forensic face recognition. Several issues related to court admissibility and system reliability are also discussed.
    To date, there is no operational system available that automatically compares an image of a suspect with a mugshot database and provides results usable in court. Biometric face recognition can in most cases be used for forensic purposes, but the issues surrounding the integration of the technology with the legal system remain to be solved. There is a great need for multi-disciplinary research that integrates face recognition technology with existing legal systems. In this report we present a review of the existing literature in this domain and discuss various aspects and requirements of forensic face recognition systems, focusing particularly on the Bayesian framework.
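    For reference, the Bayesian framework referred to throughout this survey rests on the odds form of Bayes' rule: the forensic expert reports only the likelihood ratio, while the prior odds remain the province of the court.

```latex
% Odds form of Bayes' rule (requires amsmath for \text):
% posterior odds = likelihood ratio x prior odds.
\[
  \frac{P(H_p \mid E)}{P(H_d \mid E)}
  = \underbrace{\frac{P(E \mid H_p)}{P(E \mid H_d)}}_{\text{likelihood ratio (LR)}}
  \times \frac{P(H_p)}{P(H_d)}
\]
```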

    A review of calibration methods for biometric systems in forensic applications

    When, in a criminal case, there are traces from a crime scene - e.g., finger marks or facial recordings from a surveillance camera - as well as a suspect, the judge has to accept either the hypothesis $H_p$ of the prosecution, stating that the trace originates from the suspect, or the hypothesis $H_d$ of the defense, stating the opposite. The current practice is that forensic experts provide a degree of support for either of the two hypotheses, based on their examination of the trace and reference data - e.g., fingerprints or photos - taken from the suspect. There is a growing interest in a more objective, quantitative support for these hypotheses based on the output of biometric systems instead of manual comparison. However, the output of a score-based biometric system is not directly suitable for quantifying the evidential value contained in a trace. A suitable measure that is gradually becoming accepted in the forensic community is the likelihood ratio (LR), which is the ratio of the probability of the evidence given $H_p$ to the probability of the evidence given $H_d$. In this paper we study and compare different score-to-LR conversion methods (called calibration methods). We include four methods in this comparative study: Kernel Density Estimation (KDE), Logistic Regression (Log Reg), Histogram Binning (HB), and Pool Adjacent Violators (PAV). Useful statistics, such as the mean and bias of the bootstrap distribution of LRs for a single score value, are calculated for each method while varying population size and score location.
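    As a concrete illustration of one of the four calibration methods, below is a minimal sketch of PAV-based score-to-LR conversion using scikit-learn's IsotonicRegression, which implements the Pool Adjacent Violators algorithm. The toy data, function names, and the prior-odds correction shown are illustrative assumptions, not the paper's exact experimental protocol.

```python
# Minimal sketch: calibrate raw comparison scores to likelihood ratios with
# PAV (isotonic regression). Labels: 1 = same-source (Hp), 0 = different (Hd).
import numpy as np
from sklearn.isotonic import IsotonicRegression

def pav_calibrate(scores, labels):
    """Fit PAV on labelled scores; return a score -> LR mapping."""
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    iso.fit(scores, labels)
    # PAV yields a calibrated posterior P(Hp | score) under the training
    # prior, so we divide out the training-set prior odds to obtain an LR.
    prior_odds = labels.mean() / (1.0 - labels.mean())

    def score_to_lr(s):
        p = np.clip(iso.predict(np.atleast_1d(s)), 1e-6, 1 - 1e-6)
        posterior_odds = p / (1.0 - p)
        return posterior_odds / prior_odds

    return score_to_lr

# Toy usage: higher scores should yield LR > 1 (support for Hp).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2, 1, 500), rng.normal(0, 1, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])
lr = pav_calibrate(scores, labels)
print(lr(2.5), lr(-1.0))
```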

    An Evaluation of Score Level Fusion Approaches for Fingerprint and Finger-vein Biometrics

    Biometric systems have to address many requirements, such as large population coverage, demographic diversity, and varied deployment environments, as well as practical aspects like performance and spoofing attacks. Traditional unimodal biometric systems do not fully meet these requirements, making them vulnerable and susceptible to different types of attacks. In response, modern biometric systems combine multiple biometric modalities at different fusion levels. The fused score is used to classify an unknown user as genuine or an impostor. In this paper, we evaluate combinations of score normalization and fusion techniques using two modalities (fingerprint and finger-vein), with the goal of identifying which combination achieves the best improvement rate over traditional unimodal biometric systems. The individual scores obtained from finger-veins and fingerprints are combined at score level using three score normalization techniques (min-max, z-score, hyperbolic tangent) and four score fusion approaches (minimum score, maximum score, simple sum, user weighting). The experimental results show that the combination of the hyperbolic tangent score normalization technique with the simple sum fusion approach achieves the best improvement rate of 99.98%.
    Comment: 10 pages, 5 figures, 3 tables, conference, NISK 201
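    A minimal sketch of the normalization-plus-fusion pipeline the paper evaluates is shown below: per-modality score normalization followed by simple-sum fusion. The tanh-estimator constant and the toy score arrays are illustrative assumptions; the paper's exact estimator parameters are not reproduced here.

```python
# Sketch of score-level fusion: normalize each modality's scores, then fuse.
# Constants and toy arrays are illustrative, not the paper's parameters.
import numpy as np

def min_max(s):
    return (s - s.min()) / (s.max() - s.min())

def z_score(s):
    return (s - s.mean()) / s.std()

def tanh_norm(s):
    # Hyperbolic tangent estimator: maps scores into (0, 1) around mean/std.
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

def simple_sum_fusion(fp_scores, fv_scores, norm=tanh_norm):
    return norm(fp_scores) + norm(fv_scores)

# Toy usage: fused scores are then thresholded for the genuine/impostor decision.
fp = np.array([0.61, 0.15, 0.88])   # fingerprint matcher scores
fv = np.array([0.72, 0.09, 0.93])   # finger-vein matcher scores
print(simple_sum_fusion(fp, fv))
```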

    Deep Neural Network and Data Augmentation Methodology for Off-Axis Iris Segmentation in Wearable Headsets

    A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and train a low-complexity deep neural network. Although of low complexity, the resulting network achieves a high level of accuracy in iris region segmentation for challenging off-axis eye-patches. Interestingly, this network is also shown to achieve high levels of performance for regular, frontal segmentation of iris regions, comparing favorably with state-of-the-art techniques of significantly higher complexity. Due to its lower complexity, this network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.
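    One plausible way to synthesize off-axis training patches from frontal ones is a random perspective warp, sketched below with OpenCV. This is an assumption-labeled illustration in the spirit of the augmentation methodology described above, not the authors' actual transform set.

```python
# Hedged sketch: random perspective warps that mimic oblique camera views,
# one plausible ingredient of an off-axis augmentation pipeline.
import numpy as np
import cv2

def off_axis_warp(eye_patch, max_shift=0.25, rng=None):
    """Apply a random perspective warp simulating an off-axis viewpoint."""
    rng = rng or np.random.default_rng()
    h, w = eye_patch.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Jitter each corner independently to simulate off-axis foreshortening.
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(eye_patch, M, (w, h),
                               borderMode=cv2.BORDER_REFLECT)

# Toy usage; the same warp must also be applied to the ground-truth iris
# mask so that image/mask pairs stay aligned for segmentation training.
patch = (np.random.rand(64, 96) * 255).astype(np.uint8)
warped = off_axis_warp(patch)
```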

    EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set Similarity

    Electronic information is increasingly often shared among entities without complete mutual trust. To address related security and privacy issues, a few cryptographic techniques have emerged that support privacy-preserving information sharing and retrieval. One interesting open problem in this context involves two parties that need to assess the similarity of their datasets, but are reluctant to disclose their actual content. This paper presents an efficient and provably secure construction supporting the privacy-preserving evaluation of sample set similarity, where similarity is measured as the Jaccard index. We present two protocols: the first securely computes the (Jaccard) similarity of two sets, and the second approximates it, using MinHash techniques, with lower complexity. We show that our novel protocols are attractive in many compelling applications, including document/multimedia similarity, biometric authentication, and genetic tests. In the process, we demonstrate that our constructions are appreciably more efficient than prior work.
    Comment: A preliminary version of this paper was published in the Proceedings of the 7th ESORICS International Workshop on Digital Privacy Management (DPM 2012). This is the full version, appearing in the Journal of Computer Security.
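    To make the approximated quantity concrete, below is a minimal, non-private sketch of the MinHash estimator that the second protocol evaluates under cryptographic protection: the fraction of matching per-hash minima across k salted hash functions approximates the Jaccard index. The names and toy sets are illustrative; the actual protocol computes this under encryption.

```python
# Plaintext MinHash sketch (no privacy): estimate Jaccard similarity as the
# fraction of hash functions whose minimum agrees between the two sets.
import random

def minhash_signature(items, k=128, seed=0):
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]
    # One salted hash function per slot; keep the minimum per function.
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set("the quick brown fox jumps over the lazy dog".split())
B = set("the quick brown fox naps under the lazy dog".split())
exact = len(A & B) / len(A | B)
approx = estimated_jaccard(minhash_signature(A), minhash_signature(B))
print(f"exact={exact:.3f}  minhash~{approx:.3f}")
```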