
    Privacy-Preserving Facial Recognition Using Biometric-Capsules

    Indiana University-Purdue University Indianapolis (IUPUI). In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method that addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable and interoperable in its secure feature fusion design. In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication and identification systems. We compare the performance of the unsecured, underlying biometric systems to that of the BC-embedded systems in order to directly demonstrate the minimal effect of the privacy-preserving BC scheme on underlying system performance. Notably, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems, which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems achieve accuracies of 95.13% and 99.13%, respectively. Furthermore, we demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.
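    The verification systems the abstract benchmarks compare learned face embeddings against a decision threshold. A minimal sketch of that comparison step follows; the function name, embedding vectors and threshold value are illustrative (the FaceNet/ArcFace embedding extraction and the BC transformation itself are assumed to happen upstream), not the paper's actual pipeline.

```python
import numpy as np

def verify(emb_a, emb_b, threshold=0.5):
    """Decide whether two face embeddings belong to the same identity.

    Embeddings (e.g. from a network such as FaceNet or ArcFace) are
    L2-normalised and compared by cosine similarity; the threshold
    would be tuned on a validation set such as LFW. A privacy-
    preserving layer like the BC scheme would transform the embeddings
    before this comparison, at a small cost in accuracy.
    """
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    similarity = float(np.dot(a, b))
    return similarity >= threshold, similarity
```

    Reporting accuracy then reduces to counting how often this boolean decision matches the ground-truth same/different labels over the benchmark pairs.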

    Let's Face It: The effect of orthognathic surgery on facial recognition algorithm analysis

    Aim: To evaluate the ability of a publicly available facial recognition application program interface (API) to calculate similarity scores for pre- and post-surgical photographs of patients undergoing orthognathic surgeries. Our primary objective was to identify which surgical procedure(s) had the greatest effect(s) on similarity score. Methods: Standard treatment progress photographs for 25 retrospectively identified, orthodontic-orthognathic patients were analyzed using the API to calculate similarity scores between the pre- and post-surgical photographs. Photographs from two pre-surgical timepoints were compared as controls. Both relaxed and smiling photographs were included in the study to assess the added impact of facial pose on similarity score. Surgical procedure(s) performed on each patient, gender, age at time of surgery, and ethnicity were recorded for statistical analysis. Nonparametric Kruskal-Wallis rank sum tests were performed to univariately analyze the relationship between each categorical patient characteristic and each recognition score. Pairwise Wilcoxon rank sum tests were then performed on the statistically significant characteristics, with p-values adjusted using the Bonferroni correction. Results: Patients who had surgery on both jaws had a lower median similarity score, when comparing relaxed expressions before and after surgery, than those who had surgery only on the mandible (p = 0.014). Patients receiving combined LeFort and bilateral sagittal split osteotomy (BSSO) surgeries also had a lower median similarity score than those who received only BSSO (p = 0.009). For the score comparing relaxed expressions before surgery versus smiling expressions after surgery, patients receiving two-jaw surgeries had lower scores than those who had surgery on only the mandible (p = 0.028). Patients who received LeFort and BSSO surgeries likewise had lower similarity scores than patients who received only BSSO when comparing pre-surgical relaxed photographs to post-surgical smiling photographs (p = 0.036). Conclusions: Two-jaw surgeries were associated with a statistically significant decrease in similarity score compared to one-jaw procedures. Pose was also found to influence similarity scores, especially when comparing pre-surgical relaxed photographs to post-surgical smiling photographs.
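    The statistical workflow in the abstract (an omnibus Kruskal-Wallis test followed by Bonferroni-corrected pairwise Wilcoxon rank sum comparisons) can be sketched with SciPy. The group names and similarity scores below are entirely illustrative placeholders, not the study's data:

```python
from itertools import combinations
from scipy.stats import kruskal, ranksums

# Hypothetical similarity scores grouped by surgical procedure.
scores = {
    "BSSO only":     [0.91, 0.88, 0.93, 0.90, 0.89],
    "LeFort + BSSO": [0.78, 0.74, 0.81, 0.76, 0.79],
    "LeFort only":   [0.85, 0.83, 0.87, 0.84, 0.86],
}

# Omnibus test: do the groups differ in similarity score at all?
h_stat, p_omnibus = kruskal(*scores.values())

if p_omnibus < 0.05:
    pairs = list(combinations(scores, 2))
    alpha = 0.05 / len(pairs)  # Bonferroni-adjusted threshold
    for g1, g2 in pairs:
        stat, p = ranksums(scores[g1], scores[g2])
        print(f"{g1} vs {g2}: p = {p:.4f}, significant = {p < alpha}")
```

    The omnibus test guards the pairwise stage: the follow-up comparisons are only run (and the significance level divided by the number of pairs) when the groups differ overall, mirroring the procedure described above.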

    Design and implementation of a multi-modal biometric system for company access control

    This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is well suited to non-intrusive checking with minimal cooperation from the subject, while iris supports a very accurate recognition procedure at the cost of greater invasiveness. The recognition of the face trait is based on Local Binary Patterns histograms, and Daugman's method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user's face only or the serial acquisition of both the user's face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated formally in the Service Level Agreement at a negotiation phase. The quality of the decision depends on the setting of appropriate, distinct thresholds in the decision modules for the two biometric traits. Whenever the quality of the decision is not good enough, the system activates appropriate rules, which ask for new acquisitions (and decisions), possibly with different threshold values, so that the system does not have a fixed, predefined behaviour but instead adapts to the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent "response behaviors" according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results from testing based on the AT&T Face Database and the UBIRIS database.
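    The serial face-then-iris decision logic described above can be sketched as a small rule function. The threshold values and state names here are illustrative assumptions, not the thresholds negotiated in the paper's Service Level Agreement:

```python
def access_decision(face_score, iris_score=None,
                    face_accept=0.80, face_reject=0.40, iris_accept=0.90):
    """Serial multi-modal decision sketch.

    The non-intrusive face check decides alone when its score is
    clearly above the accept threshold or below the reject threshold;
    an ambiguous face score escalates to the more accurate but more
    invasive iris acquisition, mirroring the rule-based "response
    behaviors" described in the abstract.
    """
    if face_score >= face_accept:
        return "grant"
    if face_score < face_reject:
        return "deny"
    # Ambiguous face score: request the iris trait serially.
    if iris_score is None:
        return "acquire_iris"
    return "grant" if iris_score >= iris_accept else "deny"
```

    In the paper's adaptive design the thresholds themselves may be revised between re-acquisitions; here they are fixed parameters for simplicity.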

    Assessing the Effectiveness of Automated Emotion Recognition in Adults and Children for Clinical Investigation

    Recent success stories in automated object or face recognition, partly fuelled by deep learning artificial neural network (ANN) architectures, have led to the advancement of biometric research platforms and, to some extent, the resurrection of Artificial Intelligence (AI). In line with this general trend, interdisciplinary approaches have been taken to automate the recognition of emotions in adults and children for the benefit of various applications, such as identification of children's emotions prior to a clinical investigation. Within this context, it turns out that automating emotion recognition is far from straightforward, with several challenges arising for both science (e.g., methodology underpinned by psychology) and technology (e.g., the iMotions biometric research platform). In this paper, we present a methodology, an experiment and interesting findings, which raise the following research questions for the recognition of emotions and attention in humans: a) the adequacy of well-established techniques such as the International Affective Picture System (IAPS); b) the adequacy of state-of-the-art biometric research platforms; c) the extent to which emotional responses may differ between children and adults. Our findings, and first attempts to answer some of these research questions, are based on a mixed sample of adults and children who took part in the experiment, resulting in a statistical analysis of numerous variables. These are related to participants' responses, captured both automatically and interactively, to a sample of IAPS pictures.