    Optimal decision fusion and its application on 3D face recognition

    Fusion is a popular practice for combining multiple classifiers or multiple modalities in biometrics. In this paper, optimal decision fusion (ODF) by the AND and OR rules is presented. We show that decision fusion can be done in an optimal way, so that it always improves the error rates of the classifiers being fused. Both the optimal decision fusion theory and experimental results on the FRGC 2D and 3D face data are given. Experiments show that optimal decision fusion effectively combines the 2D texture and 3D shape information and boosts the performance of the system.
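
    The AND and OR decision rules that ODF builds on are easy to sketch. The Python fragment below is a minimal illustration, not the paper's method: the thresholds and the synthetic score distributions are invented for demonstration, and the optimal search over operating points described in the paper is omitted.

        import numpy as np

        def and_or_fusion(scores_a, scores_b, thr_a, thr_b):
            # Decision-level fusion of two matchers: each matcher accepts when
            # its similarity score reaches its own threshold (operating point).
            accept_a = scores_a >= thr_a
            accept_b = scores_b >= thr_b
            return accept_a & accept_b, accept_a | accept_b  # AND rule, OR rule

        # Synthetic genuine/impostor scores, purely for illustration.
        rng = np.random.default_rng(0)
        gen_a, gen_b = rng.normal(0.70, 0.1, 500), rng.normal(0.65, 0.1, 500)
        imp_a, imp_b = rng.normal(0.40, 0.1, 500), rng.normal(0.45, 0.1, 500)

        and_gen, or_gen = and_or_fusion(gen_a, gen_b, 0.55, 0.55)
        and_imp, or_imp = and_or_fusion(imp_a, imp_b, 0.55, 0.55)
        print("AND rule: FRR=%.3f FAR=%.3f" % (1 - and_gen.mean(), and_imp.mean()))
        print("OR  rule: FRR=%.3f FAR=%.3f" % (1 - or_gen.mean(), or_imp.mean()))

    The AND rule trades a lower false accept rate for a higher false reject rate, and the OR rule does the opposite; ODF selects the two operating points so that the fused error rate is never worse than that of either matcher alone.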

    Offline signature verification using classifier combination of HOG and LBP features

    We present an offline signature verification system based on a signature's local histogram features. The signature is divided into zones using both the Cartesian and polar coordinate systems, and two different histogram features are calculated for each zone: the histogram of oriented gradients (HOG) and the histogram of local binary patterns (LBP). Classification is performed using Support Vector Machines (SVMs), and two training approaches are investigated, namely global and user-dependent SVMs. User-dependent SVMs, trained separately for each user, learn to differentiate a user's signature from others, whereas a single global SVM, trained with difference vectors between the features of query and reference signatures across all users, learns how to weight dissimilarities. The global SVM classifier is trained using genuine and forgery signatures of subjects that are excluded from the test set, while user-dependent SVMs are trained separately for each subject using genuine signatures and random forgeries. The fusion of all classifiers (global and user-dependent classifiers trained with each feature type) achieves a 15.41% equal error rate in the skilled forgery test on the GPDS-160 signature database, without using any skilled forgeries in training.
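
    The zoned HOG and LBP descriptor described above can be sketched with scikit-image and scikit-learn. The grid size, LBP parameters, and SVM settings below are illustrative assumptions rather than the paper's configuration, and only Cartesian zoning is shown (the paper also uses polar zones).

        import numpy as np
        from skimage.feature import hog, local_binary_pattern
        from sklearn.svm import SVC

        def zone_features(img, grid=(4, 4)):
            # Concatenate HOG and uniform-LBP histograms computed per zone of a
            # grayscale signature image (Cartesian grid only).
            h, w = img.shape
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    zone = img[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
                    feats.append(hog(zone, orientations=9,
                                     pixels_per_cell=(8, 8),
                                     cells_per_block=(1, 1)))
                    lbp = local_binary_pattern(zone, P=8, R=1, method="uniform")
                    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                    feats.append(hist)
            return np.concatenate(feats)

        # Hypothetical user-dependent training: one SVM per user, fitted on that
        # user's genuine signatures against random forgeries (others' genuines).
        # X = np.stack([zone_features(img) for img in training_images])
        # clf = SVC(kernel="rbf").fit(X, labels)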

    Offline Handwritten Signature Verification - Literature Review

    The area of Handwritten Signature Verification has been broadly researched in the last decades but remains an open research problem. The objective of signature verification systems is to discriminate whether a given signature is genuine (produced by the claimed individual) or a forgery (produced by an impostor). This has proven to be a challenging task, in particular in the offline (static) scenario, which uses images of scanned signatures, where dynamic information about the signing process is not available. Many advancements have been proposed in the literature in the last 5-10 years, most notably the application of Deep Learning methods to learn feature representations from signature images. In this paper, we present how the problem has been handled in the past few decades, analyze the recent advancements in the field, and discuss potential directions for future research.
    Comment: Accepted to the International Conference on Image Processing Theory, Tools and Applications (IPTA 2017).

    Hybrid Fusion for Biometrics: Combining Score-level and Decision-level Fusion

    A general framework for fusion at the decision level, which operates on ROCs instead of matching scores, is investigated. Under this framework, we further propose a hybrid fusion method that combines score-level and decision-level fusion, taking advantage of both fusion modes. The hybrid fusion adaptively tunes itself between the two levels of fusion and improves the final performance over either level alone. The proposed hybrid fusion is simple and effective for combining different biometrics.
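
    A minimal sketch of the two fusion modes being combined, assuming two matchers that output normalised similarity scores; the ROC-based optimisation and the adaptive switching rule of the paper are not reproduced here.

        import numpy as np

        def score_level(s1, s2, w=0.5):
            # Score-level fusion: weighted sum of normalised matcher scores,
            # thresholded afterwards like any single matcher's score.
            return w * s1 + (1 - w) * s2

        def decision_level_or(s1, s2, t1, t2):
            # Decision-level fusion: OR of the two matchers' accept decisions.
            return (s1 >= t1) | (s2 >= t2)

        def hybrid_accept(s1, s2, use_score_level, t_fused, t1, t2):
            # Toy hybrid rule: a validation step (not shown) decides, per
            # operating point, which of the two fusion modes is applied.
            if use_score_level:
                return score_level(s1, s2) >= t_fused
            return decision_level_or(s1, s2, t1, t2)

        # Toy usage with two score vectors:
        s1, s2 = np.array([0.8, 0.3, 0.6]), np.array([0.7, 0.4, 0.2])
        print(hybrid_accept(s1, s2, use_score_level=True, t_fused=0.5, t1=0.6, t2=0.6))
        print(hybrid_accept(s1, s2, use_score_level=False, t_fused=0.5, t1=0.6, t2=0.6))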

    One-to-many face recognition with bilinear CNNs

    The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (B-CNN), which has shown dramatic performance gains on certain fine-grained recognition problems [15]. We apply this architecture to a challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [12], which features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically by a computerized face detection system, the benchmark does not have the bias inherent in such databases. We demonstrate the performance of the B-CNN model beginning from an AlexNet-style network pre-trained on ImageNet. We then show results for fine-tuning using a moderate-sized, public external database, FaceScrub [17]. We also present results with additional fine-tuning on the limited training data provided by the protocol. In each case, the fine-tuned bilinear model shows substantial improvements over the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a large face database, the recently released VGG-Face model [20], can be converted into a B-CNN without any additional feature training. This B-CNN improves upon the CNN performance on the IJB-A benchmark, achieving 89.5% rank-1 recall.
    Comment: Published version at WACV 2016.
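
    The bilinear pooling step that turns a standard CNN into a B-CNN can be written in a few lines. The sketch below follows the common B-CNN recipe (outer product, signed square root, L2 normalisation) and uses random arrays in place of real convolutional feature maps; it is not taken from the paper's exact configuration.

        import numpy as np

        def bilinear_pool(fmap_a, fmap_b):
            # Outer product of the two feature maps' channel vectors at each
            # spatial location, averaged over locations, then signed square
            # root and L2 normalisation of the flattened descriptor.
            ca, h, w = fmap_a.shape
            a = fmap_a.reshape(ca, h * w)
            b = fmap_b.reshape(fmap_b.shape[0], h * w)
            phi = a @ b.T / (h * w)
            phi = np.sign(phi) * np.sqrt(np.abs(phi))
            phi = phi.ravel()
            return phi / (np.linalg.norm(phi) + 1e-12)

        # Stand-in conv outputs of shape (channels, height, width); in a real
        # B-CNN these come from the last convolutional layer(s) of the network.
        desc = bilinear_pool(np.random.rand(64, 7, 7), np.random.rand(64, 7, 7))
        print(desc.shape)  # (4096,)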

    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology, however, prevents such systems from operating satisfactorily under adverse conditions. A proposed framework for person recognition successfully combines different biometric modalities, as borne out in two case studies.

    Gender and Ethnicity Classification Using Partial Face in Biometric Applications

    As the number of biometric applications increases, the use of non-ideal information such as images which are not strictly controlled, images taken covertly, or images where the main interest is partially occluded also increases. Face images are a specific example of this. In these non-ideal instances, other information, such as gender and ethnicity, can be determined to narrow the search space and/or improve the recognition results. Some research exists for gender classification using partial-face images, but there is little research involving ethnicity classification on such images. Few datasets have had the ethnic diversity needed, and sufficient subjects per ethnicity, to perform this evaluation. Research is also lacking on how gender and ethnicity classification on partial-face images is impacted by age. If the extracted gender and ethnicity information is to be integrated into a larger system, some measure of the reliability of the extracted information is needed. This study provides an analysis of gender and ethnicity classification on large datasets captured by non-researchers under day-to-day operations, using texture, color, and shape features extracted from partial-face regions. This analysis allows for a greater understanding of the limitations of various facial regions for gender and ethnicity classification. These limitations will guide the integration of automatically extracted partial-face gender and ethnicity information with a biometric face application in order to improve recognition under non-ideal circumstances. Overall, the results from this work showed that reliable gender and ethnicity classification can be achieved from partial-face images. Different regions of the face hold varying amounts of gender and ethnicity information. For machine classification, the upper face regions hold more ethnicity information while the lower face regions hold more gender information. All regions were impacted by age, but the eyes were impacted the most in texture and color. The shape of the nose changed more with respect to age than any of the other regions.
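
    A hedged sketch of one possible per-region pipeline of the kind described above, assuming an already cropped and aligned face image; the region boundaries, LBP parameters, and classifier choice are illustrative, not the study's exact setup.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        # Hypothetical partial-face regions as fractional row bands of an
        # aligned face image.
        REGIONS = {"eyes": (0.20, 0.45), "nose": (0.40, 0.65), "mouth": (0.60, 0.85)}

        def region_features(gray, rgb, region):
            # Texture (uniform-LBP histogram) plus colour (per-channel means)
            # for one partial-face region; shape features are omitted here.
            top, bot = REGIONS[region]
            h = gray.shape[0]
            g = gray[int(top * h):int(bot * h)]
            c = rgb[int(top * h):int(bot * h)]
            lbp = local_binary_pattern(g, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            return np.concatenate([hist, c.reshape(-1, 3).mean(axis=0) / 255.0])

        # Hypothetical per-region gender classifier:
        # X = np.stack([region_features(g, c, "eyes") for g, c in face_pairs])
        # clf = SVC().fit(X, gender_labels)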