2,363 research outputs found

    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology, however, prevents such systems from operating satisfactorily under adverse conditions. The proposed framework successfully combines different biometric modalities to achieve person recognition, as borne out in two case studies.

    Robust multi-modal and multi-unit feature level fusion of face and iris biometrics

    Multi-biometrics has recently emerged as a means of achieving more robust and efficient personal verification and identification. By exploiting information from multiple sources at various levels, i.e., feature, score, rank, or decision, the false acceptance and false rejection rates can be considerably reduced. Among these, feature-level fusion is a relatively understudied problem. This paper addresses feature-level fusion for multi-modal and multi-unit sources of information. For multi-modal fusion the face and iris biometric traits are considered, while multi-unit fusion is applied to merge the data from the left and right iris images. The proposed approach computes SIFT features from both biometric sources, whether multi-modal or multi-unit. For each source, the extracted SIFT features are selected via spatial sampling. The selected features are then concatenated into a single feature super-vector using serial fusion, and this concatenated vector is used to perform classification. Experimental results on standard face and iris biometric databases are presented. The reported results clearly show the performance improvements in classification obtained by applying feature-level fusion for both multi-modal and multi-unit biometrics in comparison to uni-modal classification and score-level fusion.
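
    As a rough illustration of the pipeline described in this abstract, the sketch below extracts SIFT descriptors with OpenCV, applies a simple grid-based spatial sampling rule (a placeholder for the paper's selection scheme, which is not detailed here), and serially concatenates the selected descriptors from face and iris images into one super-vector. The filenames, grid size, and per-cell limit are hypothetical.

# Minimal sketch of SIFT-based feature-level fusion (face + iris), assuming
# OpenCV's SIFT implementation. The grid-based spatial sampling below is a
# placeholder, not the paper's exact selection scheme.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def sift_descriptors(gray_img, grid=32, per_cell=1):
    """Extract SIFT descriptors, keeping at most `per_cell` keypoints per grid cell."""
    keypoints, descriptors = sift.detectAndCompute(gray_img, None)
    if descriptors is None:
        return np.empty((0, 128), dtype=np.float32)
    kept, counts = [], {}
    for kp, desc in zip(keypoints, descriptors):
        cell = (int(kp.pt[0]) // grid, int(kp.pt[1]) // grid)  # spatial sampling
        if counts.get(cell, 0) < per_cell:
            counts[cell] = counts.get(cell, 0) + 1
            kept.append(desc)
    return np.asarray(kept, dtype=np.float32)

def fuse_serial(*images):
    """Serial fusion: concatenate the selected descriptors into one super-vector.

    In practice a fixed number of keypoints per source would be kept so that
    all super-vectors have the same length for classification.
    """
    parts = [sift_descriptors(img).ravel() for img in images]
    return np.concatenate(parts)

# Hypothetical usage: face plus left and right iris images of one subject.
face   = cv2.imread("face.png",       cv2.IMREAD_GRAYSCALE)
iris_l = cv2.imread("iris_left.png",  cv2.IMREAD_GRAYSCALE)
iris_r = cv2.imread("iris_right.png", cv2.IMREAD_GRAYSCALE)
super_vector = fuse_serial(face, iris_l, iris_r)  # fed to a classifier downstream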

    Complementary Feature Level Data Fusion for Biometric Authentication Using Neural Networks

    Data fusion as a formal research area is referred to as multi-sensor data fusion. The premise is that combined data from multiple sources can provide more meaningful, accurate and reliable information than data from a single source. There are many application areas in military and security as well as civilian domains. Multi-sensor data fusion as applied to biometric authentication is termed multi-modal biometrics. Though based on similar premises, and having many similarities to formal data fusion, multi-modal biometrics differs in relation to data fusion levels. The objective of the current study was to apply feature-level fusion of fingerprint features and keystroke dynamics data for authentication purposes, utilizing Artificial Neural Networks (ANNs) as a classifier. Data fusion was performed adopting the complementary paradigm, which utilizes all processed data from both sources. Experimental results returned a false acceptance rate (FAR) of 0.0 and a worst-case false rejection rate (FRR) of 0.0004, a worst-case performance at least as good as most other research in the field. The experimental results also demonstrated that data fusion gave a better outcome than either fingerprint or keystroke dynamics alone.
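
    A minimal sketch of the complementary feature-level fusion idea follows, assuming per-sample fingerprint and keystroke-dynamics feature vectors are concatenated and fed to a scikit-learn MLP standing in for the ANN. The feature dimensions, synthetic data, and network size are illustrative assumptions, not the authors' setup.

# Sketch of complementary feature-level fusion of fingerprint and keystroke
# features with an ANN classifier. All data here is synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 200

# Placeholder features: e.g. minutiae-derived fingerprint vectors and
# keystroke hold/flight-time vectors for the same attempts.
fingerprint = rng.normal(size=(n_samples, 64))
keystroke   = rng.normal(size=(n_samples, 20))
labels      = rng.integers(0, 2, size=n_samples)  # 1 = genuine, 0 = impostor

# Complementary paradigm: keep all processed data from both sources by
# concatenating the per-sample feature vectors before classification.
fused = np.hstack([fingerprint, keystroke])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
far = np.mean(pred[y_test == 0] == 1)  # false acceptance rate
frr = np.mean(pred[y_test == 1] == 0)  # false rejection rate
print(f"FAR={far:.4f}  FRR={frr:.4f}")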

    A Robust Speaking Face Modelling Approach Based on Multilevel Fusion


    Bimodal Biometric Verification Mechanism using fingerprint and face images (BBVMFF)

    Recent times have seen an increased demand for biometric authentication coupled with the automation of systems. Biometric recognition systems in current use generally consider only a single biometric characteristic for verification or authentication. Researchers have demonstrated the inefficiencies of unimodal biometric systems and advocated the adoption of multimodal biometric systems for verification. This paper introduces a Bi-modal Biometric Verification Mechanism using Fingerprint and Face images (BBVMFF). The BBVMFF considers the frontal face and fingerprint biometric characteristics of users for verification, using both Gabor phase and magnitude features as biometric trait definitions and a simple, lightweight feature-level fusion algorithm. The proposed fusion algorithm allows the BBVMFF to operate in both unimodal and bi-modal modes, as demonstrated by the experimental results presented.
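
    A rough sketch of the Gabor phase and magnitude feature extraction with simple concatenation-based feature-level fusion is given below, using OpenCV's Gabor kernels. The filter parameters, downsampling step, and filenames are illustrative assumptions rather than the BBVMFF's actual configuration.

# Sketch of Gabor phase + magnitude feature extraction and feature-level
# (concatenation) fusion of face and fingerprint images. Parameters are
# illustrative, not the paper's.
import cv2
import numpy as np

def gabor_features(gray_img, orientations=4, ksize=21, downsample=8):
    """Return concatenated Gabor magnitude and phase features for one image."""
    feats = []
    img = gray_img.astype(np.float32)
    for i in range(orientations):
        theta = i * np.pi / orientations
        # Real (cosine) and imaginary (sine) Gabor kernels at this orientation.
        k_real = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        k_imag = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=np.pi / 2)
        real = cv2.filter2D(img, -1, k_real)
        imag = cv2.filter2D(img, -1, k_imag)
        magnitude = np.sqrt(real**2 + imag**2)[::downsample, ::downsample]
        phase     = np.arctan2(imag, real)[::downsample, ::downsample]
        feats.append(magnitude.ravel())
        feats.append(phase.ravel())
    return np.concatenate(feats)

# Hypothetical usage: fuse face and fingerprint features into one vector.
face        = cv2.imread("face.png",        cv2.IMREAD_GRAYSCALE)
fingerprint = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)
fused_vector = np.concatenate([gabor_features(face), gabor_features(fingerprint)])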

    Biometric liveness checking using multimodal fuzzy fusion
