154 research outputs found

    Multi-Level Liveness Verification for Face-Voice Biometric Authentication

    Get PDF
    In this paper we present the details of the multilevel liveness verification (MLLV) framework proposed for realizing a secure face-voice biometric authentication system that can thwart different types of audio and video replay attacks. The proposed MLLV framework, based on novel feature extraction and multimodal fusion approaches, uncovers the static and dynamic relationships between voice and face information in speaking faces and allows multiple levels of security. Experiments with three different speaking-face corpora, VidTIMIT, UCBN and AVOZES, show a significant improvement in system performance in terms of DET curves and equal error rates (EER) for different types of replay and synthesis attacks.
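    The equal error rate reported above is the operating point where the false accept rate (FAR) equals the false reject rate (FRR). A minimal, generic sketch of how an EER can be estimated from genuine and impostor score sets follows; this is not the paper's implementation, and the score values are illustrative only:

    ```python
    import numpy as np

    def eer(genuine, impostor):
        """Approximate the equal error rate: sweep thresholds over all
        observed scores and return (FAR + FRR) / 2 at the threshold
        where |FAR - FRR| is smallest."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        best_gap, best_eer = 1.0, 0.0
        for t in thresholds:
            far = np.mean(impostor >= t)  # impostors wrongly accepted
            frr = np.mean(genuine < t)    # genuine users wrongly rejected
            if abs(far - frr) < best_gap:
                best_gap, best_eer = abs(far - frr), (far + frr) / 2
        return best_eer

    # Illustrative, well-separated score sets give an EER of 0.
    genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
    impostor = np.array([0.1, 0.3, 0.2, 0.4, 0.15])
    print(eer(genuine, impostor))  # 0.0
    ```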

    Hand Geometry Techniques: A Review

    Full text link
    Volume 2, Issue 11 (November 2014)

    A Bimodal Biometric Student Attendance System

    Get PDF
    Many attempts have been made to use biometrics in class attendance systems. Most of the implemented biometric attendance systems are unimodal. Unimodal biometric systems may be spoofed easily, leading to a reduction in recognition accuracy. This paper explores the use of bimodal biometrics to improve the recognition accuracy of automated student attendance systems. The system uses the face and fingerprint to take students' attendance. The students' faces were captured using a webcam and preprocessed by converting the colour images to greyscale images. The greyscale images were then normalized to reduce noise. The Principal Component Analysis (PCA) algorithm was used for facial feature extraction, while a Support Vector Machine (SVM) was used for classification. Fingerprints were captured using a fingerprint reader. A thinning algorithm digitized and extracted the minutiae from the scanned fingerprints. The logical OR technique was used to fuse the two biometric decisions at the decision level. The fingerprint templates and facial images of each user were stored along with their particulars in a database. The implemented system had a minimum recognition accuracy of 87.83%.
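    The decision-level OR fusion described above can be sketched in a few lines. The threshold values and score ranges below are illustrative assumptions, not taken from the paper:

    ```python
    # Hypothetical per-modality acceptance thresholds (illustrative values).
    FACE_THRESHOLD = 0.5
    FINGERPRINT_THRESHOLD = 0.5

    def fused_decision(face_score: float, fingerprint_score: float) -> bool:
        """Decision-level OR fusion: mark the student present if either
        the face (SVM) score or the fingerprint match score clears its
        modality's threshold."""
        face_accept = face_score >= FACE_THRESHOLD
        fingerprint_accept = fingerprint_score >= FINGERPRINT_THRESHOLD
        return face_accept or fingerprint_accept
    ```

    The OR rule trades security for availability: a student is accepted when either modality matches, which reduces false rejections (e.g. a smudged fingerprint) at the cost of a higher false accept rate than an AND rule.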

    A New Hand Based Biometric Modality & An Automated Authentication System

    Get PDF
    With the increased adoption of smartphones, security has become important like never before. Smartphones store confidential information and carry out sensitive financial transactions. Biometric sensors such as fingerprint scanners are built into smartphones to cater to security concerns. However, due to the limited size of smartphones, miniaturised sensors are used to capture the biometric data from the user. Other hand-based biometric modalities, like hand veins and finger veins, need specialised thermal/IR sensors, which add to the overall cost of the system. In this paper, we introduce a new hand-based biometric modality called the fistprint. Fistprints can be captured using the digital camera available in any smartphone. In this work, our contributions are: i) we propose a new non-touch and non-invasive hand-based biometric modality called the fistprint. The fistprint contains many distinctive elements such as fist shape, fist size, finger shape and size, knuckles, fingernails, palm crease/wrinkle lines, etc. ii) We prepare a fistprint database for the first time. We collected fistprint information from twenty individuals, both male and female, aged 23 to 45 years. Four images of each hand's fist (160 images in total) were taken for this purpose. iii) We propose the Fistprint Automatic Authentication SysTem (FAAST). iv) We implement the FAAST system on a Samsung Galaxy smartphone running Android, with the server side on a Windows machine, and validate the effectiveness of the proposed modality. The experimental results show the effectiveness of the fistprint as a biometric, with a GAR of 97.5% at 1.0% FAR.

    A Robust Speaking Face Modelling Approach Based on Multilevel Fusion

    Get PDF

    New Mobile Phone and Webcam Hand Images Databases for Personal Authentication and Identification

    Get PDF
    In this work we created two hand image databases, using mobile phone cameras and webcams. The major goal of these databases is to support research on person authentication/identification using hand biometrics, decreasing the need for expensive hand scanners. Both databases consist of 3000 hand images (3 sessions × 5 images per person × 200 persons) and are freely available to download. The test protocol is defined for both databases; simple experiments were conducted using the same protocol. The results were encouraging for most of the persons (accuracy was greater than 80%), except for those who rotated their hands in an exaggerated manner in all directions.

    Classification and fusion methods for multimodal biometric authentication.

    Get PDF
    Ouyang, Hua. Thesis (M.Phil.), Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 81-89). Abstracts in English and Chinese.
    Contents:
    1 Introduction
      1.1 Biometric Authentication
      1.2 Multimodal Biometric Authentication
        1.2.1 Combination of Different Biometric Traits
        1.2.2 Multimodal Fusion
      1.3 Audio-Visual Bi-modal Authentication
      1.4 Focus of This Research
      1.5 Organization of This Thesis
    2 Audio-Visual Bi-modal Authentication
      2.1 Audio-visual Authentication System
        2.1.1 Why Audio and Mouth?
        2.1.2 System Overview
      2.2 XM2VTS Database
      2.3 Visual Feature Extraction
        2.3.1 Locating the Mouth
        2.3.2 Averaged Mouth Images
        2.3.3 Averaged Optical Flow Images
      2.4 Audio Features
      2.5 Video Stream Classification
      2.6 Audio Stream Classification
      2.7 Simple Fusion
    3 Weighted Sum Rules for Multi-modal Fusion
      3.1 Measurement-Level Fusion
      3.2 Product Rule and Sum Rule
        3.2.1 Product Rule
        3.2.2 Naive Sum Rule (NS)
        3.2.3 Linear Weighted Sum Rule (WS)
      3.3 Optimal Weights Selection for WS
        3.3.1 Independent Case
        3.3.2 Identical Case
      3.4 Confidence Measure Based Fusion Weights
    4 Regularized k-Nearest Neighbor Classifier
      4.1 Motivations
        4.1.1 Conventional k-NN Classifier
        4.1.2 Bayesian Formulation of kNN
        4.1.3 Pitfalls and Drawbacks of kNN Classifiers
        4.1.4 Metric Learning Methods
      4.2 Regularized k-Nearest Neighbor Classifier
        4.2.1 Metric or Not Metric?
        4.2.2 Proposed Classifier: RkNN
        4.2.3 Hyperkernels and Hyper-RKHS
        4.2.4 Convex Optimization of RkNN
        4.2.5 Hyperkernel Construction
        4.2.6 Speeding up RkNN
      4.3 Experimental Evaluation
        4.3.1 Synthetic Data Sets
        4.3.2 Benchmark Data Sets
    5 Audio-Visual Authentication Experiments
      5.1 Effectiveness of Visual Features
      5.2 Performance of Simple Sum Rule
      5.3 Performances of Individual Modalities
      5.4 Identification Tasks Using Confidence-based Weighted Sum Rule
        5.4.1 Effectiveness of WS_M_C Rule
        5.4.2 WS_M_C vs. WS_M
      5.5 Speaker Identification Using RkNN
    6 Conclusions and Future Work
      6.1 Conclusions
      6.2 Important Follow-up Works
    Bibliography
    A Proof of Proposition 3.1
    B Proof of Proposition 3.2
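    The linear weighted sum rule (WS) covered in Chapter 3 of this thesis fuses per-modality match scores as a convex combination. A minimal sketch follows; the score and weight values are illustrative assumptions, not results from the thesis:

    ```python
    def weighted_sum(scores, weights):
        """Linear weighted sum rule: fused = sum_i w_i * s_i,
        with the weights constrained to sum to 1."""
        assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(w * s for w, s in zip(weights, scores))

    # The naive sum rule (NS) is the equal-weight special case.
    audio_score, visual_score = 0.8, 0.6  # illustrative match scores
    fused = weighted_sum([audio_score, visual_score], [0.5, 0.5])
    ```

    Choosing the weights is the crux: equal weights recover the naive sum rule, while the thesis studies selecting them optimally and deriving them from per-sample confidence measures.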