33 research outputs found
Biohashing: two factor authentication featuring fingerprint data and tokenised random number
Human authentication is the security task of limiting access to physical locations or computer networks to those with authorisation. This is done by equipping authorised users with passwords or tokens, or by using their biometrics. Unfortunately, the first two suffer from a lack of security, as they are easily forgotten or stolen; even biometrics suffers from inherent limitations and specific security threats. A more practical approach is to combine two or more authentication factors to reap benefits in security, convenience, or both. This paper proposes a novel two-factor authenticator based on iterated inner products between a tokenised pseudo-random number and a user-specific fingerprint feature generated from an integrated wavelet and Fourier-Mellin transform, producing a set of user-specific compact codes coined BioHashing. BioHashing is highly tolerant of data-capture offsets, with the same user's fingerprint data resulting in highly correlated bitstrings. Moreover, there is no deterministic way to obtain the user-specific code without both the token containing the random data and the user's fingerprint feature. This protects us, for instance, against biometric fabrication, since changing the user-specific credential is as simple as changing the token containing the random data. BioHashing has significant functional advantages over biometrics alone, i.e. a zero equal-error-rate point and a clean separation of the genuine and impostor populations, thereby allowing elimination of false accepts without suffering an increased occurrence of false rejects. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved
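The discretisation step the abstract describes — iterated inner products between token-derived pseudo-random vectors and the biometric feature, followed by thresholding — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the feature vector, seed, code length `m`, and the zero threshold are all assumptions, and the real scheme extracts the feature via the integrated wavelet and Fourier-Mellin transform.

```python
import numpy as np

def biohash(feature: np.ndarray, token_seed: int, m: int) -> np.ndarray:
    """Sketch of BioHashing's discretisation: project the biometric
    feature onto m token-derived pseudo-random vectors and threshold
    each inner product to obtain a compact binary code."""
    rng = np.random.default_rng(token_seed)   # the token supplies the random data
    # Orthonormalise the random vectors (QR in place of Gram-Schmidt)
    basis, _ = np.linalg.qr(rng.standard_normal((feature.size, m)))
    projections = basis.T @ feature           # iterated inner products
    return (projections > 0).astype(np.uint8) # threshold tau = 0 (assumed)

# Toy fingerprint feature vector (illustrative values only)
feat = np.array([0.4, -1.2, 0.7, 2.1, -0.3, 0.9])
code_a = biohash(feat, token_seed=42, m=4)
```

Revocation falls out of the construction: issuing a new token (a new seed) yields an entirely new code from the same fingerprint, while the same user with the same token always reproduces the same bitstring.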
Face recognition using wavelet transform and non-negative matrix factorization
This paper demonstrates a novel subspace projection technique via Non-Negative Matrix Factorization (NMF) to represent human facial images in a low-frequency subband, realised through the wavelet transform. The wavelet transform (WT) is used to reduce noise and produce a representation in the low-frequency domain, making the facial images insensitive to facial expression and small occlusions. After wavelet decomposition, NMF is performed to produce region- or part-based representations of the images. Non-negativity is a useful constraint for generating expressiveness in the reconstruction of faces. Simulation results on the Essex and ORL databases show that the hybrid of NMF and the best wavelet filter yields a better verification rate and a shorter training time. Optimum results of 98.5% and 95.5% are obtained on the Essex and ORL databases, respectively. These results are compared with our baseline method, Principal Component Analysis (PCA)
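The NMF stage applied after wavelet decomposition can be sketched with the standard Lee-Seung multiplicative updates. This is a generic illustration under assumed parameters (rank, iteration count, random toy data standing in for a low-frequency subband), not the paper's configuration.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimising ||V - W H||_F.
    V: non-negative matrix (pixels x images); r: number of parts."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis (parts)
    return W, H

# Toy stand-in for a low-frequency wavelet subband of 10 face images
V = np.random.default_rng(1).random((64, 10))
W, H = nmf(V, r=4)
```

Because both factors stay non-negative, each image column of `V` is approximated as an additive combination of the basis columns of `W`, which is what gives NMF its part-based character.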
Bio-discretization: Biometrics authentication featuring face data and tokenised random number
With the wonders of the Internet and the promise of the worldwide information infrastructure, a highly secure authentication system is desirable. Biometrics has been deployed for this purpose as it is a unique identifier. However, it also suffers from inherent limitations and specific security threats such as biometric fabrication. To alleviate these liabilities, a combination of token and biometric for user authentication and verification is introduced. All user data is kept in the token, relieving users of the task of remembering passwords. The proposed framework is named Bio-Discretization. Bio-Discretization is performed on face-image features generated from Non-Negative Matrix Factorization (NMF) in the wavelet domain, producing a set of unique compact bitstrings through iterated inner products between a set of pseudo-random numbers and the face images. Bio-Discretization possesses high data-capture offset tolerance, with highly correlated bitstrings for intraclass data. This approach is highly desirable in a secure environment, and it outperforms the classic authentication scheme
Integration of wavelet packet decomposition and random secret in iris verification
This paper presents a novel method of integrating a user-specific secret pseudo-random number with iris features extracted via the Wavelet Packet Transform into a feature vector for iris verification. The proposed method improves on the traditional iris verification system, which depends solely on the iris feature. Security risks such as iris fabrication can be avoided through token replacement. Experimental results show that the proposed method has an encouraging performance. The optimum result is obtained for the mean-scheme Wavelet Packet Transform with a secret pseudo-random number using 2 levels of decomposition, which produces an excellent verification rate
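The pipeline sketched below illustrates the two ingredients the abstract combines: a 2-level wavelet packet decomposition summarised by subband means (the "mean scheme"), then mixing with a user-specific secret random projection and binarisation. Everything concrete here is an assumption for illustration — a Haar filter, a toy 1-D iris signal, and a sign threshold — not the paper's exact feature extraction or fusion rule.

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: returns (approximation, detail)."""
    x = x.reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def wp_mean_features(signal, levels=2):
    """Full wavelet packet tree (both branches split at every level);
    the 'mean scheme' summarises each leaf subband by its mean."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nodes = [band for node in nodes for band in haar_step(node)]
    return np.array([node.mean() for node in nodes])

def protect(features, secret_seed, m=8):
    """Mix the iris feature with a secret random matrix and binarise,
    so the template is revocable via token replacement."""
    rng = np.random.default_rng(secret_seed)
    R = rng.standard_normal((m, features.size))
    return (R @ features > 0).astype(np.uint8)

iris_row = np.sin(np.linspace(0, 4 * np.pi, 16))   # toy 1-D iris signal
code = protect(wp_mean_features(iris_row, levels=2), secret_seed=7)
```

If an iris template is compromised, re-enrolling with a fresh `secret_seed` invalidates the stolen code without requiring a new biometric — the token-replacement property the abstract relies on.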
Anchored kernel hashing for cancelable template protection for cross-spectral periocular data
Periocular characteristics are gaining prominence in biometric and surveillance systems that operate in either the NIR or the visible spectrum. While the ocular information can be well utilized, comparing images across spectra, such as Near-Infra-Red (NIR) versus Visible (VIS), remains a challenge. In addition, the ocular biometric templates from both the NIR and VIS domains need to be protected after feature extraction to avoid leakage or linkability of biometric data. In this work, we explore a new approach based on anchored kernel hashing to obtain a cancelable biometric template that is both discriminative for recognition purposes and privacy-preserving. The key benefit is that the proposed approach not only works for both the NIR and the visible spectrum, but can also be used with good accuracy for cross-spectral protected-template comparison. Through a set of experiments using a cross-spectral periocular database, we demonstrate performance of EER = 1.39% and EER = 1.61% for NIR and VIS protected templates, respectively. We further present a set of cross-spectral template comparisons, comparing protected templates from one spectrum to the other, to demonstrate the applicability of the proposed approach
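A simplified reading of anchored kernel hashing is: embed each feature vector by its kernel similarities to a fixed set of anchor points, then binarise with a key-dependent projection. The sketch below follows that reading under stated assumptions — the RBF kernel, `gamma`, the anchor set, the random sign projection, and the toy features are all illustrative choices, not the paper's method; sharing one anchor set across spectra is the assumed mechanism for cross-spectral comparability.

```python
import numpy as np

def rbf_kernel(X, anchors, gamma=0.5):
    """RBF similarity of each sample to a fixed anchor set."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def anchored_kernel_hash(X, anchors, key_seed, bits=16):
    """Sketch: kernel embedding against shared anchors, then a
    key-specific random projection and sign binarisation. The key
    makes the protected template cancelable; reusing the same
    anchors for NIR and VIS features gives a common embedding."""
    K = rbf_kernel(X, anchors)
    rng = np.random.default_rng(key_seed)
    P = rng.standard_normal((anchors.shape[0], bits))
    return (K @ P > 0).astype(np.uint8)

rng = np.random.default_rng(0)
anchors = rng.standard_normal((10, 5))   # anchors, e.g. drawn from training data
nir = rng.standard_normal((3, 5))        # stand-ins for periocular feature vectors
templates = anchored_kernel_hash(nir, anchors, key_seed=99)
```

Comparison then happens entirely between binary protected templates (e.g. by Hamming distance), so the raw periocular features never need to be stored.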
