Robust multi-modal and multi-unit feature level fusion of face and iris biometrics
Multi-biometrics has recently emerged as a means of more robust and efficient
personal verification and identification. By exploiting information from multiple
sources at various levels, i.e., feature, score, rank or decision, the false acceptance
and rejection rates can be considerably reduced. Among these, feature-level fusion
is a relatively understudied problem. This paper addresses feature-level
fusion for multi-modal and multi-unit sources of information. For multi-modal
fusion the face and iris biometric traits are considered, while the multi-unit fusion
is applied to merge the data from the left and right iris images. The proposed
approach computes the SIFT features from both biometric sources, either multi-
modal or multi-unit. For each source, the extracted SIFT features are selected via
spatial sampling. Then these selected features are finally concatenated together
into a single feature super-vector using serial fusion. This concatenated feature
vector is used to perform classification.
Experimental results from face and iris standard biometric databases are
presented. The reported results clearly show the performance improvements in
classification obtained by applying feature level fusion for both multi-modal and
multi-unit biometrics in comparison to uni-modal classification and score level
fusion
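The serial fusion described above amounts to selecting descriptors per source and concatenating them into one super-vector. A minimal sketch, assuming a simple grid-based spatial sampler as a stand-in for the paper's SIFT feature selection (the grid size, image size, and function names here are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def spatially_sample(keypoints, descriptors, grid=(4, 4), img_size=(128, 128)):
    """Keep one descriptor per grid cell (the first one found) -- a
    simplified stand-in for the paper's spatial sampling of SIFT features."""
    cell_h = img_size[0] / grid[0]
    cell_w = img_size[1] / grid[1]
    seen, selected = set(), []
    for (y, x), desc in zip(keypoints, descriptors):
        cell = (int(y // cell_h), int(x // cell_w))
        if cell not in seen:
            seen.add(cell)
            selected.append(desc)
    return np.array(selected)

def serial_fusion(*feature_sets):
    """Serial (concatenation-based) fusion: flatten each source's
    selected features and join them into a single super-vector."""
    return np.concatenate([f.ravel() for f in feature_sets])
```

The same `serial_fusion` call covers both cases in the abstract: multi-modal (face + iris descriptors) and multi-unit (left-iris + right-iris descriptors).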
Transparent authentication: Utilising heart rate for user authentication
There has been exponential growth in the use of wearable technologies in the last decade, with smart watches having a large share of the market. Smart watches were primarily used for health and fitness purposes, but recent years have seen a rise in their deployment in other areas. Recent smart watches are fitted with sensors with enhanced functionality and capabilities. For example, some function as standalone devices with the ability to create activity logs and transmit data to a secondary device. This capability has contributed to their increased usage in recent years, with researchers focusing on their potential. This paper explores the ability to extract physiological data from smart watch technology to achieve user authentication. The approach is suitable not only because of the capacity for data capture but also easy connectivity with other devices - principally the Smartphone. For the purpose of this study, heart rate data is captured and extracted from 30 subjects continually over an hour. While security is the ultimate goal, usability should also be a key consideration. Most bioelectrical signals like heart rate are non-stationary time-dependent signals; therefore, the Discrete Wavelet Transform (DWT) is employed. DWT decomposes the bioelectrical signal into n levels of detail-coefficient and approximation-coefficient sub-bands. The Biorthogonal Wavelet (bior 4.4) is applied to extract features from the four levels of detail coefficients. Ten statistical features are extracted from each level of the coefficient sub-band. Classification of each sub-band level is done using a Feedforward Neural Network (FF-NN). The 1st, 2nd, 3rd and 4th levels had an Equal Error Rate (EER) of 17.20%, 18.17%, 20.93% and 21.83% respectively. To improve the EER, fusion of the four sub-band levels is applied at the feature level.
The proposed fusion improved on the initial results, with an EER of 11.25%. While an 11% EER is not ideal as a one-off authentication decision, its use on a continuous basis makes this approach more than feasible in practice.
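The pipeline above decomposes the heart-rate signal into detail sub-bands and extracts statistics from each. A minimal sketch, using a Haar wavelet instead of the paper's bior 4.4 (so the example stays dependency-free) and five statistics per band rather than the paper's ten:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT. The paper uses bior 4.4 (e.g. via
    PyWavelets); Haar keeps this illustration self-contained."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:          # Haar pairs samples, so drop a trailing odd one
        s = s[:-1]
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    return approx, detail

def dwt_features(signal, levels=4):
    """Decompose to `levels` sub-bands and pull simple statistics from
    each detail band; the paper extracts ten statistics per band."""
    feats, approx = [], signal
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.extend([detail.mean(), detail.std(), detail.min(),
                      detail.max(), np.median(detail)])
    return np.array(feats)
```

Feature-level fusion of the four sub-bands then reduces to concatenating the per-level statistics, exactly as `dwt_features` already does, before handing the vector to the FF-NN classifier.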
Multimodal person recognition for human-vehicle interaction
Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. A proposed framework for achieving person recognition successfully combines different biometric modalities, borne out in two case studies.
When Face Recognition Meets with Deep Learning: an Evaluation of Convolutional Neural Networks for Face Recognition
Deep learning, in particular Convolutional Neural Network (CNN), has achieved
promising results in face recognition recently. However, it remains an open
question: why CNNs work well and how to design a 'good' architecture. The
existing works tend to focus on reporting CNN architectures that work well for
face recognition rather than investigating the reasons. In this work, we conduct
an extensive evaluation of CNN-based face recognition systems (CNN-FRS) on a
common ground to make our work easily reproducible. Specifically, we use the
public LFW (Labeled Faces in the Wild) database to train CNNs, unlike most
existing CNNs, which are trained on private databases. We propose three CNN
architectures which are the first reported architectures trained using LFW data.
This paper quantitatively compares the CNN architectures and evaluates the effect of
different implementation choices. We identify several useful properties of
CNN-FRS. For instance, the dimensionality of the learned features can be
significantly reduced without adverse effect on face recognition accuracy. In
addition, a traditional metric learning method exploiting CNN-learned features is
evaluated. Experiments show that two crucial factors for good CNN-FRS performance
are the fusion of multiple CNNs and metric learning. To make our work reproducible,
source code and models will be made publicly available.
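The two factors the abstract highlights, fusing features from multiple CNNs and applying a learned or fixed metric, can be sketched in a few lines. This is a generic illustration under the assumption that each CNN emits a fixed-length embedding; the normalisation scheme and cosine threshold are assumptions, not the paper's configuration:

```python
import numpy as np

def fuse_embeddings(embeddings):
    """Multi-CNN feature fusion: L2-normalise each network's embedding
    so no single CNN dominates, then concatenate into one vector."""
    normed = [e / np.linalg.norm(e) for e in embeddings]
    return np.concatenate(normed)

def verify(emb_a, emb_b, threshold=0.5):
    """Cosine-similarity verification on fused features; a trained
    metric (e.g. a learned Mahalanobis distance) would replace this."""
    sim = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return sim >= threshold, sim
```

The abstract's finding that feature dimensionality can be reduced without hurting accuracy would correspond here to projecting the fused vector (e.g. with PCA) before the `verify` step.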
An Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image
Biometrics-based personal identification is regarded as an effective method for automatically recognizing, with high confidence, a person's identity. A multimodal biometric system consolidates the evidence presented by multiple biometric sources and typically achieves better recognition performance compared to systems based on a single biometric modality. This paper proposes an authentication method for multimodal biometric identification using two traits, i.e., face and palmprint. The proposed system is designed for applications where the training data contains both a face and a palmprint. Integrating palmprint and face features increases the robustness of person authentication. The final decision is made by fusion at the matching-score-level architecture, in which feature vectors are created independently for the query measures and are then compared to the enrolment templates stored during database preparation. The multimodal biometric system is developed through fusion of face and palmprint recognition.
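Matching-score-level fusion, as described above, combines the face and palmprint matchers' scores rather than their features. A minimal sketch of one common realisation, min-max normalisation followed by a weighted sum; the weights and score ranges are illustrative assumptions, not the paper's values:

```python
import numpy as np

def minmax_norm(score, lo, hi):
    """Map a raw matcher score into [0, 1] given that matcher's
    observed score range (lo and hi are assumed calibration values)."""
    return (score - lo) / (hi - lo)

def score_fusion(face_score, palm_score, w_face=0.5,
                 face_range=(0.0, 100.0), palm_range=(0.0, 1.0)):
    """Weighted-sum fusion at the matching-score level. Each matcher is
    normalised to a common scale before the scores are combined; the
    fused score is then thresholded for the accept/reject decision."""
    f = minmax_norm(face_score, *face_range)
    p = minmax_norm(palm_score, *palm_range)
    return w_face * f + (1.0 - w_face) * p
```

Because each matcher runs independently against its own enrolment template, only the final scalar scores need to be exchanged, which is what makes score-level fusion a popular architecture.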
Privacy-Preserving Facial Recognition Using Biometric-Capsules
Indiana University-Purdue University Indianapolis (IUPUI)
In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method which addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable and interoperable in its secure feature fusion design.
In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication and identification systems. We compare the performance of unsecured, underlying biometric systems to the performance of the BC-embedded systems in order to directly demonstrate the minimal effects of the privacy-preserving BC scheme on underlying system performance. Notably, we demonstrate that, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems are able to achieve accuracies of 95.13% and 99.13% respectively. Furthermore, we also demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.
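The BC construction itself is beyond a short sketch, but the cancellability property it provides can be illustrated with a standard, generic technique: a user-keyed random projection of the embedding (this is NOT the BC scheme, only an illustration of what "cancellable" means; the key handling and dimensions are assumptions):

```python
import numpy as np

def cancellable_template(feature, user_key, out_dim=64):
    """Generic cancellable-template illustration (not the BC
    construction): project the biometric feature with a random matrix
    seeded by a per-user key. Revoking a compromised template means
    issuing a new key, which yields an entirely new template while the
    underlying biometric stays the same."""
    rng = np.random.default_rng(user_key)
    proj = rng.standard_normal((out_dim, feature.shape[0]))
    t = proj @ feature
    return t / np.linalg.norm(t)
```

Matching is then done between transformed templates, so the raw embedding (and the face it encodes) is never stored, which is the privacy property the BC scheme formalises and proves.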