1,701 research outputs found

    Transparent authentication: Utilising heart rate for user authentication

    Get PDF
There has been exponential growth in the use of wearable technologies in the last decade, with smart watches taking a large share of the market. Smart watches were primarily used for health and fitness purposes, but recent years have seen a rise in their deployment in other areas. Recent smart watches are fitted with sensors with enhanced functionality and capabilities. For example, some function as standalone devices with the ability to create activity logs and transmit data to a secondary device. This capability has contributed to their increased usage in recent years, with researchers focusing on their potential. This paper explores the ability to extract physiological data from smart watch technology to achieve user authentication. The approach is suitable not only because of the capacity for data capture but also because of easy connectivity with other devices, principally the smartphone. For the purpose of this study, heart rate data is captured and extracted from 30 subjects continuously over an hour. While security is the ultimate goal, usability should also be a key consideration. Most bioelectrical signals, like heart rate, are non-stationary time-dependent signals; therefore, the Discrete Wavelet Transform (DWT) is employed. DWT decomposes the bioelectrical signal into n levels of detail-coefficient and approximation-coefficient sub-bands. A biorthogonal wavelet (bior4.4) is applied to extract features from the four levels of detail coefficients. Ten statistical features are extracted from each level of the coefficient sub-band. Classification of each sub-band level is done using a Feedforward Neural Network (FF-NN). The 1st, 2nd, 3rd and 4th levels had an Equal Error Rate (EER) of 17.20%, 18.17%, 20.93% and 21.83% respectively. To improve the EER, fusion of the four sub-band levels is applied at the feature level. The proposed fusion showed an improved result over the initial results, with an EER of 11.25%. While an 11% EER is not ideal for a one-off authentication decision, its use on a continuous basis makes the approach more than feasible in practice.
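
A minimal sketch of the feature-extraction pipeline this abstract describes, using PyWavelets: a 4-level DWT with the bior4.4 wavelet, ten statistics per detail-coefficient sub-band, and feature-level fusion by concatenation. The abstract does not list the ten statistics, so the set below is an assumption for illustration.

```python
# Sketch of the DWT feature extraction described above (statistics assumed).
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def subband_features(c):
    """Ten summary statistics for one detail-coefficient sub-band (assumed set)."""
    c = np.asarray(c, dtype=float)
    return np.array([
        c.mean(), c.std(), c.var(), c.min(), c.max(),
        np.median(c), np.sqrt(np.mean(c ** 2)),  # RMS
        skew(c), kurtosis(c), np.sum(c ** 2),    # energy
    ])

def heart_rate_features(signal, wavelet="bior4.4", levels=4):
    """4-level DWT; features from each detail sub-band, fused by concatenation."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)  # [cA4, cD4, cD3, cD2, cD1]
    return np.concatenate([subband_features(d) for d in coeffs[1:]])

# Example: one hour of heart-rate samples (simulated here at 1 Hz)
hr = np.random.default_rng(0).normal(72, 5, size=3600)
print(heart_rate_features(hr).shape)  # (40,) -> input to the FF-NN classifier
```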

    Biometric Authentication System on Mobile Personal Devices

    Get PDF
We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of the following five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face authentication task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system. Both theoretical and practical considerations are taken into account. The final system is able to achieve an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well suited to a wide range of security applications.
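
A minimal sketch of the five-module pipeline, with placeholder implementations (center crop, index-based resize, zero-mean normalization, cosine similarity, score averaging); the paper's concrete detector, normalization method and matcher are not specified in the abstract.

```python
# Placeholder five-stage pipeline mirroring the module list above.
import numpy as np

def detect_face(image):
    """1) Face detection: placeholder that crops the central region."""
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def register_face(face, size=(64, 64)):
    """2) Face registration: placeholder resample to a canonical grid."""
    ys = np.linspace(0, face.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, face.shape[1] - 1, size[1]).astype(int)
    return face[np.ix_(ys, xs)]

def normalize_illumination(face):
    """3) Illumination normalization: zero mean, unit variance."""
    return (face - face.mean()) / (face.std() + 1e-8)

def verify(face, template):
    """4) Face verification: cosine similarity against the enrolled template."""
    a, b = face.ravel(), template.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def authenticate(frames, template, threshold=0.5):
    """5) Information fusion: average the per-frame verification scores."""
    scores = [verify(normalize_illumination(register_face(detect_face(f))), template)
              for f in frames]
    return float(np.mean(scores)) >= threshold

rng = np.random.default_rng(0)
enrolled = normalize_illumination(register_face(detect_face(rng.random((128, 128)))))
print(authenticate([rng.random((128, 128)) for _ in range(3)], enrolled))
```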

    BehavePassDB: Public Database for Mobile Behavioral Biometrics and Benchmark Evaluation

    Full text link
Mobile behavioral biometrics have become a popular topic of research, reaching promising results in terms of authentication by exploiting a multimodal combination of touchscreen and background sensor data. However, there is no way of knowing whether state-of-the-art classifiers in the literature can distinguish between the notion of user and device. In this article, we present a new database, BehavePassDB, structured into separate acquisition sessions and tasks to mimic the most common aspects of mobile Human-Computer Interaction (HCI). BehavePassDB is acquired through a dedicated mobile app installed on the subjects' devices, also including the case of different users on the same device for evaluation. We propose a standard experimental protocol and benchmark for the research community to perform a fair comparison of novel approaches with the state of the art. We propose and evaluate a system based on a Long Short-Term Memory (LSTM) architecture with triplet loss and modality fusion at the score level. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 860315, and from Orange Labs. R. Tolosana and R. Vera-Rodriguez are also supported by INTER-ACTION (PID2021-126521OB-I00 MICINN/FEDER).
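
A hedged sketch of the kind of system the abstract describes: an LSTM that embeds a sensor sequence, trained with triplet loss, with per-modality scores fused by averaging. Input dimensions, margin and the fusion rule are assumptions, not the paper's settings.

```python
# Illustrative LSTM embedder with triplet loss (PyTorch); dimensions assumed.
import torch
import torch.nn as nn

class SequenceEmbedder(nn.Module):
    def __init__(self, in_dim, hidden=64, emb=32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, emb)

    def forward(self, x):              # x: (batch, time, in_dim)
        _, (h, _) = self.lstm(x)       # final hidden state summarizes the sequence
        return nn.functional.normalize(self.head(h[-1]), dim=1)

model = SequenceEmbedder(in_dim=6)     # e.g., 3-axis accelerometer + gyroscope
triplet = nn.TripletMarginLoss(margin=0.2)

# Anchor/positive from the same user, negative from a different user.
a, p, n = (torch.randn(8, 100, 6) for _ in range(3))
loss = triplet(model(a), model(p), model(n))
loss.backward()

def fuse(scores):
    """Score-level fusion: average the per-modality similarity scores."""
    return sum(scores) / len(scores)
```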

    Investigating the impact of combining handwritten signature and keyboard keystroke dynamics for gender prediction

    Get PDF
© 2019 IEEE. The use of soft-biometric data as an auxiliary tool in user identification is already well known. Gender, hand orientation and emotional state are some examples of what can be called soft biometrics. These soft-biometric data can be predicted directly from the biometric templates. It is very common to find research using physiological modalities for soft-biometric prediction, but behavioural biometrics are often not well explored in this context. Among the behavioural biometric modalities, keystroke dynamics and handwritten signature have been widely explored for user identification, including some soft-biometric predictions. However, in these modalities, soft-biometric prediction is usually done individually. To fill this gap, this study investigates whether the combination of these two biometric modalities can impact the performance of soft-biometric prediction, specifically gender prediction. The main aim is to assess the impact of combining data from two different biometric sources on gender prediction. Our findings indicated gains in performance for gender prediction when combining these two biometric modalities, compared to using each individually.
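
A minimal sketch of combining the two modalities for gender prediction, assuming precomputed keystroke-dynamics and signature feature vectors per sample; the abstract does not name the classifier, so a random forest stands in here purely for illustration, on synthetic data.

```python
# Illustrative feature-level combination of two biometric modalities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
keystroke = rng.normal(size=(n, 20))   # e.g., hold-time / flight-time statistics
signature = rng.normal(size=(n, 15))   # e.g., stroke speed / pressure statistics
gender = rng.integers(0, 2, size=n)    # synthetic 0/1 labels

combined = np.hstack([keystroke, signature])        # combine both modalities
clf = RandomForestClassifier(random_state=0)
print(cross_val_score(clf, combined, gender, cv=5).mean())  # vs. each alone
```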

    Biometric presentation attack detection: beyond the visible spectrum

    Full text link
The increased need for unattended authentication in multiple scenarios has motivated a wide deployment of biometric systems in the last few years. This has in turn led to the disclosure of security concerns specifically related to biometric systems. Among them, presentation attacks (PAs, i.e., attempts to log into the system with a fake biometric characteristic or presentation attack instrument) pose a severe threat to the security of the system: any person could eventually fabricate or order a gummy finger or face mask to impersonate someone else. In this context, we present a novel fingerprint presentation attack detection (PAD) scheme based on i) a new capture device able to acquire images within the short wave infrared (SWIR) spectrum, and ii) an in-depth analysis of several state-of-the-art techniques based on both handcrafted and deep learning features. The approach is evaluated on a database comprising over 4700 samples, stemming from 562 different subjects and 35 different presentation attack instrument (PAI) species. The results show the soundness of the proposed approach, with a detection equal error rate (D-EER) as low as 1.35% even in a realistic scenario where five different PAI species are considered only for testing purposes (i.e., unknown attacks).
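
For reference, a short sketch of how a detection equal error rate (D-EER) like the one reported can be computed from PAD scores: the operating point where the rate of accepted attacks equals the rate of rejected bona fide samples. The score convention (higher = more bona-fide-like) is an assumption.

```python
# D-EER from bona fide and attack PAD scores (higher = more bona-fide-like).
import numpy as np

def detection_eer(bona_fide, attacks):
    ts = np.sort(np.concatenate([bona_fide, attacks]))
    apcer = np.array([np.mean(attacks >= t) for t in ts])   # attacks accepted
    bpcer = np.array([np.mean(bona_fide < t) for t in ts])  # bona fide rejected
    i = np.argmin(np.abs(apcer - bpcer))                    # closest to equality
    return (apcer[i] + bpcer[i]) / 2

rng = np.random.default_rng(0)
print(detection_eer(rng.normal(1.0, 0.5, 500), rng.normal(-1.0, 0.5, 500)))
```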
