
    DeepKey: An EEG and Gait Based Dual-Authentication System

    Full text link
    Biometric authentication involves various technologies to identify individuals by exploiting their unique, measurable physiological and behavioral characteristics. However, traditional biometric authentication systems (e.g., face recognition, iris, retina, voice, and fingerprint) face an increasing risk of being tricked by tools such as anti-surveillance masks, contact lenses, vocoders, or fingerprint films. In this paper, we design a multimodal biometric authentication system named DeepKey, which uses both Electroencephalography (EEG) and gait signals to better protect against such risks. DeepKey consists of two key components: an Invalid ID Filter Model to block unauthorized subjects and an identification model based on an attention-based Recurrent Neural Network (RNN) that identifies a subject's EEG IDs and gait IDs in parallel. The subject is granted access only when all components produce consistent evidence matching the user's claimed identity. We implement DeepKey with a live deployment in our university and conduct extensive empirical experiments to study its technical feasibility in practice. DeepKey achieves a False Acceptance Rate (FAR) and a False Rejection Rate (FRR) of 0 and 1.0%, respectively. The preliminary results demonstrate that DeepKey is feasible, shows consistently superior performance compared to a set of methods, and has the potential to be applied to authentication deployments in real-world settings. Comment: 22 pages
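    To make the decision logic concrete, here is a minimal sketch of the dual-authentication gate the abstract describes: an invalid-ID filter followed by parallel EEG and gait identification, with access granted only when every component agrees with the claimed identity. The function names and signatures (authenticate, eeg_identify, gait_identify) are hypothetical placeholders rather than the paper's code, and the attention-based RNN models themselves are abstracted away.

```python
# Hypothetical sketch of a DeepKey-style dual-authentication decision:
# an invalid-ID filter, then parallel EEG and gait identification; access
# requires all outputs to agree with the claimed identity.

from typing import Callable, Set

def authenticate(claimed_id: str,
                 eeg_sample,              # raw EEG segment (placeholder)
                 gait_sample,             # raw gait segment (placeholder)
                 valid_ids: Set[str],
                 eeg_identify: Callable,  # returns a subject ID from EEG
                 gait_identify: Callable  # returns a subject ID from gait
                 ) -> bool:
    # Invalid ID Filter: reject claims for IDs not enrolled in the system.
    if claimed_id not in valid_ids:
        return False
    # Parallel identification from the two modalities.
    eeg_id = eeg_identify(eeg_sample)
    gait_id = gait_identify(gait_sample)
    # Grant access only when all components produce consistent evidence.
    return eeg_id == gait_id == claimed_id
```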

    BeCAPTCHA: Behavioral Bot Detection using Touchscreen and Mobile Sensors benchmarked on HuMIdb

    Full text link
    In this paper, we study the suitability of a new generation of CAPTCHA methods based on smartphone interaction. The heterogeneous flow of data generated during interaction with a smartphone can be used to model human behavior when interacting with the technology and to improve bot detection algorithms. To this end, we propose BeCAPTCHA, a CAPTCHA method based on the analysis of the touchscreen information obtained during a single drag and drop task in combination with accelerometer data. The goal of BeCAPTCHA is to determine whether the drag and drop task was performed by a human or a bot. We evaluate the method with fake samples synthesized using Generative Adversarial Neural Networks and handcrafted methods. Our results suggest the potential of mobile sensors to characterize human behavior and develop a new generation of CAPTCHAs. The experiments are conducted on HuMIdb (Human Mobile Interaction database), a novel multimodal mobile database that comprises 14 mobile sensors acquired from 600 users. HuMIdb is freely available to the research community. Comment: arXiv admin note: substantial text overlap with arXiv:2002.0091
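    As a rough illustration of the bot-detection setting described above, the sketch below trains a binary human-vs-bot classifier on handcrafted drag-and-drop and accelerometer features. The feature function, the random placeholder data, and the Random Forest classifier are assumptions for illustration; they are not the BeCAPTCHA pipeline or the HuMIdb data.

```python
# Illustrative sketch (placeholder data, not the authors' exact pipeline):
# train a binary human-vs-bot detector on handcrafted features extracted
# from a single drag-and-drop gesture plus accelerometer statistics.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def gesture_features(xy: np.ndarray, acc: np.ndarray) -> np.ndarray:
    """Handcrafted descriptors: touch trajectory (T x 2) + accelerometer (T x 3)."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    acc_mag = np.linalg.norm(acc, axis=1)
    return np.array([speed.mean(), speed.std(), speed.max(),
                     np.ptp(xy[:, 0]), np.ptp(xy[:, 1]),
                     acc_mag.mean(), acc_mag.std(), len(xy)])

# Placeholder feature matrices standing in for real human gestures and for
# fake gestures synthesized with handcrafted rules or a GAN.
rng = np.random.default_rng(0)
X_human = rng.normal(0.0, 1.0, size=(300, 8))
X_bot = rng.normal(0.5, 1.0, size=(300, 8))
X = np.vstack([X_human, X_bot])
y = np.array([1] * len(X_human) + [0] * len(X_bot))   # 1 = human, 0 = bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("bot-detection AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```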

    Authenticating users through their arm movement patterns

    Full text link
    In this paper, we propose four continuous authentication designs that use the characteristics of arm movements while individuals walk. The first design uses the acceleration of the arms captured by a smartwatch's accelerometer, the second uses the rotation of the arms captured by a smartwatch's gyroscope, the third fuses acceleration and rotation at the feature level, and the fourth fuses them at the score level. Each of these designs is implemented with four classifiers, namely k-nearest neighbors (k-NN) with Euclidean distance, Logistic Regression, Multilayer Perceptrons, and Random Forest, resulting in a total of sixteen authentication mechanisms. These mechanisms are tested under three environments: intra-session and inter-session on a dataset of 40 users, and inter-phase on a dataset of 12 users. The sessions of data collection were separated by at least ten minutes, whereas the phases were separated by at least three months. Under the intra-session environment, all of the twelve authentication mechanisms achieve a mean dynamic false accept rate (DFAR) of 0% and a dynamic false reject rate (DFRR) of 0%. For the inter-session environment, the feature-level fusion design with the k-NN classifier achieves the best error rates, with a mean DFAR of 2.2% and a DFRR of 4.2%. The DFAR and DFRR increased from 5.68% and 4.23% to 15.03% and 14.62%, respectively, when the feature-level fusion design with the k-NN classifier was tested under the inter-phase environment on the dataset of 12 users.
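    The difference between feature-level and score-level fusion mentioned above can be sketched in a few lines: the former concatenates the accelerometer and gyroscope feature vectors before a single k-NN, while the latter trains one k-NN per sensor and averages their scores. The placeholder data and the equal-weight score average are assumptions, not the paper's exact setup.

```python
# Minimal sketch (placeholder data) contrasting feature-level and
# score-level fusion of accelerometer and gyroscope features with k-NN.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n, d = 200, 10
X_acc, X_gyr = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)   # 1 = genuine user, 0 = impostor

# Feature-level fusion: concatenate features, train one classifier.
knn_feat = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn_feat.fit(np.hstack([X_acc, X_gyr]), y)

# Score-level fusion: one classifier per sensor, then combine the scores.
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(X_acc, y)
knn_gyr = KNeighborsClassifier(n_neighbors=5).fit(X_gyr, y)

def score_fusion(x_acc, x_gyr):
    """Average the genuine-class probabilities from the two sensor models."""
    p_acc = knn_acc.predict_proba(x_acc)[:, 1]
    p_gyr = knn_gyr.predict_proba(x_gyr)[:, 1]
    return (p_acc + p_gyr) / 2.0   # accept if above a chosen threshold
```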

    Sensor-based Continuous Authentication of Smartphones' Users Using Behavioral Biometrics: A Contemporary Survey

    Full text link
    Mobile devices and technologies have become increasingly popular, offering storage and computational capabilities comparable to desktop computers and allowing users to store and interact with sensitive and private information. The security and protection of such personal information are becoming increasingly important since mobile devices are vulnerable to unauthorized access or theft. User authentication is a task of paramount importance that grants access to legitimate users at the point of entry and continuously throughout the usage session. This task is made possible by today's smartphones' embedded sensors, which enable continuous and implicit user authentication by capturing behavioral biometrics and traits. In this paper, we survey more than 140 recent behavioral biometric-based approaches for continuous user authentication, including motion-based methods (28 studies), gait-based methods (19 studies), keystroke dynamics-based methods (20 studies), touch gesture-based methods (29 studies), voice-based methods (16 studies), and multimodal-based methods (34 studies). The survey provides an overview of the current state-of-the-art approaches for continuous user authentication using behavioral biometrics captured by smartphones' embedded sensors, including insights and open challenges for adoption, usability, and performance. Comment: 19 pages

    Active Authentication on Mobile Devices via Stylometry, Application Usage, Web Browsing, and GPS Location

    Full text link
    Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this study, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This dataset is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment, such as an organization where the unauthorized user of a device is likely to be an insider threat coming from within the organization. We consider four biometric modalities: (1) text entered via soft keyboard, (2) applications used, (3) websites visited, and (4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We characterize the performance of the system with respect to intruder detection time and quantify the contribution of each modality to the overall performance. Comment: Accepted for Publication in the IEEE Systems Journal
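    A parallel binary decision fusion architecture of the kind mentioned above can be illustrated with a simple voting rule: each per-modality classifier emits a binary genuine/impostor decision, and the fused decision accepts the user only if enough modalities agree. The majority-vote rule and the modality names below are illustrative assumptions, not necessarily the fusion rule used in the paper.

```python
# Illustrative parallel binary decision fusion: each modality classifier
# (keyboard text, app usage, web browsing, location) outputs a binary
# decision about whether the current user is the device owner; the fused
# decision here is a simple vote with a minimum number of agreements.

from typing import Dict

def fuse_decisions(decisions: Dict[str, bool], min_votes: int = 3) -> bool:
    """Accept the user if at least `min_votes` modalities vote 'genuine'."""
    return sum(decisions.values()) >= min_votes

decisions = {
    "keyboard_text": True,
    "app_usage": True,
    "web_browsing": False,
    "gps_location": True,
}
print("genuine user" if fuse_decisions(decisions) else "possible intruder")
```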

    Continuous Authentication Using One-class Classifiers and their Fusion

    Full text link
    While developing continuous authentication systems (CAS), we generally assume that samples from both the genuine and impostor classes are readily available. However, this assumption may not hold in certain circumstances. Therefore, we explore the possibility of implementing CAS using only genuine samples. Specifically, we investigate the usefulness of four one-class classifiers (OCC), namely elliptic envelope, isolation forest, local outlier factor, and one-class support vector machine, and their fusion. The performance of these classifiers was evaluated on four distinct behavioral biometric datasets and compared with eight multi-class classifiers (MCC). The results demonstrate that, given sufficient training data from the genuine user, the OCC and their fusion can closely match the performance of the majority of the MCC. Our findings encourage the research community to use OCC to build CAS, as they do not require knowledge of the impostor class during the enrollment process. Comment: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA), 978-1-5386-2248-3/18/$31.00 (c) 2018 IEEE
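    The four one-class classifiers named above all have scikit-learn implementations, so a hedged sketch of the genuine-samples-only setup with a simple score-level fusion looks as follows. The placeholder data, the fixed contamination/nu values, and the unweighted mean fusion are assumptions for illustration; in practice the per-classifier scores would be normalized and the threshold tuned on validation data.

```python
# Sketch of one-class continuous authentication using the four OCCs named
# above, trained only on genuine-user samples, plus a simple score fusion
# (mean of per-classifier decision scores). Data here is a placeholder.

import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
X_genuine = rng.normal(size=(300, 12))   # enrollment data, genuine user only
X_test = rng.normal(size=(50, 12))       # new session to authenticate

classifiers = [
    EllipticEnvelope(contamination=0.05),
    IsolationForest(contamination=0.05, random_state=0),
    LocalOutlierFactor(novelty=True),    # novelty=True enables scoring new data
    OneClassSVM(nu=0.05, gamma="scale"),
]
for clf in classifiers:
    clf.fit(X_genuine)

# Score-level fusion: average the decision scores (higher = more genuine).
scores = np.mean([clf.decision_function(X_test) for clf in classifiers], axis=0)
accepted = scores >= 0.0   # threshold would be tuned on validation data
```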

    A Comprehensive Overview of Biometric Fusion

    Full text link
    The performance of a biometric system that relies on a single biometric modality (e.g., fingerprints only) is often stymied by various factors such as poor data quality or limited scalability. Multibiometric systems utilize the principle of fusion to combine information from multiple sources in order to improve recognition accuracy whilst addressing some of the limitations of single-biometric systems. The past two decades have witnessed the development of a large number of biometric fusion schemes. This paper presents an overview of biometric fusion with specific focus on three questions: what to fuse, when to fuse, and how to fuse. A comprehensive review of techniques incorporating ancillary information in the biometric recognition pipeline is also presented. In this regard, the following topics are discussed: (i) incorporating data quality in the biometric recognition pipeline; (ii) combining soft biometric attributes with primary biometric identifiers; (iii) utilizing contextual information to improve biometric recognition accuracy; and (iv) performing continuous authentication using ancillary information. In addition, the use of information fusion principles for presentation attack detection and multibiometric cryptosystems is discussed. Finally, some of the research challenges in biometric fusion are enumerated. The purpose of this article is to provide readers with a comprehensive overview of the role of information fusion in biometrics. Comment: Accepted for publication in Information Fusion
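    As a small concrete example of the "how to fuse" question, the sketch below performs score-level fusion of two matchers with min-max normalization and a weighted sum. The scores, weights, and threshold are made-up placeholders, not values from the paper.

```python
# Minimal illustration of score-level fusion: min-max normalize each
# matcher's scores, then combine them with a weighted sum and threshold.

import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

# Placeholder match scores from two modalities for a batch of comparisons.
fingerprint_scores = np.array([0.82, 0.10, 0.55, 0.91])
face_scores = np.array([0.60, 0.20, 0.40, 0.88])

fused = (0.6 * min_max_normalize(fingerprint_scores)
         + 0.4 * min_max_normalize(face_scores))
decisions = fused >= 0.5   # accept/reject against a chosen threshold
```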

    BeCAPTCHA: Detecting Human Behavior in Smartphone Interaction using Multiple Inbuilt Sensors

    Full text link
    We introduce a novel multimodal mobile database called HuMIdb (Human Mobile Interaction database) that comprises 14 mobile sensors acquired from 600 users. The heterogeneous flow of data generated during interaction with a smartphone can be used to model human behavior when interacting with the technology. Based on this new dataset, we explore the capacity of smartphone sensors to improve bot detection. We propose a CAPTCHA method based on the analysis of the information obtained during a single drag and drop task. We evaluate the method with fake samples synthesized using Generative Adversarial Neural Networks and handcrafted methods. Our results suggest the potential of mobile sensors to characterize human behavior and develop a new generation of CAPTCHAs. Comment: AAAI-20 Workshop on Artificial Intelligence for Cyber Security (AICS), New York, NY, USA, February 2020. AICS-2020 Workshop
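    To complement the description above, here is a hedged sketch of what a handcrafted fake-sample generator for a drag-and-drop gesture might look like: a bot-like trajectory that moves in a near-straight line at near-constant velocity, unlike a human gesture. This generator is an assumption for illustration only, not the authors' synthesis method.

```python
# Hedged illustration of a "handcrafted" fake-gesture generator of the kind
# the abstract mentions for benchmarking bot detection: a bot-like drag and
# drop trajectory is a straight line at near-constant velocity with tiny noise.

import numpy as np

def synthetic_bot_trajectory(start, end, n_points=50, jitter=0.5):
    """Linear interpolation between touch-down and touch-up points plus noise."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    path = start + t * (end - start)
    return path + np.random.normal(scale=jitter, size=path.shape)

fake = synthetic_bot_trajectory(start=(100, 900), end=(600, 300))
print(fake.shape)   # (50, 2) screen coordinates for one fake gesture
```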

    An Algorithmic Framework For Gait Analysis and Gait-Based Biometric Authentication

    Get PDF
    Small or large deviations in someone's gait patterns can be attributed either to their unique muscular-skeletal structure or to possible neurological disorders, such as Parkinson's Disease (PD) or stroke. With the rapid development of wearable technologies, it is now possible to quantitatively measure such deviations. In this thesis we develop an algorithmic framework that identifies deviations caused by neurological disorders, which has applications in gait physical therapy, and deviations caused by unique individual behavior, which has applications in behavioral biometrics. First, to objectively extract gait phases, an infinite Gaussian mixture model is presented to classify the different phases, and a parallel particle filter is designed to estimate and update the model parameters in real time. To objectively classify gait disorders caused by PD and stroke and to facilitate gait physical therapy, an advanced machine learning method, multi-task learning, is used to jointly train classification models of a subject's gait. The proposed method significantly improves performance compared to the baseline solutions and identifies parameters that can be used to distinguish between gait abnormalities and help therapists provide targeted treatment in clinics. Finally, we present a new approach for identifying unique gait patterns and providing gait-based biometric authentication. For sensing, we use wearable shoes or socks capable of recording acceleration and ground contact forces. The proposed approach relies on multimodal learning, with a neural network of bimodal deep auto-encoders, and outperforms existing state-of-the-art solutions.
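    The infinite Gaussian mixture step described above can be approximated in practice with a Dirichlet process mixture; scikit-learn's BayesianGaussianMixture is a truncated variational approximation of that idea. The sketch below clusters placeholder gait samples into phases; the feature layout and truncation level are assumptions, and the thesis's parallel particle filter for real-time parameter updates is not shown.

```python
# Hedged sketch: approximate an infinite Gaussian mixture for gait-phase
# classification with a truncated Dirichlet process mixture.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Placeholder per-sample features (e.g., acceleration, ground contact force).
gait_samples = rng.normal(size=(1000, 6))

dpgmm = BayesianGaussianMixture(
    n_components=8,                                    # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
)
phase_labels = dpgmm.fit_predict(gait_samples)         # inferred gait phases
```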

    Deep Learning-Based Gait Recognition Using Smartphones in the Wild

    Full text link
    Compared to other biometrics, gait is difficult to conceal and has the advantage of being unobtrusive. Inertial sensors, such as accelerometers and gyroscopes, are often used to capture gait dynamics. These inertial sensors are commonly integrated into smartphones and are widely used by the average person, which makes gait data convenient and inexpensive to collect. In this paper, we study gait recognition using smartphones in the wild. In contrast to traditional methods, which often require a person to walk along a specified road and/or at a normal walking speed, the proposed method collects inertial gait data under unconstrained conditions without knowing when, where, and how the user walks. To obtain good person identification and authentication performance, deep-learning techniques are presented to learn and model the gait biometrics from walking data. Specifically, a hybrid deep neural network is proposed for robust gait feature representation, where features in the space and time domains are successively abstracted by a convolutional neural network and a recurrent neural network. In the experiments, two datasets collected by smartphones from a total of 118 subjects are used for evaluation. The experiments show that the proposed method achieves accuracies higher than 93.5% and 93.7% in person identification and authentication, respectively. Comment: IEEE Transactions on Information Forensics and Security, 15(1), 202
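    A hybrid CNN + RNN of the kind described above can be sketched in PyTorch as follows: 1-D convolutions abstract features over the sensor channels and time axis, and an LSTM models the temporal dependencies before a subject classifier. The layer sizes, window length, and class count are illustrative assumptions, not the paper's architecture.

```python
# Hedged PyTorch sketch of a hybrid CNN + RNN gait model: convolutions for
# spatial/short-term structure, an LSTM for temporal dependencies.

import torch
import torch.nn as nn

class HybridGaitNet(nn.Module):
    def __init__(self, in_channels=6, num_subjects=118):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_subjects)

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.cnn(x)                   # (batch, 64, time/2)
        h = h.transpose(1, 2)             # (batch, time/2, 64) for the LSTM
        _, (hn, _) = self.rnn(h)
        return self.fc(hn[-1])            # per-subject logits

logits = HybridGaitNet()(torch.randn(4, 6, 128))   # e.g., 4 gait windows
```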