
    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Authentication of smartphone users is important because smartphones store a great deal of sensitive data and are also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy under different design alternatives in sensor and feature selection, machine learning techniques, context detection, and the use of multiple devices. Our system achieves excellent authentication performance, with 98.1% accuracy, negligible system overhead, and less than 2.4% battery consumption. Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
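
    As a rough illustration of the context-based idea described in this abstract, the sketch below trains one owner-vs-other classifier per detected context from windowed sensor data. The window features, the RBF-kernel SVM, and the context labels are illustrative assumptions, not the paper's actual design.

    # Minimal sketch of context-aware implicit authentication, assuming windowed
    # accelerometer/gyroscope data and one binary classifier per detected context.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def window_features(window):
        """Simple statistical features over one sensor window (n_samples x 3 axes)."""
        return np.concatenate([window.mean(axis=0),
                               window.std(axis=0),
                               np.abs(np.diff(window, axis=0)).mean(axis=0)])

    def build_context_models(windows, contexts, labels):
        """Train one owner-vs-other classifier per context (e.g. 'stationary', 'moving')."""
        models = {}
        for ctx in set(contexts):
            idx = [i for i, c in enumerate(contexts) if c == ctx]
            X = np.array([window_features(windows[i]) for i in idx])
            y = np.array([labels[i] for i in idx])  # 1 = owner, 0 = other user
            models[ctx] = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
        return models

    def authenticate(models, window, context):
        """Return True if the current window is classified as the legitimate owner."""
        return bool(models[context].predict(window_features(window).reshape(1, -1))[0])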

    Implicit Sensor-based Authentication of Smartphone Users with Smartwatch

    Smartphones are now frequently used by end-users as portals to cloud-based services, yet they are easily stolen or co-opted by an attacker. Beyond the initial log-in mechanism, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data, whether in the cloud or on the smartphone. But attackers who have gained access to a logged-in smartphone have no incentive to re-authenticate, so this must be done in an automatic, non-bypassable way. Hence, this paper proposes a novel authentication system, iAuth, for implicit, continuous authentication of the end-user based on his or her behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We design a system that provides accurate authentication using machine learning and sensor data from multiple mobile devices. Our system achieves 92.1% authentication accuracy with negligible system overhead and less than 2% battery consumption. Comment: Published in Hardware and Architectural Support for Security and Privacy (HASP), 201
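
    As a hedged illustration of the multi-device idea, the sketch below concatenates simple frequency-domain features from paired smartphone and smartwatch sensor windows into one feature vector for a classifier. The FFT-based features and the logistic-regression model are assumptions for demonstration, not iAuth's documented feature set or learning method.

    # Illustrative sketch of fusing smartphone and smartwatch sensor windows
    # into a single feature vector for re-authentication.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def freq_features(window, n_bins=5):
        """Magnitudes of the first n_bins FFT coefficients per axis (window: samples x axes)."""
        spectrum = np.abs(np.fft.rfft(window, axis=0))[:n_bins]
        return spectrum.flatten()

    def fuse(phone_window, watch_window):
        """Concatenate features from both devices into one vector."""
        return np.concatenate([freq_features(phone_window), freq_features(watch_window)])

    # Usage sketch: X is built from paired (phone, watch) windows, y marks owner (1) vs others (0).
    # X = np.array([fuse(p, w) for p, w in paired_windows])
    # clf = LogisticRegression(max_iter=1000).fit(X, y)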

    Activity-Based User Authentication Using Smartwatches

    Smartwatches, which contain an accelerometer and gyroscope, have recently been used to implement gait- and gesture-based biometrics; however, prior studies have long-established drawbacks. For example, data for both training and evaluation was captured in single sessions (which is not realistic and can lead to overly optimistic performance results), and when a multi-day scenario was considered, the evaluation was often either done improperly or the results were very poor (i.e., EERs greater than 20%). Moreover, only limited activities were considered (i.e., gait or gestures), and data was captured within a controlled environment, which tends to be far less realistic for real-world applications. This study therefore remedies these past problems by training and evaluating a smartwatch-based biometric system on data from different days, using a large dataset involving 60 participants and considering different activities: normal walking (NW), fast walking (FW), typing on a PC keyboard (TypePC), playing a mobile game (GameM), and texting on a mobile (TypeM). Unlike the prior art, which focused on laboratory-controlled data, a more realistic dataset, captured within an unconstrained environment, is used to evaluate the performance of the proposed system. Two principal experiments were carried out, focusing on constrained and unconstrained environments. The first experiment comprised a comprehensive analysis of the aforementioned activities under two different scenarios (i.e., same day and cross day). Using all the extracted features (i.e., 88 features) and same-day evaluation, the EERs of the acceleration readings were 0.15%, 0.31%, 1.43%, 1.52%, and 1.33% for NW, FW, TypeM, TypePC, and GameM respectively. The EERs increased to 0.93%, 3.90%, 5.69%, 6.02%, and 5.61% when cross-day data was utilized. For comparison, a more selective set of features was used and significantly improved the system performance under the cross-day scenario, with best EERs of 0.29%, 1.31%, 2.66%, 3.83%, and 2.3% for the aforementioned activities respectively. A more realistic methodology was used in the second experiment, using data collected within an unconstrained environment. A lightweight activity detection approach was developed to divide the raw signals into gait (i.e., NW and FW) and stationary activities. Competitive results were reported, with EERs of 0.60%, 0%, and 3.37% for the NW, FW, and stationary activities respectively. The findings suggest that the nature of the captured signals is sufficiently discriminative to be useful in performing transparent and continuous user authentication.
    University of Kuf
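
    The abstract quotes Equal Error Rates (EERs) throughout. As a generic illustration of the metric rather than the thesis's own evaluation code, the sketch below estimates an EER from genuine (owner) and impostor match scores as the point where the false accept and false reject rates cross.

    # Sketch of computing an Equal Error Rate from genuine and impostor scores.
    import numpy as np
    from sklearn.metrics import roc_curve

    def equal_error_rate(genuine_scores, impostor_scores):
        scores = np.concatenate([genuine_scores, impostor_scores])
        labels = np.concatenate([np.ones(len(genuine_scores)), np.zeros(len(impostor_scores))])
        fpr, tpr, _ = roc_curve(labels, scores)   # false accept rate vs true accept rate
        fnr = 1 - tpr                             # false reject rate
        idx = np.nanargmin(np.abs(fpr - fnr))     # threshold where the two error rates cross
        return (fpr[idx] + fnr[idx]) / 2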

    Securing Cyber-Physical Social Interactions on Wrist-worn Devices

    Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this article, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by the users. The key observation is that during handshaking, although they belong to two different users, the two hands involved in the shaking event are often rigidly connected and therefore exhibit very similar motion patterns. We propose a novel key generation system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between the devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it involves no extra bespoke hardware and does not require the users to perform pre-defined gestures. We implement the proposed key generation system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after around 1 s of handshaking (with a success rate >99%), and is resilient to different types of attacks, including impersonation mimicking attacks, impersonation passive attacks, and eavesdropping attacks. Specifically, for real-time impersonation mimicking attacks, the Equal Error Rate (EER) in our experiments is only 1.6% on average. We also show that the proposed key generation system can be extremely lightweight and is able to run in situ on resource-constrained smartwatches without incurring excessive resource consumption.
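
    As a toy illustration of how matching motion patterns can be turned into a shared key, the sketch below quantizes windowed signal energy against its median and hashes the agreed bits down to 128 bits. Real protocols, including the system described above, also require reconciliation of mismatched bits and privacy amplification, both omitted here; the window size is an arbitrary assumption.

    # Toy sketch of deriving key bits from a shared handshake motion signal.
    import hashlib
    import numpy as np

    def motion_to_bits(accel_magnitude, window=20):
        """Quantize windowed energy of a 1-D magnitude signal: 1 if above the median, else 0."""
        n = len(accel_magnitude) // window
        energies = np.array([np.sum(accel_magnitude[i * window:(i + 1) * window] ** 2)
                             for i in range(n)])
        return (energies > np.median(energies)).astype(np.uint8)

    def bits_to_key(bits):
        """Hash the agreed bit string down to a 128-bit symmetric key."""
        return hashlib.sha256(np.packbits(bits).tobytes()).digest()[:16]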

    Improving the Security of Smartwatch Payment with Deep Learning

    Making contactless payments using a smartwatch is increasingly popular, but this payment medium lacks traditional biometric security measures such as facial or fingerprint recognition. In 2022, Sturgess et al. proposed WatchAuth, a system for authenticating smartwatch payments using the physical gesture of reaching towards a payment terminal. While effective, the system requires the user to undergo a burdensome enrolment period to achieve acceptable error levels. In this dissertation, we explore whether applications of deep learning can reduce the number of gestures a user must provide to enrol into an authentication system for smartwatch payment. We first construct a deep-learned authentication system that outperforms the current state of the art, including in a scenario where the target user has provided only a limited number of gestures. We then develop a regularised autoencoder model for generating synthetic user-specific gestures. We show that using these gestures in training improves the classification ability of an authentication system. Through this technique we can reduce the number of gestures required to enrol a user into a WatchAuth-like system without negatively impacting its error rates. Comment: Master's thesis, 74 pages, 32 figures
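
    As a hedged sketch of the kind of regularised autoencoder the abstract mentions, the snippet below defines a small fully connected autoencoder over flattened gesture windows with an L2 penalty on the latent code; synthetic gestures can then be produced by perturbing latent codes of real gestures and decoding. The layer sizes, GESTURE_DIM, and the regularisation term are illustrative assumptions, not the dissertation's actual architecture.

    # Minimal sketch of a regularised autoencoder for synthetic wrist gestures,
    # assuming each gesture is a fixed-length flattened sensor window.
    import torch
    import torch.nn as nn

    GESTURE_DIM = 300  # e.g. 100 samples x 3 accelerometer axes (assumption)

    class GestureAutoencoder(nn.Module):
        def __init__(self, latent_dim=16):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(GESTURE_DIM, 128), nn.ReLU(),
                                         nn.Linear(128, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                         nn.Linear(128, GESTURE_DIM))

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    def loss_fn(x, x_hat, z, reg_weight=1e-3):
        # Reconstruction loss plus a simple L2 regulariser on the latent code.
        return nn.functional.mse_loss(x_hat, x) + reg_weight * z.pow(2).mean()

    # Synthetic gestures: encode a user's real gestures, perturb the latent codes,
    # and decode, e.g. model.decoder(z + 0.1 * torch.randn_like(z)).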