
    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Authentication of smartphone users is important because smartphones store a great deal of sensitive data and are also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection and multiple devices. Our system achieves excellent authentication performance, with 98.1% accuracy, negligible system overhead and less than 2.4% battery consumption. Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
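
    As a rough, hypothetical sketch of the context-based idea described above, one binary owner-vs-other classifier can be trained per detected context, and the model matching the current context then scores each new sensor window. The contexts, feature count and random-forest classifier below are illustrative assumptions; the paper's actual pipeline and models may differ.

```python
# Hedged sketch: per-context owner/other classifiers over sensor-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_FEATURES = 20   # e.g., accelerometer/gyroscope statistics per time window (assumed)
contexts = ["stationary", "moving"]

# Placeholder training data: per context, feature windows labelled owner (1) / other (0).
train = {c: (np.random.randn(500, N_FEATURES), np.random.randint(0, 2, 500))
         for c in contexts}

models = {}
for ctx, (X, y) in train.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    models[ctx] = clf

def authenticate(window_features, detected_context, threshold=0.5):
    """Score a new sensor window with the model matching the detected context."""
    p_owner = models[detected_context].predict_proba(window_features.reshape(1, -1))[0, 1]
    return p_owner >= threshold

print(authenticate(np.random.randn(N_FEATURES), "moving"))
```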

    A Behavioral Model System for Implicit Mobile Authentication

    Smartphones are increasingly essential to users’ everyday lives. Security concerns about data compromise are growing, and explicit authentication methods are proving inconvenient and insufficient. Meanwhile, users demand quicker and more secure authentication. To address this, a user can be authenticated continuously and implicitly by modeling the consistency of their behavior. This research project develops a Behavioral Model System (BMS) that records users’ behavioral metrics on an Android device and sends them to a server to build a behavioral model for the user. Once a strong model is generated with TensorFlow, the user’s most recent behavior is queried against the model to authenticate them. The model is tested across its metrics to evaluate the reliability of BMS.
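
    As a minimal illustration of the pipeline described above, the sketch below trains a small TensorFlow model on behavioral feature vectors and scores the user's most recent behavior against it. The feature count, network shape and training data are placeholder assumptions, not the architecture actually used by BMS.

```python
# Hedged sketch: a small owner/other model over behavioral metrics in TensorFlow.
import numpy as np
import tensorflow as tf

N_FEATURES = 12   # e.g., touch pressure, swipe speed, tap intervals (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(sample belongs to the owner)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: rows of behavioral metrics labelled owner (1) / other (0).
X = np.random.randn(1000, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Authentication query: score the most recent behavior window against the model.
recent = np.random.randn(1, N_FEATURES).astype("float32")
print("owner probability:", float(model.predict(recent, verbose=0)[0, 0]))
```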

    Applying touch gesture to improve application accessing speed on mobile devices.

    The touch gesture shortcut is one of the most significant contributions to Human-Computer Interaction (HCI). It is used in many fields, e.g., performing web browsing tasks (moving to the next page, adding bookmarks, etc.) on a smartphone, manipulating a virtual object on a tabletop device and communicating between two touch screen devices. Compared with the traditional Graphical User Interface (GUI), the touch gesture shortcut is more efficient, more natural, more intuitive and easier to use. With the rapid development of smartphone technology, an increasing number of data items are accumulating on users’ mobile devices, such as contacts, installed apps and photos. As a result, it has become troublesome to find a target item on a mobile device with the traditional GUI. For example, finding a target app may require sliding and browsing through several screens. This thesis addresses this challenge by proposing two alternative methods of using a touch gesture shortcut to find a target item (an app, as an example) on a mobile device. Current touch gesture shortcut methods employ either a universal built-in system-defined shortcut template or a gesture-item set that users define before using the device. In either case, users need to learn or define the gestures first and then recall and draw them to reach the target item according to the template or predefined set. Evidence has shown that, compared with the GUI, the touch gesture shortcut has an advantage when performing several types of tasks, e.g., text editing, picture drawing and audio control, but it is unknown whether it is quicker or more effective than the traditional GUI for finding target apps.
This thesis first conducts an exploratory study to understand user memorisation of their Personalized Gesture Shortcuts (PGS) for 15 frequently used mobile apps. An experiment is then conducted to investigate (1) users’ recall accuracy of the PGS for finding both frequently and infrequently used target apps, and (2) the speed with which users are able to access the target apps relative to the GUI. The results show that the PGS produced a clear speed advantage (1.3s faster on average) over the traditional GUI, while there was an approximately 20% failure rate due to unsuccessful recall of the PGS.
To address the unsuccessful recall problem, this thesis explores a new interactive approach based on the touch gesture shortcut that requires neither recall nor predefinition before use. Named the Intelligent Launcher in this thesis, it predicts and launches any intended target app from an unconstrained gesture drawn by the user. To explore how to achieve this, a third experiment investigates the relationship between the reasons underlying a user’s gesture creation and the gesture shape (handwriting, non-handwriting or abstract) they used as their shortcut. Based on the results, and unlike existing approaches, the thesis proposes that the launcher should predict the user’s intended app from three types of gestures: non-handwriting gestures via the visual similarity between the gesture and the app’s icon; handwriting gestures via the app’s library name plus functionality; and abstract gestures via the app’s usage history. In light of these findings, we designed and developed the Intelligent Launcher based on the assumptions drawn from the empirical data.
This thesis introduces the interaction, the architecture and the technical details of the launcher. It also describes how the data from the third experiment are used to improve the predictions with a machine learning method, namely a Markov Model. An evaluation experiment shows that the Intelligent Launcher achieved user satisfaction with a prediction accuracy of 96%. Since it is still difficult to know which type of gesture a user tends to use, a fourth experiment is conducted to explore the factors that influence the choice of touch gesture shortcut type for accessing a target app. The results show that (1) those who preferred a name-based method used it more consistently and created more letter gestures than those who preferred the other three methods; (2) those who preferred the keyword app search method created more letter gestures than other types; (3) those who preferred an iOS system created more drawing gestures than other types; (4) letter gestures were more often used for frequently used apps, whereas drawing gestures were more often used for infrequently used apps; (5) the participants tended to use the same creation method as their preferred method on different days of the experiment. This thesis contributes to the body of Human-Computer Interaction knowledge. It proposes two alternative methods that are more efficient and flexible for finding a target item among a large number of items. The PGS method has been confirmed as effective and has a clear speed advantage. The Intelligent Launcher has been developed and demonstrates a novel way of predicting a target item from the gesture the user draws. The findings concerning the relationship between the user’s choice of gesture for the shortcut and individual factors have informed the design of a more flexible touch gesture shortcut interface for “target item finding” tasks. Among the different types of data items that could be searched for, the Intelligent Launcher is prototyped for finding target apps, since the variety in an app’s visual appearance and functionality makes it more difficult to predict than other targets, such as a standard phone setting, a contact or a website. However, we believe that the ideas presented in this thesis can be extended to other types of items, such as videos or photos in a Photo Library, places on a map or clothes in an online store. What is more, this study also leads the way in demonstrating the advantage of machine learning methods in touch gesture shortcut interactions.
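
    The sketch below illustrates one plausible reading of the Markov Model component: a first-order transition model over the user's app-launch history that ranks candidate apps, e.g., when the drawn gesture is abstract. The class name, app names and smoothing constant are hypothetical; the thesis's actual model and training data are not reproduced here.

```python
# Hedged sketch: first-order Markov model over app-launch history for ranking apps.
from collections import defaultdict

class LaunchMarkovModel:
    def __init__(self, smoothing=0.1):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.apps = set()
        self.smoothing = smoothing

    def fit(self, launch_history):
        """launch_history: ordered list of app names the user opened."""
        self.apps.update(launch_history)
        for prev, nxt in zip(launch_history, launch_history[1:]):
            self.counts[prev][nxt] += 1.0

    def rank(self, last_app):
        """Return candidate apps sorted by P(next app | last app), with smoothing."""
        row = self.counts[last_app]
        total = sum(row.values()) + self.smoothing * len(self.apps)
        scores = {app: (row.get(app, 0.0) + self.smoothing) / total
                  for app in self.apps}
        return sorted(scores, key=scores.get, reverse=True)

history = ["Mail", "Maps", "Camera", "Mail", "Maps", "Photos", "Mail"]
model = LaunchMarkovModel()
model.fit(history)
print(model.rank("Mail"))   # most likely next apps after "Mail"
```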

    Secure Pick Up: Implicit Authentication When You Start Using the Smartphone

    We propose Secure Pick Up (SPU), a convenient, lightweight, in-device, non-intrusive and automatic-learning system for smartphone user authentication. Operating in the background, our system implicitly observes users' phone pick-up movements, the way they bend their arms when they pick up a smartphone to interact with the device, to authenticate the users. Our SPU outperforms state-of-the-art implicit authentication mechanisms in three main aspects: 1) SPU automatically learns the user's behavioral pattern without requiring a large amount of training data (especially data from other users) as previous methods did, making it more deployable. Towards this end, we propose a weighted multi-dimensional Dynamic Time Warping (DTW) algorithm to effectively quantify similarities between users' pick-up movements; 2) SPU does not rely on a remote server for additional computational power, making it efficient and usable even without network access; and 3) our system can adaptively update a user's authentication model to accommodate behavioral drift over time with negligible overhead. Through extensive experiments on real-world datasets, we demonstrate that SPU can achieve authentication accuracy of up to 96.3% with a very low latency of 2.4 milliseconds. It reduces the number of times a user has to authenticate explicitly by 32.9%, while effectively defending against various attacks. Comment: Published at the ACM Symposium on Access Control Models and Technologies (SACMAT) 201
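
    A minimal sketch of a weighted multi-dimensional DTW distance in the spirit of the algorithm named above is shown below. The per-axis weights, sequence shapes and sensor layout are assumptions for illustration; the paper's exact weighting scheme may differ.

```python
# Hedged sketch: weighted multi-dimensional DTW between two sensor sequences.
# Sequence shape: (time_steps, n_axes); `weights` is a hypothetical per-axis weight vector.
import numpy as np

def weighted_multidim_dtw(seq_a, seq_b, weights):
    n, m = len(seq_a), len(seq_b)
    # Weighted Euclidean cost between every pair of frames.
    diff = seq_a[:, None, :] - seq_b[None, :, :]          # (n, m, axes)
    cost = np.sqrt((weights * diff ** 2).sum(axis=-1))    # (n, m)

    # Classic DTW dynamic programme over the cost matrix.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return acc[n, m]

# Usage: compare an enrolled pick-up template with a fresh pick-up movement.
template = np.random.randn(120, 6)   # 6 axes: 3-axis accelerometer + 3-axis gyroscope
probe    = np.random.randn(110, 6)
w        = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])  # assumed axis weights
print(weighted_multidim_dtw(template, probe, w))
```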

    Implicit Sensor-based Authentication of Smartphone Users with Smartwatch

    Smartphones are now frequently used by end-users as portals to cloud-based services, but smartphones are easily stolen or co-opted by an attacker. Beyond the initial log-in mechanism, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data, whether in the cloud or on the smartphone. But attackers who have gained access to a logged-in smartphone have no incentive to re-authenticate, so this must be done in an automatic, non-bypassable way. Hence, this paper proposes a novel authentication system, iAuth, for implicit, continuous authentication of the end-user based on his or her behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We design a system that provides accurate authentication using machine learning and sensor data from multiple mobile devices. Our system achieves 92.1% authentication accuracy with negligible system overhead and less than 2% battery consumption. Comment: Published in Hardware and Architectural Support for Security and Privacy (HASP), 201
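
    The sketch below illustrates only the multi-device aspect mentioned above: per-window feature vectors from the smartphone and a paired smartwatch are concatenated before classification. The feature sizes and the SVM classifier are assumptions for illustration; iAuth's actual learning method is not reproduced here.

```python
# Hedged sketch: feature-level fusion of smartphone and smartwatch sensor windows.
import numpy as np
from sklearn.svm import SVC

PHONE_FEATURES, WATCH_FEATURES = 16, 16   # assumed per-device feature counts

def fuse(phone_window, watch_window):
    """Simple feature-level fusion: one joint vector per time window."""
    return np.concatenate([phone_window, watch_window])

# Placeholder training set: fused windows labelled owner (1) / other user (0).
X = np.vstack([fuse(np.random.randn(PHONE_FEATURES), np.random.randn(WATCH_FEATURES))
               for _ in range(400)])
y = np.random.randint(0, 2, 400)

clf = SVC(probability=True).fit(X, y)
probe = fuse(np.random.randn(PHONE_FEATURES), np.random.randn(WATCH_FEATURES))
print("owner probability:", clf.predict_proba(probe.reshape(1, -1))[0, 1])
```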

    BehavePassDB: Public Database for Mobile Behavioral Biometrics and Benchmark Evaluation

    Mobile behavioral biometrics have become a popular topic of research, reaching promising results in terms of authentication by exploiting a multimodal combination of touchscreen and background sensor data. However, there is no way of knowing whether state-of-the-art classifiers in the literature can distinguish between the notion of user and device. In this article, we present a new database, BehavePassDB, structured into separate acquisition sessions and tasks to mimic the most common aspects of mobile Human-Computer Interaction (HCI). BehavePassDB is acquired through a dedicated mobile app installed on the subjects' devices, and also includes the case of different users on the same device for evaluation. We propose a standard experimental protocol and benchmark for the research community to perform a fair comparison of novel approaches with the state of the art. We propose and evaluate a system based on a Long Short-Term Memory (LSTM) architecture with triplet loss and modality fusion at the score level. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 860315, and from Orange Labs. R. Tolosana and R. Vera-Rodriguez are also supported by INTER-ACTION (PID2021-126521OB-I00 MICINN/FEDER).
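
    The sketch below shows an LSTM embedding network trained with a triplet loss, in the spirit of the system described above. The sequence length, channel count, embedding size and margin are assumptions, and only a single modality is shown; the benchmark system additionally fuses modalities at the score level.

```python
# Hedged sketch: LSTM encoder trained with a triplet loss on sensor sequences.
import tensorflow as tf

SEQ_LEN, N_CHANNELS, EMB_DIM, MARGIN = 100, 6, 64, 0.5

# Encoder: maps a fixed-length multichannel sequence to an embedding vector.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_CHANNELS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(EMB_DIM),
])

optimizer = tf.keras.optimizers.Adam(1e-3)

def triplet_loss(anchor, positive, negative, margin=MARGIN):
    """Pull same-user sequences together, push different-user sequences apart."""
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))

@tf.function
def train_step(a, p, n):
    with tf.GradientTape() as tape:
        # L2-normalised embeddings keep distances on a comparable scale.
        e_a = tf.math.l2_normalize(encoder(a), axis=1)
        e_p = tf.math.l2_normalize(encoder(p), axis=1)
        e_n = tf.math.l2_normalize(encoder(n), axis=1)
        loss = triplet_loss(e_a, e_p, e_n)
    grads = tape.gradient(loss, encoder.trainable_variables)
    optimizer.apply_gradients(zip(grads, encoder.trainable_variables))
    return loss

# Placeholder triplets: (anchor, positive) from the same user, negative from another.
a = tf.random.normal((32, SEQ_LEN, N_CHANNELS))
p = tf.random.normal((32, SEQ_LEN, N_CHANNELS))
n = tf.random.normal((32, SEQ_LEN, N_CHANNELS))
print("triplet loss:", float(train_step(a, p, n)))
```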