
    Authentication of Smartphone Users Based on Activity Recognition and Mobile Sensing

    Smartphones are context-aware devices that provide a compelling platform for ubiquitous computing and assist users in accomplishing many of their routine tasks anytime and anywhere, such as sending and receiving emails. The nature of tasks conducted with these devices has evolved with the exponential increase in the sensing and computing capabilities of a smartphone. Because of their ease of use and convenience, many users tend to store private data, such as personal identifiers and bank account details, on their smartphone. However, this sensitive data becomes vulnerable if the device is stolen or lost. A traditional approach for protecting this type of data on mobile devices is to authenticate users with mechanisms such as PINs, passwords, and fingerprint recognition. However, these techniques depend on user compliance and are vulnerable to a plethora of attacks, such as smudge attacks. The work in this paper addresses these challenges by proposing a novel authentication framework based on recognizing the behavioral traits of smartphone users through the smartphone's embedded sensors, such as the accelerometer, gyroscope, and magnetometer. The proposed framework also provides a platform for multi-class smart user authentication, which grants different levels of access to a wide range of smartphone users. This work has been validated with a series of experiments that demonstrate the effectiveness of the proposed framework.
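A minimal sketch of the kind of pipeline this abstract describes: statistical features are extracted from windows of accelerometer, gyroscope, and magnetometer readings and fed to a multi-class classifier that maps sensor behavior to a user or access level. The feature set, window size, synthetic stand-in data, and random-forest model below are illustrative assumptions, not the paper's implementation.

```python
# Sketch only: window-level features from acc/gyro/mag streams feeding a
# multi-class classifier (user identity / access level). All choices here
# (features, window size, model) are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(window):
    """window: (n_samples, 9) array of acc/gyro/mag x,y,z readings."""
    feats = [window.mean(axis=0), window.std(axis=0),
             window.min(axis=0), window.max(axis=0)]
    return np.concatenate(feats)          # 36 simple statistical features

# Stand-in data: 3 users, 200 two-second windows each at 50 Hz (100 samples).
rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(u, 1.0, size=(100, 9)))
              for u in range(3) for _ in range(200)])
y = np.repeat(np.arange(3), 200)          # user identity / access class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("multi-class accuracy:", clf.score(X_te, y_te))
```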

    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Authentication of smartphone users is important because smartphones store a lot of sensitive data and are also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection, and multiple devices. Our system achieves excellent authentication performance, with 98.1% accuracy, negligible system overhead, and less than 2.4% battery consumption.
    Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
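A rough sketch of context-based continuous authentication as outlined above: one owner-vs-others classifier per detected context, with re-authentication decided from an average of recent window scores. The contexts, features, SVM model, and 0.5 threshold are illustrative assumptions rather than the published DSN 2017 design.

```python
# Sketch only: per-context owner-vs-others models plus a sliding decision
# window for implicit re-authentication. Data, contexts and threshold are
# assumptions for illustration.
import numpy as np
from collections import deque
from sklearn.svm import SVC

rng = np.random.default_rng(1)
contexts = ["stationary", "walking"]
models = {}
for i, ctx in enumerate(contexts):
    owner = rng.normal(i, 1.0, size=(300, 12))        # 12 sensor features/window
    others = rng.normal(i + 2.0, 1.0, size=(300, 12))
    X = np.vstack([owner, others])
    y = np.array([1] * 300 + [0] * 300)               # 1 = legitimate owner
    models[ctx] = SVC(probability=True).fit(X, y)

recent = deque(maxlen=10)                             # sliding score window
def authenticate(ctx, feature_window):
    """Return True if the session should stay unlocked for this window."""
    score = models[ctx].predict_proba([feature_window])[0, 1]
    recent.append(score)
    return np.mean(recent) >= 0.5                     # assumed decision threshold

print(authenticate("walking", rng.normal(1.0, 1.0, size=12)))
```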

    Completely Automated Public Physical test to tell Computers and Humans Apart: A usability study on mobile devices

    A very common approach adopted to fight the increasing sophistication and danger of malware and hacking is to introduce more complex authentication mechanisms. This approach, however, introduces additional cognitive burdens for users and lowers the acceptability of the whole authentication mechanism to the point of making it unusable. On the contrary, what is really needed to fight the onslaught of automated attacks on users' data and privacy is to first tell humans and computers apart and then distinguish among humans to guarantee correct authentication. Such an approach is capable of completely thwarting any automated attempt to achieve unwarranted access while keeping the mechanism dedicated to recognizing the legitimate user simple. This kind of approach is behind the concept of the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA); yet CAPTCHA leverages cognitive capabilities, so the increasing sophistication of computers calls for ever more difficult cognitive tasks that are either very long to solve or very prone to false negatives. We argue that this problem can be overcome by substituting the cognitive component of CAPTCHA with a different property that programs cannot mimic: the physical nature. In past work we introduced the Completely Automated Public Physical test to tell Computers and Humans Apart (CAPPCHA) as a way to enhance the PIN authentication method for mobile devices, and we provided a proof-of-concept implementation. Similarly to CAPTCHA, this mechanism can also be used to prevent automated programs from abusing online services. However, to evaluate the real efficacy of the proposed scheme, an extended empirical assessment of CAPPCHA is required, as well as a comparison of CAPPCHA's performance with the existing state of the art. To this aim, in this paper we carry out an extensive experimental study on both the performance and the usability of CAPPCHA involving a large number of physical users, and we provide comparisons of CAPPCHA with existing flavors of CAPTCHA.
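The abstract does not detail the physical test itself, so the following sketch only illustrates the general idea of layering a physical challenge on PIN entry: the device asks for a tilt to a randomly chosen angle and verifies the accelerometer-derived pitch while the PIN is typed. The tilt challenge, tolerance, and hold fraction are assumptions for illustration, not CAPPCHA's actual protocol.

```python
# Sketch only: a physical challenge that a replay script cannot satisfy
# without actually moving the device. Angles and tolerances are assumptions.
import math
import random

def pitch_from_accel(ax, ay, az):
    """Device pitch (degrees) from a gravity-dominated accelerometer sample."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def physical_challenge_passed(samples, target_deg, tol_deg=10.0, hold_frac=0.6):
    """samples: list of (ax, ay, az). Pass if the pitch stayed within
    tol_deg of target_deg for at least hold_frac of the PIN-entry window."""
    pitches = [pitch_from_accel(*s) for s in samples]
    held = sum(abs(p - target_deg) <= tol_deg for p in pitches)
    return held / max(len(pitches), 1) >= hold_frac

target = random.choice([30, 45, 60])            # challenge shown to the user
# A program replaying the PIN without moving the device fails the check:
flat_device = [(0.0, 0.0, 9.81)] * 50
print(physical_challenge_passed(flat_device, target))   # False
```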

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing camera of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the state of the art.
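A small sketch of the kind of measurement reported above: running an off-the-shelf face and eye detector over a folder of front-facing-camera photos and counting how often a full face versus only the eyes is found. OpenCV Haar cascades and the frontcam_photos/ folder are assumptions for illustration; the study's detector and dataset differ.

```python
# Sketch only: estimate face/eye visibility rates with OpenCV Haar cascades.
import glob
import cv2

face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

face_hits = eye_only_hits = total = 0
for path in glob.glob("frontcam_photos/*.jpg"):       # hypothetical photo folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    eyes = eye_det.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    total += 1
    if len(faces) > 0:
        face_hits += 1
    elif len(eyes) > 0:
        eye_only_hits += 1

if total:
    print(f"full face detected: {face_hits / total:.1%}, "
          f"eyes only: {eye_only_hits / total:.1%} of {total} photos")
```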