Active User Authentication for Smartphones: A Challenge Data Set and Benchmark Results
In this paper, automated user verification techniques for smartphones are
investigated. A unique non-commercial dataset, the University of Maryland
Active Authentication Dataset 02 (UMDAA-02), for multi-modal user authentication
research is introduced. This paper focuses on three sensors: the front camera, the
touch sensor, and the location service, while providing a general description of
the other modalities. Benchmark results for face detection, face verification,
touch-based user identification and location-based next-place prediction are
presented, which indicate that more robust methods fine-tuned to the mobile
platform are needed to achieve satisfactory verification accuracy. The dataset
will be made available to the research community for promoting additional
research. Comment: 8 pages, 12 figures, 6 tables. Best poster award at BTAS 201
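The verification benchmarks above are commonly summarized by the equal error rate (EER), the operating point at which the false accept rate (FAR) and the false reject rate (FRR) coincide. Below is a minimal sketch of computing EER from genuine and impostor match scores; the scores are synthetic illustrations, not UMDAA-02 results:

```python
# Sketch: computing the equal error rate (EER) used to report
# verification accuracy. The genuine/impostor scores below are
# synthetic illustrations, not UMDAA-02 benchmark outputs.
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return (eer, threshold) at the point where FAR ~= FRR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best = np.inf, (None, None)
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best = abs(far - frr), ((far + frr) / 2, t)
    return best

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 1000)   # toy same-user match scores
impostor = rng.normal(0.4, 0.15, 1000)  # toy different-user match scores
eer, thr = equal_error_rate(genuine, impostor)
print(f"EER ~ {eer:.3f} at threshold {thr:.3f}")
```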
Translating Video Recordings of Mobile App Usages into Replayable Scenarios
Screen recordings of mobile applications are easy to obtain and capture a
wealth of information pertinent to software developers (e.g., bugs or feature
requests), making them a popular mechanism for crowdsourced app feedback. Thus,
these videos are becoming a common artifact that developers must manage. In
light of unique mobile development constraints, including swift release cycles
and rapidly evolving platforms, automated techniques for analyzing all types of
rich software artifacts provide benefit to mobile developers. Unfortunately,
automatically analyzing screen recordings presents serious challenges, due to
their graphical nature, compared to other types of (textual) artifacts. To
address these challenges, this paper introduces V2S, a lightweight, automated
approach for translating video recordings of Android app usages into replayable
scenarios. V2S is based primarily on computer vision techniques and adapts
recent solutions for object detection and image classification to detect and
classify user actions captured in a video, and convert these into a replayable
test scenario. We performed an extensive evaluation of V2S involving 175 videos
depicting 3,534 GUI-based actions collected from users exercising features and
reproducing bugs from over 80 popular Android apps. Our results illustrate that
V2S can accurately replay scenarios from screen recordings, and is capable of
reproducing 89% of our collected videos with minimal overhead. A case
study with three industrial partners illustrates the potential usefulness of
V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software
Engineering (ICSE'20), 13 pages
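The final stage of a pipeline like V2S, turning a sequence of detected and classified user actions into a replayable scenario, can be illustrated with a short sketch. The action representation and the mapping onto adb input commands below are assumptions made for illustration; V2S's actual internal format and replay engine are not reproduced here:

```python
# Sketch: emitting a replayable script of adb commands from a toy
# trace of detected GUI actions. The Action dataclass and the
# tap/swipe mapping are illustrative assumptions, not V2S's own
# representation.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # "tap", "long_tap", or "swipe"
    x1: int
    y1: int
    x2: int = 0         # end point (swipes only)
    y2: int = 0
    duration_ms: int = 0

def to_adb(a: Action) -> str:
    if a.kind == "tap":
        return f"adb shell input tap {a.x1} {a.y1}"
    if a.kind == "long_tap":
        # a long press can be emulated as a zero-length timed swipe
        return (f"adb shell input swipe {a.x1} {a.y1} "
                f"{a.x1} {a.y1} {a.duration_ms}")
    if a.kind == "swipe":
        return (f"adb shell input swipe {a.x1} {a.y1} "
                f"{a.x2} {a.y2} {a.duration_ms}")
    raise ValueError(f"unknown action kind: {a.kind}")

# Toy trace: tap a button, then scroll the screen downward.
trace = [
    Action("tap", 540, 1200),
    Action("swipe", 540, 1600, 540, 400, duration_ms=300),
]
for action in trace:
    print(to_adb(action))
```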
Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication
We investigate whether a classifier can continuously authenticate users based
on the way they interact with the touchscreen of a smart phone. We propose a
set of 30 behavioral touch features that can be extracted from raw touchscreen
logs and demonstrate that different users populate distinct subspaces of this
feature space. In a systematic experiment designed to test how this behavioral
pattern exhibits consistency over time, we collected touch data from users
interacting with a smart phone using basic navigation maneuvers, i.e., up-down
and left-right scrolling. We propose a classification framework that learns the
touch behavior of a user during an enrollment phase and is able to accept or
reject the current user by monitoring interaction with the touch screen. The
classifier achieves a median equal error rate of 0% for intra-session
authentication, 2%-3% for inter-session authentication and below 4% when the
authentication test was carried out one week after the enrollment phase. While
our experimental findings disqualify this method as a standalone authentication
mechanism for long-term authentication, it could be implemented as a means to
extend screen-lock time or as a part of a multi-modal biometric authentication
system. Comment: To appear in IEEE Transactions on Information Forensics & Security;
download data from http://www.mariofrank.net/touchalytics
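The core idea, converting raw touchscreen logs into per-stroke feature vectors and training a classifier on a user's enrollment strokes, can be sketched as follows. Only a handful of representative features are computed, not the paper's 30-feature set, and the k-nearest-neighbors classifier and synthetic strokes are assumptions made for illustration:

```python
# Sketch: stroke-level feature extraction and per-user classification
# in the spirit of Touchalytics. The features shown are a small
# illustrative subset, not the paper's full 30-feature set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def stroke_features(stroke):
    """stroke: (n, 3) array of (t, x, y) touch samples for one swipe."""
    t, x, y = stroke[:, 0], stroke[:, 1], stroke[:, 2]
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    step = np.hypot(dx, dy)
    return np.array([
        t[-1] - t[0],                            # stroke duration
        step.sum(),                              # trajectory length
        np.hypot(x[-1] - x[0], y[-1] - y[0]),    # end-to-end distance
        (step / np.maximum(dt, 1e-6)).mean(),    # mean velocity
        np.arctan2(y[-1] - y[0], x[-1] - x[0]),  # stroke direction
    ])

def make_stroke(rng, drift):
    """Toy synthetic stroke generator standing in for real logs."""
    n = 30
    t = np.cumsum(rng.uniform(5, 15, n))        # millisecond timestamps
    x = np.cumsum(rng.normal(drift, 2, n))      # horizontal trajectory
    y = np.cumsum(rng.normal(10, 2, n))         # downward scroll
    return np.column_stack([t, x, y])

rng = np.random.default_rng(0)
# Enrollment: 50 strokes each from two simulated users.
X = [stroke_features(make_stroke(rng, d)) for d in (0, 5) for _ in range(50)]
y = [0] * 50 + [1] * 50
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
probe = stroke_features(make_stroke(rng, 0))    # fresh stroke from user 0
print("predicted user:", clf.predict([probe])[0])
```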
Keystroke dynamics in the pre-touchscreen era
Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprint, signature, or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches, for example, their substantial infrastructure costs and intrusive nature make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include measurement of the period between key presses and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts.
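The timing variables described here, the interval a key is held down (dwell time) and the gap between releasing one key and pressing the next (flight time), are the classic raw features of keystroke dynamics. Below is a minimal sketch, assuming a simple (timestamp, key, event) log format chosen for illustration rather than taken from any particular system:

```python
# Sketch: extracting dwell times (key-down to key-up) and flight
# times (key-up to next key-down) from a toy key-event log. The
# event format is an illustrative assumption.
events = [
    (0, "p", "down"), (95, "p", "up"),
    (140, "a", "down"), (230, "a", "up"),
    (300, "s", "down"), (410, "s", "up"),
    (455, "s", "down"), (540, "s", "up"),
]  # (timestamp_ms, key, "down" | "up")

def timing_features(events):
    dwell, flight = [], []
    down_at = {}
    last_up = None
    for t, key, ev in events:
        if ev == "down":
            down_at[key] = t
            if last_up is not None:
                flight.append(t - last_up)   # latency since last release
        else:  # "up"
            dwell.append(t - down_at.pop(key))
            last_up = t
    return dwell, flight

dwell, flight = timing_features(events)
print("dwell times (ms): ", dwell)   # per-key hold durations
print("flight times (ms):", flight)  # inter-key latencies
```

Statistical models or neural networks are then trained on the distributions of these intervals to form a typing signature.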