
    Keystroke Inference Using Smartphone Kinematics

    The use of smartphones is becoming ubiquitous in modern society. These very personal devices store large amounts of personal information, and we use them to access everything from our bank accounts to our social networks; we communicate using these devices in both open one-to-many communications and in more closed, private one-to-one communications. In this paper we have created a method to infer what is typed on a device purely from how the device moves in the user’s hand. With very small amounts of training data (less than the size of a tweet) we are able to predict the text typed on a device with accuracies of up to 90%. We found no effect on this accuracy from how fast users type, how comfortable they are using smartphone keyboards, or how the device was held in the hand. It is trivial to create an application that can access the motion data of a phone whilst the user is engaged in other applications; accessing motion data does not require any permission to be granted by the user and hence represents a tangible threat to smartphone users.
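
    The abstract does not detail the inference pipeline; as a minimal, hypothetical sketch of the general approach, the snippet below windows motion-sensor samples around each key press and trains an off-the-shelf classifier to map motion features to keys. The summary statistics and the choice of a random forest are assumptions for illustration, not the authors’ method.

```python
# Illustrative sketch only: classify keystrokes from motion windows.
# The feature summary and RandomForest choice are assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, gyro):
    """Summarize one motion window (N x 3 accelerometer, N x 3 gyroscope samples)."""
    parts = []
    for signal in (accel, gyro):
        parts += [signal.mean(axis=0), signal.std(axis=0),
                  signal.min(axis=0), signal.max(axis=0)]
    return np.concatenate(parts)

def train_key_classifier(windows, labels):
    """windows: list of (accel, gyro) arrays around each key press; labels: key characters."""
    X = np.stack([window_features(a, g) for a, g in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```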

    Reconstructing what you said: Text Inference using Smartphone Motion

    Smartphones and tablets are becoming ubiquitous within our connected lives, and as a result these devices are increasingly being used for more and more sensitive applications, such as banking. The security of the information within these sensitive applications is managed through a variety of different processes, all of which minimise the exposure of this sensitive information to other, potentially malicious applications. This paper documents experiments with the ‘zero-permission’ motion sensors on the device as a side-channel for inferring the text typed into a sensitive application. These sensors are freely accessible without the phone user having to give permission. The research was able to identify, on average, nearly 30 percent of typed bigrams from unseen words, using a very small volume of training data, less than the size of a tweet. Given the natural redundancy in language, this performance is often enough to understand the phrase being typed. We found that large devices were typically more vulnerable, as were users who held the device in one hand whilst typing with their fingers. Of those bigrams which were not correctly identified, over 60 percent of the errors involved the space bar and nearly half were within two keys on the keyboard.
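
    As a hedged illustration of how the language redundancy mentioned above can be exploited, the sketch below ranks dictionary words by how well their bigrams overlap a set of bigrams inferred from motion data. The scoring scheme and word list are placeholders, not the paper’s reconstruction method.

```python
# Illustrative sketch: use language redundancy to recover a word from noisy
# bigram predictions. Scoring scheme and word list are assumptions for illustration.
def word_bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def rank_candidates(predicted_bigrams, wordlist):
    """Rank dictionary words by overlap with the bigrams inferred from motion data."""
    scored = []
    for word in wordlist:
        bigrams = word_bigrams(word)
        if not bigrams:
            continue
        overlap = len(bigrams & predicted_bigrams) / len(bigrams)
        scored.append((overlap, word))
    return [w for _, w in sorted(scored, reverse=True)]

# Even a partial set of recovered bigrams can narrow down the candidate words.
print(rank_candidates({"he", "el"}, ["hello", "world", "help"])[:2])
```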

    Exploiting behavioral biometrics for user security enhancements

    As online business has become very popular in the past decade, the tasks of providing user authentication and verification have become more important than before to protect sensitive user information from malicious hands. The most common approach to user authentication and verification is the use of passwords. However, the dilemma users face with traditional passwords has become more and more evident: users tend to choose easy-to-remember passwords, which are often weak passwords that are easy to crack. Meanwhile, behavioral biometrics have promising potential for meeting both security and usability demands, since they authenticate users by who you are, instead of what you have. In this dissertation, we first develop two such user verification applications based on behavioral biometrics: the first via mouse movements, and the second via tapping behaviors on smartphones; we then focus on modeling user web browsing behaviors by Fitts’ law. Specifically, we develop a user verification system by exploiting the uniqueness of people’s mouse movements. The key feature of our system lies in using much more fine-grained (point-by-point) angle-based metrics of mouse movements for user verification. These new metrics are relatively unique from person to person and independent of the computing platform. We conduct a series of experiments to show that the proposed system can verify a user in an accurate and timely manner, and the induced system overhead is minor. Similar to mouse movements, the tapping behaviors of smartphone users on touchscreens also vary from person to person. We propose a non-intrusive user verification mechanism to substantiate whether an authenticating user is the true owner of the smartphone or an impostor who happens to know the passcode. The effectiveness of the proposed approach is validated through real experiments. To further understand user pointing behaviors, we attempt to stress-test Fitts’ law “in the wild”, namely, under natural web browsing environments, instead of the restricted laboratory settings of previous studies. Our analysis shows that, while the averaged pointing times follow Fitts’ law very well, there are considerable deviations from Fitts’ law. We observe that, in natural browsing, a fast movement has a different error model from the other two movement types. Therefore, a complete profiling of user pointing performance should be done in more detail, for example, by constructing different error models for slow and fast movements. As future work, we plan to exploit multiple-finger tappings for smartphone user verification, and to evaluate user privacy issues in Amazon wish lists.
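
    As a minimal reminder of the model being stress-tested above, the sketch below fits the Shannon formulation of Fitts’ law, MT = a + b·log2(D/W + 1), to a few made-up pointing measurements; the sample numbers are purely illustrative, not data from the dissertation.

```python
# Illustrative sketch of the Fitts' law model used to profile pointing performance:
# MT = a + b * log2(D/W + 1). The sample data below is made up for illustration.
import numpy as np

def index_of_difficulty(distance, width):
    return np.log2(distance / width + 1.0)

def fit_fitts(distances, widths, movement_times):
    """Least-squares fit of intercept a and slope b from observed pointing times."""
    ids = index_of_difficulty(np.asarray(distances, dtype=float),
                              np.asarray(widths, dtype=float))
    A = np.column_stack([np.ones_like(ids), ids])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(movement_times, dtype=float), rcond=None)
    return a, b

a, b = fit_fitts([400, 200, 100], [20, 20, 40], [0.95, 0.75, 0.55])
print(f"a={a:.3f} s, b={b:.3f} s/bit")
```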

    Personalized Longitudinal Assessment of Multiple Sclerosis Using Smartphones

    Personalized longitudinal disease assessment is central to quickly diagnosing, appropriately managing, and optimally adapting the therapeutic strategy for multiple sclerosis (MS). It is also important for identifying idiosyncratic, subject-specific disease profiles. Here, we design a novel longitudinal model to map individual disease trajectories in an automated way using sensor data that may contain missing values. First, we collect digital measurements related to gait, balance, and upper extremity function using sensor-based assessments administered on a smartphone. Next, we treat missing data via imputation. We then discover potential markers of MS by employing a generalized estimating equation. Subsequently, parameters learned from multiple training datasets are ensembled to form a simple, unified longitudinal predictive model to forecast MS over time in previously unseen people with MS. To mitigate potential underestimation for individuals with severe disease scores, the final model incorporates additional subject-specific fine-tuning using data from the first day. The results show that the proposed model is promising for achieving personalized longitudinal MS assessment; they also suggest that features related to gait and balance, as well as upper extremity function, remotely collected from sensor-based assessments, may be useful digital markers for predicting MS over time.
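
    As a rough, hypothetical sketch of the generalized estimating equation step mentioned above, the snippet below fits a GEE relating smartphone-derived features to a longitudinal disease score with repeated measures per subject. The column names, the Gaussian family, and the exchangeable correlation structure are assumptions for illustration, not the authors’ specification.

```python
# Illustrative sketch only: fit a generalized estimating equation (GEE) relating
# smartphone sensor features to a disease score, with repeated measures per subject.
# Column names and the exchangeable correlation choice are placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(df: pd.DataFrame):
    """df has one row per assessment: subject id, gait/balance/upper-extremity
    features, and a longitudinal disease score."""
    model = smf.gee(
        "disease_score ~ gait_speed + balance_sway + pinch_accuracy + days_since_baseline",
        groups="subject_id",
        data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()
```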

    USER AUTHENTICATION ACROSS DEVICES, MODALITIES AND REPRESENTATION: BEHAVIORAL BIOMETRIC METHODS

    Biometrics eliminate the need for a person to remember and reproduce complex secret information or carry additional hardware in order to authenticate oneself. Behavioral biometrics is a branch of biometrics that focuses on using a person’s behavior, or way of doing a task, as a means of authentication. These tasks can be any common, day-to-day tasks like walking, sleeping, talking, typing and so on. As interactions with computers and other smart devices like phones and tablets have become an essential part of modern life, a person’s style of interaction with them can be used as a powerful means of behavioral biometrics. In this dissertation, we present insights from the analysis of our proposed set of context-sensitive or word-specific keystroke features on desktop, tablet and phone. We show that the conventional features are not highly discriminatory on desktops and are only marginally better on hand-held devices for user identification. By using information about the context, our proposed word-specific features offer superior discrimination among users on all devices. Classifiers built using our proposed features perform user identification with high accuracies in the range of 90% to 97%, with average precision and recall values of 0.914 and 0.901 respectively. Analysis of the word-based impact factors reveals that four- or five-character words, words with about 50% vowels, and those ranked higher on frequency lists might give better results for the extraction and use of the proposed features for user identification. We also examine a large umbrella of behavioral biometric data, such as keystroke latencies, gait and swipe data on desktop, phone and tablet, for the assumption of an underlying normal distribution, which is common in many research works. Using suitable nonparametric normality tests (the Lilliefors test and the Shapiro-Wilk test) we show that a majority of the features from all activities and all devices do not follow a normal distribution. In most cases fewer than 25% of the samples that were tested had p values > 0.05. We discuss alternate solutions to address the non-normality in behavioral biometric data. Openly available datasets did not provide the wide range of modalities and activities required for our research. Therefore, we have collected and shared an open-access, large benchmark dataset for behavioral biometrics on IEEE DataPort. We describe the collection and analysis of our Syracuse University and Assured Information Security - Behavioral Biometrics Multi-device and multi-Activity data from Same users (SU-AIS BB-MAS) Dataset, an open-access dataset on IEEE DataPort with data from 117 subjects for typing (both fixed and free text), gait (walking, upstairs and downstairs) and touch on desktop, tablet and phone. The dataset consists of a total of about 3.5 million keystroke events; 57.1 million data points each for accelerometer and gyroscope; and 1.7 million data points for swipes, and is listed as one of the most popular datasets on the portal (per IEEE emails to all members on 05/13/2020 and 07/21/2020). We also show that keystroke dynamics (KD) on a desktop can be used to classify the type of activity, either benign or adversarial, that a text sample originates from. We show the inefficiencies of popular temporal features for this task. With our proposed set of 14 features we achieve high accuracies (93% to 97%) and low Type 1 and Type 2 errors (3% to 8%) in classifying text samples of different sizes.
    We also present exploratory research in (a) authenticating users through musical notes generated by mapping their keystroke latencies to music and (b) authenticating users through the relationship between their keystroke latencies on multiple devices.
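
    As a hedged illustration of the normality screening described above, the sketch below applies the Shapiro-Wilk and Lilliefors tests to each feature column and flags features that look non-normal at the 0.05 threshold quoted in the abstract. The feature names are placeholders, not columns from the BB-MAS dataset.

```python
# Illustrative sketch of the normality screening: apply Shapiro-Wilk and Lilliefors
# tests per feature and report which features appear non-normal. Feature names are
# placeholders; the 0.05 threshold mirrors the abstract.
import numpy as np
from scipy.stats import shapiro
from statsmodels.stats.diagnostic import lilliefors

def normality_report(features, alpha=0.05):
    """features: dict mapping a feature name to a 1-D array of samples."""
    results = {}
    for name, values in features.items():
        _, p_sw = shapiro(values)
        _, p_lf = lilliefors(values, dist="norm")
        results[name] = {"shapiro_p": p_sw, "lilliefors_p": p_lf,
                         "normal": p_sw > alpha and p_lf > alpha}
    return results

# Example with synthetic data: one roughly normal feature, one skewed feature.
rng = np.random.default_rng(0)
print(normality_report({"hold_time": rng.normal(0.1, 0.02, 500),
                        "flight_time": rng.exponential(0.2, 500)}))
```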

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because of the inevitably limited space for input and output caused by the specialized form factors designed to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to the physical devices. This thesis argues that developing novel Human-Computer Interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products. This thesis seeks to address the issue of constrained interaction experience with novel interaction techniques by exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, input techniques with fingers are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) due to fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges for variations in fingers, finger regions, and poses due to the unique input context in which the touching hand approaches a small and fixed touchscreen with a limited range of angles. Then, finger region-aware systems that recognize the flat and side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
    In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification that enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touch can be achieved at a total of 8 key locations with the built-in hand tracking system in a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and novel interaction techniques in various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with thesis contributions, design considerations, and the scope of future research, for future researchers and developers to implement robust finger-based interaction systems on various types of wearable devices.
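
    As a loose illustration of the finger identification idea above, the sketch below classifies which finger produced a touch from simple contact features. The feature set (contact area, touch-ellipse axes, orientation) and the nearest-neighbour classifier are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch only: identify which finger produced a touch from simple
# contact-area features. Feature set and classifier choice are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def touch_features(contact_area, major_axis, minor_axis, orientation):
    return np.array([contact_area, major_axis, minor_axis, orientation])

def train_finger_identifier(touches, finger_labels):
    """touches: list of (area, major, minor, orientation) tuples per touch;
    finger_labels: e.g. 'thumb', 'index', 'middle'."""
    X = np.stack([touch_features(*t) for t in touches])
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X, finger_labels)
    return clf
```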

    Efficient PID Controller based Hexapod Wall Following Robot

    This paper presents a design of wall-following behaviour for a hexapod robot based on a PID controller. A PID controller is proposed here because of its ability to control many kinds of non-linear systems. In this case, we propose a PID controller to improve the speed and stability of the hexapod robot's movement while following a wall. The PID controller is used to control the robot's legs by adjusting the swing angle during forward or backward movement so as to maintain the distance between the robot and the wall. The experimental results were verified by implementing the proposed control method on an actual hexapod robot prototype.
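
    As a minimal sketch of the control idea described above, the snippet below computes a PID correction to the leg swing angle from the error between the measured wall distance and a setpoint. The gains, setpoint, nominal swing angle, and sensor interface are placeholders, not the paper's tuned values.

```python
# Minimal PID sketch: adjust the leg swing angle from the error between the
# measured wall distance and a setpoint. All numeric values are placeholders.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Each control step, the correction is added to the nominal swing angle.
pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=20.0)   # keep ~20 cm from the wall
swing_angle = 30.0 + pid.update(measured=23.0, dt=0.05)
print(swing_angle)
```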

    Data-Driven Evaluation of In-Vehicle Information Systems

    Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens. In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: First, results from qualitative or small-scale empirical studies are often not valued in the decision-making process. Second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools to help them visualize and analyze customer usage data, and computational methods to automatically evaluate IVIS designs. In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data collected over-the-air from customer vehicles and visualizes them at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs with respect to performance- and safety-related metrics. In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature.
    During automated driving, drivers perform more interactions per touchscreen sequence and spend more time looking at the center stack touchscreen. These results emphasize the importance of context-dependent assessment of driver distraction during interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated, data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior. Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
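
    As a hedged sketch of the prediction-and-explanation idea in the last paragraph, the snippet below trains a regressor to predict the visual demand (here taken as total glance time) of a touchscreen interaction from design and context features, then ranks features by permutation importance as a simple global explanation. The feature names, target column, model choice, and explanation method are assumptions for illustration, not the thesis pipeline.

```python
# Hedged sketch: predict visual demand of a touchscreen interaction from design
# and context features, then rank features by permutation importance.
# Column names and model choice are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

def train_visual_demand_model(df: pd.DataFrame):
    """df: one row per interaction sequence with UI and driving-context features
    plus the observed total glance time toward the center stack touchscreen."""
    features = ["n_taps", "element_size_px", "list_depth", "vehicle_speed",
                "automation_level", "road_curvature"]
    X, y = df[features], df["total_glance_time_s"]
    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    # Global explanation: which features drive the predicted visual demand?
    importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    ranked = sorted(zip(features, importance.importances_mean),
                    key=lambda kv: kv[1], reverse=True)
    return model, ranked
```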