
    Towards Baselines for Shoulder Surfing on Mobile Authentication

    Given the nature of mobile devices and unlock procedures, unlock authentication is a prime target for credential leaking via shoulder surfing, a form of observation attack. While the research community has investigated solutions to minimize or prevent the threat of shoulder surfing, our understanding of how the attack performs against current systems is less well studied. In this paper, we describe a large online experiment (n=1173) that works towards establishing a baseline of shoulder surfing vulnerability for current unlock authentication systems. Using controlled video recordings of a victim entering a set of 4- and 6-length PINs and Android unlock patterns on different phones from different angles, we asked participants to act as attackers, trying to determine the authentication input based on the observation. We find that 6-digit PINs are the most elusive attack surface: a single observation leads to just 10.8% successful attacks, rising to 26.5% with multiple observations. As a comparison, 6-length Android patterns suffered a 64.2% attack rate with a single observation and 79.9% with multiple observations. Removing feedback lines for patterns improves security, lowering attack rates to 35.3% and 52.1% for single and multiple observations, respectively. This evidence, as well as other results related to hand position, phone size, and observation angle, suggests best- and worst-case scenarios for shoulder surfing vulnerability, which can both help users improve their security choices and establish baselines for researchers. Comment: Will appear in the Annual Computer Security Applications Conference (ACSAC).

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction: For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* of sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing.
TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
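The HandSense prototype described above classifies grasp patterns from just four capacitive sensors. As a rough illustration of that idea (not the dissertation's implementation), the sketch below trains a nearest-neighbour classifier on four hypothetical proximity readings; the sensor placement, values, and grasp labels are all assumptions.

```python
# Minimal sketch (not the dissertation's code): classifying grasp patterns
# from four capacitive sensor readings, in the spirit of HandSense.
# Sensor channels and grasp labels are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Each row: proximity/contact readings from four strategically placed
# capacitive sensors (left, right, top, back) -- hypothetical values.
X = np.array([
    [0.82, 0.79, 0.05, 0.91],   # two-handed "photo" grasp
    [0.88, 0.75, 0.02, 0.95],
    [0.95, 0.10, 0.60, 0.40],   # one-handed "call" grasp
    [0.90, 0.15, 0.55, 0.35],
])
y = ["photo", "photo", "call", "call"]

clf = KNeighborsClassifier(n_neighbors=1)
print(cross_val_score(clf, X, y, cv=2).mean())  # rough recognition-rate estimate
```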

    Machine learning techniques for implicit interaction using mobile sensors

    Interactions on mobile devices normally happen in an explicit manner, which means that they are initiated by the users. Yet, users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. Whilst the touchscreen captures finger touches, the hand movements that accompany this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or enhance the users' text entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, this thesis looks into enhancing front-of-device interaction through implicit sensing of back-of-device and hand movement. We propose to investigate this through machine learning techniques. We look into how sensor data gathered via implicit sensing can be used to predict a certain aspect of an interaction. For instance, one of the questions this thesis attempts to answer is whether hand movement during a touch targeting task correlates with the touch position. This is a complex relationship to understand, but it can be explained well through machine learning. Using machine learning as a tool, such correlation can be measured, quantified, understood, and used to make predictions of future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that data from implicit sensing during general mobile interactions is user-specific and can be used to identify users implicitly. In Chapter 6, we also show that touch interaction errors can be detected from sensor data. In our experiment, we show that there are sufficient distinguishable patterns between signals from normal interactions and signals that are strongly correlated with interaction errors. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
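As a hedged illustration of the kind of pipeline described for Chapter 7 (identifying users from implicitly sensed data with SVM classifiers), the sketch below trains an SVM on synthetic motion features; the feature set and data are assumptions, not the thesis's.

```python
# Minimal sketch (assumptions, not the thesis code): using an SVM to tell
# users apart from implicitly sensed motion features captured while they
# perform ordinary touch interactions. Feature names are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_features(user_offset, n=50):
    # e.g. mean/std of accelerometer and gyroscope magnitude around a touch
    return rng.normal(loc=user_offset, scale=0.3, size=(n, 4))

X = np.vstack([fake_features(0.0), fake_features(1.0)])
y = np.array([0] * 50 + [1] * 50)            # user IDs

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(fake_features(1.0, n=3)))  # expected: mostly user 1
```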

    EdgeGlass: Exploring Tapping Performance on Smart Glasses while Sitting and Walking

    Currently, smart glasses support touch sensing only on a front-mounted touchpad; touches on the top, front, and bottom sides of a glasses-mounted touchpad have not yet been explored. We built a customized touch sensor (length: 5-6 cm, height: 1 cm, width: 0.5 cm) that senses on its top, front, and bottom surfaces. To do so, we used capacitive touch sensing (MPR121 chips) with an electrode size of ~4.5 mm square, which is typical of modern touchscreens. The resulting hardware system consists of a total of 48 separate touch sensors. We investigated interaction with it in both sitting and walking conditions, using single-finger sequential tapping and pair-finger simultaneous tapping. We divided each side into three equal target areas, yielding a total of 36 combinations. Our quantitative results showed that pair-finger simultaneous taps were faster and less error-prone than single-finger sequential taps in the walking condition, whereas single-finger sequential taps were slower but less error-prone than pair-finger simultaneous taps in the sitting condition. Single-finger sequential taps were also slower but much less error-prone while sitting than while walking. Interestingly, pair-finger simultaneous taps performed similarly in terms of both error rate and completion time in the sitting and walking conditions. The mental, physical, performance, and effort workload dimensions were not affected by tapping type or body pose. For temporal demand, mean temporal (time pressure) workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, but body pose did not affect temporal workload for either tapping type. For frustration, mean frustration workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, and walking produced higher mean frustration workload than sitting. The subjective measure of overall workload during the performance study showed no significant difference for either independent variable: body pose (sitting vs. walking) and tapping type (single-finger sequential vs. pair-finger simultaneous).
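For readers unfamiliar with the MPR121 parts mentioned above: each chip exposes 12 capacitive electrodes over I2C, so several chips can be combined to reach the 48 sensors described. The sketch below polls touch states assuming the Adafruit CircuitPython MPR121 driver; the chip addresses and the top/front/bottom mapping are illustrative assumptions, not the authors' hardware.

```python
# Minimal sketch, assuming the Adafruit CircuitPython MPR121 driver; the
# custom 48-electrode hardware from the thesis is not public, so addresses
# and electrode assignments here are illustrative assumptions.
import time
import board
import busio
import adafruit_mpr121

i2c = busio.I2C(board.SCL, board.SDA)

# MPR121 supports four I2C addresses (0x5A-0x5D), 12 electrodes each,
# so several chips can cover top, front, and bottom sensing strips.
chips = [adafruit_mpr121.MPR121(i2c, address=a) for a in (0x5A, 0x5B)]

while True:
    touched = [
        (chip_idx, electrode)
        for chip_idx, chip in enumerate(chips)
        for electrode in range(12)
        if chip[electrode].value          # True while the pad is touched
    ]
    if touched:
        print("touched electrodes:", touched)
    time.sleep(0.05)
```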

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets is rising steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities on mobile devices as well as gaining insights on the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts. We explore three particular areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction and motion gestures. For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override capability for the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown by decreased task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks. We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated in future mobile devices to expand their input capabilities. In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, have good recognition rates, and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies that are needed in order to expand the input capabilities of mobile devices. This allows the implementation of future mobile user interfaces with increased input capabilities, more expressiveness, and improved usability.
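The motion gesture recognizers named above ($3 Gesture Recognizer, Protractor 3D) are template matchers over resampled motion traces. The following is a deliberately simplified sketch in that spirit, not a reimplementation of either published algorithm.

```python
# Illustrative sketch only: a stripped-down template matcher in the spirit of
# the $3 / Protractor 3D family of motion gesture recognizers, not the
# published algorithms themselves.
import numpy as np

def resample(trace, n=32):
    """Resample a (k, 3) accelerometer trace to n points along its index."""
    trace = np.asarray(trace, dtype=float)
    idx = np.linspace(0, len(trace) - 1, n)
    return np.stack([np.interp(idx, np.arange(len(trace)), trace[:, d])
                     for d in range(3)], axis=1)

def normalize(trace):
    """Remove offset and scale so traces are comparable."""
    trace = trace - trace.mean(axis=0)
    norm = np.linalg.norm(trace)
    return trace / norm if norm else trace

def recognize(candidate, templates):
    """Return the label of the nearest template by Euclidean distance."""
    c = normalize(resample(candidate))
    scores = {label: np.linalg.norm(c - normalize(resample(t)))
              for label, t in templates.items()}
    return min(scores, key=scores.get)

# Usage: templates = {"shake": shake_trace, "flip": flip_trace}
#        recognize(new_trace, templates)
```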

    Invisible Shield: Gesture-Based Mobile Authentication

    Intelligent mobile devices have become the focus of the electronics industry in recent years. These devices, e.g., smartphones and internet-connected handheld devices, enable users to access both business and personal data quickly and efficiently, but also allow the same data to be easily accessed by an intruder if the device is lost or stolen. Existing mobile security solutions attempt to solve this problem by forcing a user to authenticate to a device before being granted access to any data. However, such checks are often easily bypassed or hacked due to their simplistic nature. In this work, we demonstrate Invisible Shield, a gesture-based authentication scheme for mobile devices that is far more resilient to attack than existing security solutions and requires neither additional nor visible effort from the user's perspective. We design methods that efficiently record and preprocess gesture data. Two classification problems, "one vs. many" and "one vs. all," are then mathematically formulated and examined using gesture data collected from 20 individuals. Classification algorithms specialized for each case are developed, achieving a classification accuracy as high as 90.7% in the former case and an equal error rate as low as 7.7% in the latter on real Android systems. Finally, the system resource requirements of the different classification algorithms are compared.
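For context on the 7.7% figure above: an equal error rate (EER) is the operating point where false accepts and false rejects are equally likely. A minimal sketch of how an EER is typically computed from classifier scores follows; the score distributions here are synthetic, not the paper's data.

```python
# Hedged sketch: computing an equal error rate (EER) from authentication
# scores. The genuine/impostor score arrays below are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
genuine  = rng.normal(2.0, 1.0, 500)   # scores for the legitimate user
impostor = rng.normal(0.0, 1.0, 500)   # scores for other users / attackers

y_true  = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
y_score = np.concatenate([genuine, impostor])

fpr, tpr, _ = roc_curve(y_true, y_score)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fpr - fnr))]   # point where FPR ~= FNR
print(f"EER ~= {eer:.1%}")
```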

    Behaviour-aware mobile touch interfaces

    Mobile touch devices have become ubiquitous everyday tools for communication, information, as well as capturing, storing and accessing personal data. They are often seen as personal devices, linked to individual users, who access the digital part of their daily lives via hand-held touchscreens. This personal use and the importance of the touch interface motivate the main assertion of this thesis: Mobile touch interaction can be improved by enabling user interfaces to assess and take into account how the user performs these interactions. This thesis introduces the new term "behaviour-aware" to characterise such interfaces. These behaviour-aware interfaces aim to improve interaction by utilising behaviour data: Since users perform touch interactions for their main tasks anyway, inferring extra information from said touches may, for example, save users' time and reduce distraction, compared to explicitly asking them for this information (e.g. user identity, hand posture, further context). Behaviour-aware user interfaces may utilise this information in different ways, in particular to adapt to users and contexts. Important questions for this research thus concern understanding behaviour details and influences, modelling said behaviour, and inference and (re)action integrated into the user interface. In several studies covering both analyses of basic touch behaviour and a set of specific prototype applications, this thesis addresses these questions and explores three application areas and goals: 1) Enhancing input capabilities – by modelling users' individual touch targeting behaviour to correct future touches and increase touch accuracy. The research reveals challenges and opportunities of behaviour variability arising from factors including target location, size and shape, hand and finger, stylus use, mobility, and device size. The work further informs modelling and inference based on targeting data, and presents approaches for simulating touch targeting behaviour and detecting behaviour changes. 2) Facilitating privacy and security – by observing touch targeting and typing behaviour patterns to implicitly verify user identity or distinguish multiple users during use. The research shows and addresses mobile-specific challenges, in particular changing hand postures. It also reveals that touch targeting characteristics provide useful biometric value both in the lab as well as in everyday typing. Influences of common evaluation assumptions are assessed and discussed as well. 3) Increasing expressiveness – by enabling interfaces to pass on behaviour variability from input to output space, studied with a keyboard that dynamically alters the font based on current typing behaviour. Results show that with these fonts users can distinguish basic contexts as well as individuals. They also explicitly control font influences for personal communication with creative effects. This thesis further contributes concepts and implemented tools for collecting touch behaviour data, analysing and modelling touch behaviour, and creating behaviour-aware and adaptive mobile touch interfaces. Together, these contributions support researchers and developers in investigating and building such user interfaces. Overall, this research shows how variability in mobile touch behaviour can be addressed and exploited for the benefit of the users. 
The thesis further discusses opportunities for transfer and reuse of touch behaviour models and information across applications and devices, for example to address tradeoffs of privacy/security and usability. Finally, the work concludes by reflecting on the general role of behaviour-aware user interfaces, proposing to view them as a way of embedding expectations about user input into interactive artefacts.
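One concrete instance of the first application area above is touch-targeting correction: learning how a user's recorded touches deviate from their intended targets and compensating for that deviation. The sketch below uses a plain linear model with synthetic data as a stand-in for the richer behaviour models the thesis explores.

```python
# Minimal sketch of touch-offset correction: learn a per-user mapping from
# recorded touch positions to intended target positions and use it to correct
# future touches. Synthetic data; a simple linear model stands in for the
# thesis's actual modelling approach.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
targets = rng.uniform(0, 1, size=(200, 2))           # intended (x, y), normalized
# Simulated systematic offset: this user tends to touch slightly right and low.
touches = targets + np.array([0.02, -0.03]) + rng.normal(0, 0.01, (200, 2))

model = LinearRegression().fit(touches, targets)      # touch -> intended target

new_touch = np.array([[0.52, 0.47]])
print("corrected touch:", model.predict(new_touch))   # compensates the learned offset
```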

    Integrated Framework Design for Intelligent Human Machine Interaction

    Human-computer interaction, sometimes referred to as Man-Machine Interaction, is a concept that emerged simultaneously with computers, or more generally machines. The methods by which humans interact with computers have come a long way, and new designs and technologies appear every day. However, computer systems and complex machines are often successful only in a technical sense; users frequently find them confusing to use, and as a result such systems are rarely used efficiently. Building sophisticated machines and robots is therefore not the only challenge; more effort should be put into making these machines simple to use for all kinds of users and generic enough to accommodate different types of environments. This motivates the design of intelligent human-computer interaction modules. In this work, we aim to implement a generic framework (referred to as the CIMF framework) that allows the user to control the synchronized, coordinated, cooperative work performed by a set of robots. Three robots are involved so far: two manipulators and one mobile robot. The framework should be generic enough to be hardware independent and to allow the easy integration of new entities and modules. We also aim to implement the building blocks for an intelligent manufacturing cell that communicates with the framework via intelligent and advanced human-computer interaction techniques. Three techniques are addressed: interface-, audio-, and visual-based interaction.

    Enhancing Usability and Security through Alternative Authentication Methods

    With the expanding popularity of various Internet services, online users have become more vulnerable to malicious attacks as more of their private information is accessible on the Internet. The primary defense protecting private information is user authentication, which currently relies on less-than-ideal methods such as text passwords and PINs. Alternative methods such as graphical passwords and behavioral biometrics have been proposed, but with too many limitations to replace current methods. However, with enhancements to overcome these limitations and to harden existing methods, alternative authentication may become viable for future use. This dissertation aims to enhance the viability of alternative authentication systems. In particular, our research focuses on graphical passwords, biometrics that depend, directly or indirectly, on anthropometric data, and user authentication enhancements using touch screen features on mobile devices. In the study of graphical passwords, we develop a new cued-recall graphical password system called GridMap by exploring (1) the use of grids with variable input entered through the keyboard, and (2) the use of maps as background images. As a result, GridMap achieves a large key space and resistance to shoulder surfing attacks. To validate the efficacy of GridMap in practice, we conduct a user study with 50 participants. Our experimental results show that GridMap works well in domains in which a user logs in on a regular basis, and provides a memorability benefit if the chosen map has personal significance to the user. In the study of anthropometric biometrics through the use of mouse dynamics, we present a method for choosing metrics based on empirical evidence of natural differences between the genders. In particular, we develop a novel gender classification model and evaluate the model's accuracy based on data collected from a group of 94 users. Temporal, spatial, and accuracy metrics are recorded from kinematic and spatial analyses of 256 mouse movements performed by each user. The effectiveness of our model is validated through the use of binary logistic regressions. Finally, we propose enhanced authentication schemes through redesigned input, along with the use of anthropometric biometrics on mobile devices. We design a novel scheme called Triple Touch PIN (TTP) that improves traditional PIN-based authentication with a greatly enlarged keyspace. We evaluate TTP with a group of 25 participants. Our evaluation results show that TTP is robust against dictionary attacks and achieves usability at acceptable levels for users. We also assess anthropometric biometrics by attempting to differentiate user fingers through the readings of the sensors in the touch screen. We validate the viability of this biometric approach with 33 users, and observe that it is feasible for distinguishing the fingers with the largest anthropometric differences, the thumb and pinkie fingers.
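The gender classification model described above is built on binary logistic regression over temporal, spatial, and accuracy metrics from mouse movements. The sketch below mirrors that modelling approach with hypothetical features and synthetic data; it is not the dissertation's dataset or feature set.

```python
# Illustrative sketch only: a binary logistic regression over mouse-dynamics
# features, mirroring the modelling approach described above. Feature names
# and data are assumptions, not the dissertation's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 94  # number of users, as in the study above

# Hypothetical per-user aggregates: movement time (s), path length (px),
# peak velocity (px/s), click accuracy (offset in px).
X = np.column_stack([
    rng.normal(0.9, 0.2, n),
    rng.normal(450, 60, n),
    rng.normal(1800, 300, n),
    rng.normal(6, 2, n),
])
y = rng.integers(0, 2, n)     # binary class label (e.g. reported gender)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```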
    • 

    corecore