
    GazeTouchPIN: Protecting Sensitive Data on Mobile Devices Using Secure Multimodal Authentication

    Although mobile devices provide access to a plethora of sensitive data, most users still only protect them with PINs or patterns, which are vulnerable to side-channel attacks (e.g., shoulder surfing). However, prior research has shown that privacy-aware users are willing to take further steps to protect their private data. We propose GazeTouchPIN, a novel secure authentication scheme for mobile devices that combines gaze and touch input. Our multimodal approach complicates shoulder-surfing attacks by requiring attackers to observe the screen as well as the user’s eyes to find the password. We evaluate the security and usability of GazeTouchPIN in two user studies (N=30). We found that while GazeTouchPIN requires longer entry times, privacy-aware users would use it on demand when feeling observed or when accessing sensitive data. The results show that the success rate of shoulder-surfing attacks drops from 68% to 10.4% when using GazeTouchPIN.
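The abstract does not detail the entry mechanism, so as a rough illustration only: assuming a hypothetical layout where each touch selects a pair of digits and a left/right gaze picks one digit of that pair, the multimodal combination could be sketched as:

```python
# Hypothetical digit layout: each touch target covers a pair of digits.
PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def decode_digit(touched_pair: int, gaze: str) -> int:
    """Combine one touch event (which pair was tapped) with one gaze
    event ('left' or 'right') into a single PIN digit."""
    left, right = PAIRS[touched_pair]
    return left if gaze == "left" else right

def decode_pin(events) -> str:
    """events: iterable of (pair_index, gaze) tuples, one per digit."""
    return "".join(str(decode_digit(p, g)) for p, g in events)
```

Under this sketch, an observer who sees only the touch events learns which pair was tapped but not the digit, which is the sense in which the scheme splits the attacker's attention between screen and eyes.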

    They Are All After You: Investigating the Viability of a Threat Model That Involves Multiple Shoulder Surfers

    Many recently proposed authentication schemes for mobile devices complicate shoulder surfing by splitting the attacker's attention across two or more entities. For example, multimodal authentication schemes such as GazeTouchPIN and GazeTouchPass require attackers to observe the user's gaze input and the touch input performed on the phone's screen. These schemes have always been evaluated against single observers, yet multiple observers could potentially attack them with greater ease, since each observer can focus exclusively on one part of the password. In this work, we study the effectiveness of a novel threat model against authentication schemes that split the attacker's attention. As a case study, we report on a security evaluation of two state-of-the-art authentication schemes against a team of two observers. Our results show that although multiple observers perform better against these schemes than single observers, multimodal schemes are significantly more secure against multiple observers than schemes that employ a single modality. We discuss how this threat model impacts the design of authentication schemes.

    A Shoulder-Surfing Resistant Scheme Embedded in Traditional Passwords

    Typing passwords is vulnerable to shoulder-surfing attacks. In this study, we propose a shoulder-surfing resistant scheme embedded in traditional textual passwords. With the proposed scheme, when the password field is in focus, a pattern appears in it as a hint that tells the user how to enter the password. Following the hint, the user skips some characters while typing the password. The characters to be skipped are randomly selected, so an observer cannot see the whole password even if the authentication procedure is recorded. We evaluated the proposed scheme in a usability study. Compared to traditional passwords, our scheme achieved a similar level of accuracy while requiring only marginal additional time to authenticate users. Participants also expressed significantly higher acceptance of the new technique for security-sensitive applications and gave it significantly higher ratings in perceived security, shoulder-surfing resistance, camera-recording resistance, and guess-attack resistance.
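As a minimal sketch of the described mechanism (function names and hint representation are assumptions, not the paper's), the verifier can pick random positions to skip and check the shortened entry against the stored password:

```python
import secrets

def make_skip_hint(pw_len: int, n_skip: int) -> set[int]:
    """Randomly pick positions of the stored password that the user
    must NOT type in this round (hypothetical hint generator)."""
    return set(secrets.SystemRandom().sample(range(pw_len), n_skip))

def expected_entry(password: str, skipped: set[int]) -> str:
    """What the user should type: the password with the hinted
    positions omitted."""
    return "".join(c for i, c in enumerate(password) if i not in skipped)

def verify(typed: str, password: str, skipped: set[int]) -> bool:
    """Accept only the correctly shortened entry for this round."""
    return typed == expected_entry(password, skipped)
```

Because a fresh set of skipped positions is drawn per login, a recording of one session reveals only a subset of the password's characters, which matches the resistance property claimed in the abstract.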

    GazeLockPatterns: Comparing Authentication Using Gaze and Touch for Entering Lock Patterns

    In this work, we present a comparison between Android’s lock patterns for mobile devices (TouchLockPatterns) and an implementation of lock patterns that uses gaze input (GazeLockPatterns). We report on the results of a between-subjects study (N=40), showing that for the same layout of the authentication interface, people employ comparable strategies for pattern composition. We discuss the pros and cons of adapting lock patterns to gaze-based user interfaces. We conclude with opportunities for future work, such as using data collected during authentication to calibrate eye trackers.

    The Role of Eye Gaze in Security and Privacy Applications: Survey and Future HCI Research Directions

    For the past 20 years, researchers have investigated the use of eye tracking in security applications. We present a holistic view of gaze-based security applications. In particular, we canvass the literature and classify the utility of gaze in security applications into a) authentication, b) privacy protection, and c) gaze monitoring during security-critical tasks. This allows us to chart several research directions, most importantly 1) conducting field studies of implicit and explicit gaze-based authentication, enabled by recent advances in eye tracking, 2) research on gaze-based privacy protection and gaze monitoring during security-critical tasks, which are under-investigated yet very promising areas, and 3) understanding the privacy implications of pervasive eye tracking. We discuss the most promising opportunities and most pressing challenges of eye tracking for security that will shape research in gaze-based security applications for the next decade.

    CueAuth: Comparing Touch, Mid-Air Gestures, and Gaze for Cue-based Authentication on Situated Displays

    Secure authentication on situated displays (e.g., to access sensitive information or to make purchases) is becoming increasingly important. A promising approach to resist shoulder-surfing attacks is to employ cues that users respond to while authenticating; this overwhelms observers by requiring them to observe both the cue itself as well as the users’ response to the cue. Although previous work proposed a variety of modalities, such as gaze and mid-air gestures, to further improve security, an understanding of how they compare with regard to usability and security is still missing as of today. In this paper, we rigorously compare modalities for cue-based authentication on situated displays. In particular, we provide the first comparison between touch, mid-air gestures, and calibration-free gaze using a state-of-the-art authentication concept. In two in-depth user studies (N=37) we found that the choice of touch or gaze presents a clear trade-off between usability and security. For example, while gaze input is more secure, it is also more demanding and requires longer authentication times. Mid-air gestures are slightly slower and more secure than touch, but users hesitate to use them in public. We conclude with three significant design implications for authentication using touch, mid-air gestures, and gaze and discuss how the choice of modality creates opportunities and challenges for improved authentication in public.

    Designing Usable and Secure Authentication Mechanisms for Public Spaces

    Usable and secure authentication is a research field that approaches different challenges related to authentication, including security, from a human-computer interaction perspective. That is, work in this field tries to overcome the security, memorability, and performance problems that arise when interacting with an authentication mechanism. More and more services that require authentication, such as ticket vending machines or automated teller machines (ATMs), are located in public settings, where security threats are more inherent than elsewhere. In this work, we approach the problem of usable and secure authentication for public spaces. The key result of the work reported here is a set of well-founded criteria for the systematic evaluation of authentication mechanisms. These criteria are justified by two types of investigation: on the one hand, prototypical examples of authentication mechanisms with improved usability and security, and on the other hand, empirical studies of security-related behavior in public spaces. The work can thus be structured in three steps: First, we present five authentication mechanisms designed to overcome the main weaknesses of related work, which we identified using a newly created categorization of authentication mechanisms for public spaces. The systems were evaluated in detail and showed encouraging results for future use. This, together with the drawbacks and problems we encountered with these systems, gave us diverse insights into the design and evaluation of such systems in general. It showed that the development process of authentication mechanisms for public spaces needs to be improved to produce better results, and it shed light on why related work is difficult to compare. 
With this in mind, we identified initial criteria that can fill these gaps and improve the design and evaluation of authentication mechanisms, with a focus on public settings. Furthermore, we performed a series of studies to identify factors influencing the quality of authentication mechanisms and to define a catalog of criteria that can support the creation of such systems. This includes a long-term study of different PIN-entry systems as well as two field studies and field interviews on real-world ATM use. With this, we refined the previous criteria and defined additional ones, many of them related to human factors. For instance, we showed that social issues, such as trust, can strongly affect the security of an authentication mechanism. We used these results to define a catalog of seven criteria. Besides their definitions, we describe how applying them influences the design, implementation, and evaluation stages of the development process and, more specifically, how adherence improves authentication in general. A comparison of two authentication mechanisms for public spaces shows that a system that fulfills the criteria outperforms a system with less compliance. We could also show that compliance not only improves the authentication mechanisms themselves but also allows for detailed comparisons between different systems.

    Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm

    Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., our hands are assumed to be engaged in a task, and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a worker in a factory with greasy hands or wearing thick gloves, a person driving a car, and so on all represent scenarios of situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or not possible. Unfortunately, individuals with physical impairments and disabilities, by birth or due to an injury, are forced to deal with these limitations every single day. Generally, these individuals experience difficulty or are completely unable to perform basic operations on a computer. Therefore, to address situational and physical impairments and disabilities it is crucial to develop hands-free, accessible interactions. In this research, we try to address the limitations, inabilities, and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on the three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which the gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze and foot-based interaction framework to achieve accurate "point-and-click" interactions and to perform dwell-free text entry on computers. In addition, we have developed a gaze gesture-based framework for user authentication and for interacting with a wide range of computer applications using a common repository of gaze gestures. 
The interaction methods and devices we have developed are a) evaluated using standard HCI procedures like Fitts' law, text-entry metrics, authentication accuracy, and video-analysis attacks, b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods, and c) qualitatively analyzed through user interviews. From the evaluations, we found that our solutions achieve higher efficiency than existing systems and also address their usability issues. To discuss each of these solutions: first, the gaze and foot-based system we developed supports point-and-click interactions and addresses the "Midas Touch" issue. The system performs at least as well (in time and precision) as the mouse, while enabling hands-free interactions. We have also investigated the feasibility, advantages, and challenges of using gaze and foot-based point-and-click interactions on standard (up to 24") and large displays (up to 84") through Fitts' law evaluations. Additionally, we have compared the performance of gaze input to other standard inputs like the mouse and touch. Second, to support text entry, we developed a gaze and foot-based dwell-free typing system, and investigated foot-based activation methods like foot press and foot gestures. We have demonstrated that our dwell-free typing methods are efficient and highly preferred over conventional dwell-based gaze typing methods. Using our gaze typing system, users type up to 14.98 words per minute (WPM), as opposed to 11.65 WPM with dwell-based typing. Importantly, our system addresses the critical usability issues associated with gaze typing in general. Third, we addressed the lack of an accessible and shoulder-surfing resistant authentication method by developing a gaze gesture recognition framework and presenting two authentication strategies that use gaze gestures. 
Our authentication methods use static and dynamic transitions of the objects on the screen, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single-video iterative attacks, and has a lower success rate under dual-video iterative attacks. Lastly, we demonstrated how our gaze gesture recognition framework can be extended to allow users to design gaze gestures of their choice and associate them with appropriate commands like minimize, maximize, scroll, etc., on the computer. We presented a template-matching algorithm which achieved an accuracy of 93%, and a geometric feature-based decision tree algorithm which achieved an accuracy of 90.2% in recognizing the gaze gestures. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.
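As a generic illustration of template matching over gaze paths (a sketch in the spirit of the $1 recognizer, not the dissertation's actual algorithm): resample each path to a fixed number of points, normalize it, and pick the closest stored template.

```python
import math

def _resample(path, n=16):
    """Resample a gaze path [(x, y), ...] to n evenly spaced points."""
    pts = list(path)
    step = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts))) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against float round-off
        out.append(pts[-1])
    return out[:n]

def _normalize(path):
    """Translate to the centroid and scale to a unit bounding box."""
    xs, ys = zip(*path)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in path]

def classify(path, templates):
    """templates: {name: [(x, y), ...]}; return the best-matching name."""
    p = _normalize(_resample(path))
    def score(t):
        q = _normalize(_resample(t))
        return sum(math.dist(a, b) for a, b in zip(p, q)) / len(p)
    return min(templates, key=lambda name: score(templates[name]))
```

Because normalization removes position and size, only the shape of the gaze path matters, which is what lets a small template repository cover gestures performed anywhere on the screen.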

    RepliCueAuth: Validating the Use of a Lab-Based Virtual Reality Setup for Evaluating Authentication Systems

    Evaluating novel authentication systems is often costly and time-consuming. In this work, we assess the suitability of using Virtual Reality (VR) to evaluate the usability and security of real-world authentication systems. To this end, we conducted a replication study and built a virtual replica of CueAuth [52], a recently introduced authentication scheme, and report on results from: (1) a lab-based in-VR usability study (N=20) evaluating user performance; (2) an online security study (N=22) evaluating the system’s observation resistance through virtual avatars; and (3) a comparison between our results and those previously reported in the real-world evaluation. Our analysis indicates that VR can serve as a suitable test-bed for human-centred evaluations of real-world authentication schemes, but the VR technology used can have an impact on the evaluation. Our work is a first step towards augmenting the design and evaluation spectrum of authentication systems and offers groundwork for more research to follow.