62 research outputs found

    GTmoPass: Two-factor Authentication on Public Displays Using Gaze-touch Passwords and Personal Mobile Devices

    As public displays continue to deliver increasingly private and personalized content, there is a need to ensure that only legitimate users can access private information in sensitive contexts. While public displays can adopt authentication concepts similar to those used on public terminals (e.g., ATMs), authentication in public is subject to a number of risks: adversaries can uncover a user's password through (1) shoulder surfing, (2) thermal attacks, or (3) smudge attacks. To address this problem, we propose GTmoPass, an authentication architecture that enables multi-factor user authentication on public displays. The first factor is a knowledge factor: a shoulder-surfing-resilient multimodal scheme that combines gaze and touch input for password entry. The second factor is a possession factor: users enter the password on their personal mobile devices. Credentials are securely transmitted to a server via Bluetooth beacons. We describe the implementation of GTmoPass and report on an evaluation of its usability and security, which shows that although authentication using GTmoPass is slightly slower than traditional methods, it protects against all three of the aforementioned threats.
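    The two-factor check described above can be sketched server-side: the knowledge factor is verified against a stored password hash, and the possession factor is a MAC the phone computes over a fresh challenge with a device-bound key. All names, the example gaze-touch token sequence, and the key-handling details are illustrative assumptions, not GTmoPass's actual protocol.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a verifier for the knowledge factor (the gaze-touch password)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical enrolled user record: password verifier + device-bound secret.
salt = secrets.token_bytes(16)
user_record = {
    "salt": salt,
    "verifier": hash_password("G-left T-3 G-up T-7", salt),  # gaze/touch token sequence
    "device_key": secrets.token_bytes(32),                   # stored on the phone at pairing
}

def authenticate(password: str, device_proof: bytes, challenge: bytes) -> bool:
    """Both factors must pass: the password AND a MAC computed by the phone."""
    knowledge_ok = hmac.compare_digest(
        hash_password(password, user_record["salt"]), user_record["verifier"])
    expected = hmac.new(user_record["device_key"], challenge, "sha256").digest()
    possession_ok = hmac.compare_digest(device_proof, expected)
    return knowledge_ok and possession_ok

# The display issues a challenge; the phone answers with its MAC plus the password.
challenge = secrets.token_bytes(16)
proof = hmac.new(user_record["device_key"], challenge, "sha256").digest()
print(authenticate("G-left T-3 G-up T-7", proof, challenge))  # True
```

    Note that neither the password alone (shoulder-surfed) nor the stolen phone alone (without the password) passes the check, which is the point of combining the two factors.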

    Implementation of Mouse Gesture Recognition

    In this paper, we summarize the authentication of computing systems by mouse gestures and illustrate the significance of its methodologies. A neural-network model and its analysis are used to achieve biometric authentication based on user behavior, and related approaches are surveyed. This paper reviews the realm of artificial neural networks and biometric methods that add a further, more secure layer of protection to computing systems. DOI: 10.17762/ijritcc2321-8169.150519
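    The pipeline implied by this abstract (behavioural features extracted from mouse movement, fed to a neural network) can be sketched in pure Python. The trajectory features, the single logistic neuron, and the two synthetic "users" below are illustrative assumptions, not the paper's actual model.

```python
import math
import random

def gesture_features(path):
    """Crude behavioural features from a mouse trajectory [(x, y), ...]:
    mean speed, speed variance, and total curvature."""
    vx = [b[0] - a[0] for a, b in zip(path, path[1:])]
    vy = [b[1] - a[1] for a, b in zip(path, path[1:])]
    speed = [math.hypot(x, y) for x, y in zip(vx, vy)]
    ang = [math.atan2(y, x) for x, y in zip(vx, vy)]
    curvature = sum(abs(b - a) for a, b in zip(ang, ang[1:]))
    mean = sum(speed) / len(speed)
    var = sum((s - mean) ** 2 for s in speed) / len(speed)
    return [mean, var, curvature]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(X, y, lr=0.5, epochs=200):
    """A single logistic neuron standing in for the paper's neural network."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0])))
            for i, xi in enumerate(x + [1.0]):
                w[i] -= lr * (p - t) * xi  # gradient of the logistic loss
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else 0

rng = random.Random(0)
def walk(step, jitter):
    """Synthetic mouse trajectory: mean step size + Gaussian jitter."""
    pos, path = (0.0, 0.0), []
    for _ in range(50):
        pos = (pos[0] + step + rng.gauss(0, jitter), pos[1] + rng.gauss(0, jitter))
        path.append(pos)
    return path

# User A moves fast and straight; user B moves slowly with jittery curves.
X = ([gesture_features(walk(5.0, 0.3)) for _ in range(20)]
     + [gesture_features(walk(0.5, 2.0)) for _ in range(20)])
# Normalise each feature to zero mean / unit variance before training.
for j in range(3):
    col = [x[j] for x in X]
    m = sum(col) / len(col)
    s = (sum((c - m) ** 2 for c in col) / len(col)) ** 0.5
    for x in X:
        x[j] = (x[j] - m) / s
y = [0] * 20 + [1] * 20
w = train_neuron(X, y)
correct = sum(predict(w, x) == t for x, t in zip(X, y))
print(correct)  # gestures correctly attributed to their user, out of 40
```

    A real system would of course use held-out gestures rather than training accuracy, and richer features (pauses, click timing, acceleration) than this three-number summary.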

    CueAuth: Comparing Touch, Mid-Air Gestures, and Gaze for Cue-based Authentication on Situated Displays

    Secure authentication on situated displays (e.g., to access sensitive information or to make purchases) is becoming increasingly important. A promising approach to resist shoulder surfing attacks is to employ cues that users respond to while authenticating; this overwhelms observers by requiring them to observe both the cue itself and the users’ response to it. Although previous work proposed a variety of modalities, such as gaze and mid-air gestures, to further improve security, an understanding of how they compare with regard to usability and security is still missing. In this paper, we rigorously compare modalities for cue-based authentication on situated displays. In particular, we provide the first comparison between touch, mid-air gestures, and calibration-free gaze using a state-of-the-art authentication concept. In two in-depth user studies (N=37) we found that the choice of touch or gaze presents a clear trade-off between usability and security. For example, while gaze input is more secure, it is also more demanding and requires longer authentication times. Mid-air gestures are slightly slower and more secure than touch, but users hesitate to use them in public. We conclude with three significant design implications for authentication using touch, mid-air gestures, and gaze, and discuss how the choice of modality creates opportunities and challenges for improved authentication in public.
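    The cue-based principle can be sketched concretely: the display shuffles which on-screen position holds which digit (the cue), and the user answers with positions rather than digits, so an observer must capture both the shuffle and the response to recover the PIN. The layout scheme below is a generic illustration, not the specific concept evaluated in the paper.

```python
import random

def make_cue(rng):
    """The display shuffles which on-screen position shows which digit."""
    digits = list(range(10))
    rng.shuffle(digits)
    return digits  # cue[position] = digit displayed at that position

def respond(cue, pin):
    """The user answers with *positions*, not digits: an observer needs
    both the cue and the response to learn the underlying PIN."""
    return [cue.index(d) for d in pin]

def verify(cue, response, pin):
    """The system, knowing the cue it displayed, maps positions back to digits."""
    return [cue[p] for p in response] == pin

rng = random.Random(42)
pin = [3, 1, 4, 1]
cue = make_cue(rng)       # fresh shuffle per attempt
resp = respond(cue, pin)
print(verify(cue, resp, pin))  # True
```

    Because the cue is re-randomised on every attempt, replaying an observed response against a new cue fails, which is what makes the scheme shoulder-surfing resistant regardless of whether the response comes in via touch, mid-air gesture, or gaze.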

    A Novel Taxonomy for Gestural Interaction Techniques Based on Accelerometers

    A large variety of gestural interaction techniques based on accelerometers is now available. In this article, we propose a new taxonomic space as a systematic structure for supporting the comparative analysis of these techniques as well as for designing new ones. An interaction technique is plotted as a point in a space where the vertical axis denotes the semantic coverage of the technique, and the horizontal axis expresses the physical actions users are engaged in, i.e. the lexicon. In addition, syntactic modifiers are used to express the interpretation process of input tokens into semantics, as well as pragmatic modifiers to make explicit the level of indirection between users' actions and system responses. To demonstrate the coverage of the taxonomy, we have classified 25 interaction techniques based on accelerometers. The analysis of the design space per se reveals directions for future research.
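    A technique-as-point representation with the two axes and two modifier families can be modelled as a small data structure, which is enough to support the kind of comparative grouping the taxonomy is meant for. Field names and value scales here are illustrative assumptions, not the paper's exact encoding.

```python
from dataclasses import dataclass, field

@dataclass
class Technique:
    """One point in the taxonomic space: two axes plus two modifier lists."""
    name: str
    lexicon: str                   # physical action (horizontal axis), e.g. "tilt"
    semantic_coverage: int         # breadth of commands covered (vertical axis)
    syntactic: list = field(default_factory=list)  # token-to-semantics interpretation
    pragmatic: list = field(default_factory=list)  # indirection: action vs. response

techniques = [
    Technique("tilt-to-scroll", "tilt", 2, syntactic=["continuous"], pragmatic=["direct"]),
    Technique("shake-to-undo", "shake", 1, syntactic=["discrete"], pragmatic=["indirect"]),
]

# Comparative analysis: group techniques sharing the same lexicon entry.
by_lexicon = {}
for t in techniques:
    by_lexicon.setdefault(t.lexicon, []).append(t.name)
print(by_lexicon)
```

    Gaps in such a grouping (lexicon entries with little semantic coverage, or unused modifier combinations) are exactly the "directions for future research" a design-space analysis surfaces.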

    LightTouch: Securely Connecting Wearables to Ambient Displays with User Intent

    Wearables are small and have limited user interfaces, so they often wirelessly interface with a personal smartphone/computer to relay information from the wearable for display or other interactions. In this paper, we envision a new method, LightTouch, by which a wearable can establish a secure connection to an ambient display, such as a television or a computer monitor, while ensuring the user's intention to connect to the display. LightTouch uses standard RF methods (like Bluetooth) for communicating the data to the display, securely bootstrapped via visible-light communication (the brightness channel) from the display to the low-cost, low-power ambient light sensor of a wearable. A screen 'touch' gesture is adopted by users to ensure that the modulation of screen brightness can be securely captured by the ambient light sensor with minimized noise. Wireless coordination with the processor driving the display establishes a shared secret based on the brightness channel information. We further propose novel on-screen localization and correlation algorithms to improve security and reliability. Through experiments and a preliminary user study we demonstrate that LightTouch is compatible with current display and wearable designs, is easy to use (about 6 seconds to connect), is reliable (up to 98% connection success ratio), and is secure against attacks.
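    The brightness-channel bootstrap can be sketched as a correlation check: the display modulates a secret bit pattern into its brightness, the wearable's light sensor samples it through noise, and a high correlation between the transmitted pattern and the sensed trace indicates the wearable really watched this screen. The modulation levels, noise model, and threshold are illustrative assumptions, not LightTouch's actual algorithms.

```python
import random

def transmit(bits, samples_per_bit=4):
    """Screen modulates brightness: bit 1 -> brighter frames, bit 0 -> darker."""
    return [0.8 if b else 0.2 for b in bits for _ in range(samples_per_bit)]

def sense(signal, rng, noise=0.05):
    """Ambient-light-sensor reading = screen brightness + sensor noise."""
    return [s + rng.uniform(-noise, noise) for s in signal]

def correlate(a, b):
    """Normalised (Pearson) correlation; high value -> sensor saw this screen."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

rng = random.Random(1)
secret_bits = [rng.randint(0, 1) for _ in range(32)]
seen = sense(transmit(secret_bits), rng)                             # right screen
other = sense(transmit([rng.randint(0, 1) for _ in range(32)]), rng) # wrong screen
match = correlate(transmit(secret_bits), seen)
mismatch = correlate(transmit(secret_bits), other)
print(match > mismatch)  # the paired display correlates far better
```

    Only after this check succeeds would the shared secret derived from the brightness pattern be used to authenticate the subsequent Bluetooth channel.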

    Gesture passwords: concepts, methods and challenges

    Biometrics are a convenient alternative to traditional forms of access control such as passwords and pass-cards since they rely solely on user-specific traits. Unlike alphanumeric passwords, biometrics cannot be given or told to another person, and unlike pass-cards, are always “on-hand.” Perhaps the most well-known biometrics with these properties are face, speech, iris, and gait. This dissertation proposes a new biometric modality: gestures. A gesture is a short body motion that contains static anatomical information and changing behavioral (dynamic) information. This work considers both full-body gestures such as a large wave of the arms, and hand gestures such as a subtle curl of the fingers and palm. For access control, a specific gesture can be selected as a “password” and used for identification and authentication of a user. If this particular motion were somehow compromised, a user could readily select a new motion as a “password,” effectively changing and renewing the behavioral aspect of the biometric. This thesis describes a novel framework for acquiring, representing, and evaluating gesture passwords for the purpose of general access control. The framework uses depth sensors, such as the Kinect, to record gesture information from which depth maps or pose features are estimated. First, various distance measures, such as the log-Euclidean distance between feature covariance matrices and distances based on feature sequence alignment via dynamic time warping, are used to compare two gestures and to train a classifier to either authenticate or identify a user. In authentication, this framework yields an equal error rate on the order of 1-2% for body and hand gestures in non-adversarial scenarios. Next, through a novel decomposition of gestures into posture, build, and dynamic components, the relative importance of each component is studied.
The dynamic portion of a gesture is shown to have the largest impact on biometric performance, with its removal causing a significant increase in error. In addition, the effects of two types of threats are investigated: one due to self-induced degradations (personal effects and the passage of time) and the other due to spoof attacks. For body gestures, both spoof attacks (with only the dynamic component) and self-induced degradations increase the equal error rate as expected. Further, the benefits of adding additional sensor viewpoints to this modality are empirically evaluated. Finally, a novel framework that leverages deep convolutional neural networks for learning a user-specific “style” representation from a set of known gestures is proposed and compared to a similar representation for gesture recognition. This deep convolutional neural network yields significantly improved performance over prior methods. A byproduct of this work is the creation and release of multiple publicly available, user-centric (as opposed to gesture-centric) datasets based on both body and hand gestures.
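    One of the distance measures named above, dynamic time warping, can be sketched in a few lines: it aligns a probe gesture to an enrolled template while tolerating tempo differences, which is exactly why the same motion performed slower still scores as a close match. The 1-D feature sequences below are toy stand-ins for the pose features the dissertation extracts.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences,
    used here to compare a probe gesture against an enrolled template."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch template
                                 cost[i][j - 1],      # stretch probe
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

template = [0, 1, 2, 3, 2, 1, 0]        # enrolled gesture (e.g. wrist angle over time)
genuine  = [0, 0, 1, 2, 3, 3, 2, 1, 0]  # same motion, performed a bit slower
impostor = [3, 2, 1, 0, 1, 2, 3]        # a different motion entirely
print(dtw_distance(template, genuine))   # → 0.0 despite the tempo difference
print(dtw_distance(template, impostor))  # large
```

    A verifier would threshold this distance (or feed it to a classifier, as the dissertation does) to accept or reject the probe.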

    Applications of Context-Aware Systems in Enterprise Environments

    In bring-your-own-device (BYOD) and corporate-owned, personally enabled (COPE) scenarios, employees’ devices store both enterprise and personal data, and have the ability to remotely access a secure enterprise network. While mobile devices enable users to access such resources in a pervasive manner, they also increase the risk of breaches of sensitive enterprise data, as users may access the resources under insecure circumstances. That is, access authorizations may depend on the context in which the resources are accessed. In both scenarios, it is vital that the security of accessible enterprise content is preserved. In this work, we explore the use of contextual information to influence access control decisions within context-aware systems to ensure the security of sensitive enterprise data. We propose several context-aware systems that rely on a system of sensors in order to automatically adapt access to resources based on the security of users’ contexts. We investigate various types of mobile devices with varying embedded sensors, and leverage these technologies to extract contextual information from the environment. As a direct consequence, the technologies utilized determine the types of contextual access control policies that the context-aware systems are able to support and enforce. Specifically, the work proposes the use of devices pervaded in enterprise environments, such as smartphones or WiFi access points, to authenticate user positional information within indoor environments as well as user identities.
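    The core idea (identity alone is not sufficient; the context of the request must also satisfy policy) can be sketched as a simple decision function. The policy shape, attribute names, and the "payroll only on-site over corporate WiFi" rule are hypothetical examples, not the systems proposed in the work.

```python
def access_decision(user, resource, context, policy):
    """Context-aware check: the requester's identity AND the sensed
    circumstances (location, network) must all satisfy the resource's rule."""
    rule = policy.get(resource)
    if rule is None:
        return False  # default deny for unknown resources
    return (user in rule["users"]
            and context["location"] in rule["locations"]
            and context["network"] in rule["networks"])

# Hypothetical enterprise policy: payroll data only on-site, over corporate WiFi.
policy = {
    "payroll-db": {
        "users": {"alice"},
        "locations": {"hq-floor-3"},
        "networks": {"corp-wifi"},
    }
}

# Contexts as a sensor system might report them (indoor position + network).
on_site = {"location": "hq-floor-3", "network": "corp-wifi"}
cafe    = {"location": "coffee-shop", "network": "public-wifi"}
print(access_decision("alice", "payroll-db", on_site, policy))  # True
print(access_decision("alice", "payroll-db", cafe, policy))     # False
```

    In the systems described above, the `context` dictionary would be populated by sensors (e.g., WiFi-based indoor positioning) rather than trusted client input, since the whole point is that the device's claimed context cannot be taken at face value.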