
    SAW: Wristband-Based Authentication for Desktop Computers

    Token-based proximity authentication methods that authenticate users based on physical proximity are effortless, but lack explicit user intentionality, which may result in accidental logins. For example, a user may get logged in when she is near a computer or just passing by, even if she does not intend to use that computer. Lack of user intentionality in proximity-based methods makes them less suitable for multi-user shared computer environments, despite their desired usability benefits over passwords.

    We present an authentication method for desktops called Seamless Authentication using Wristbands (SAW), which addresses the lack-of-intentionality limitation of proximity-based methods. SAW uses a low-effort user input step for explicitly conveying user intentionality, while keeping the overall usability of the method better than password-based methods. In SAW, a user wears a wristband that acts as the user's identity token, and to authenticate to a desktop, the user provides a low-effort input by tapping a key on the keyboard multiple times or wiggling the mouse with the wristband hand. This input conveys that someone wishes to log in to the desktop, and SAW verifies who that is by confirming the user's proximity and correlating the received keyboard or mouse inputs with the user's wrist movement, as measured by the wristband. In our feasibility user study (n=17), SAW proved quick to authenticate (within two seconds), with a low false-negative rate of 2.5% and a worst-case false-positive rate of 1.8%. In our user perception study (n=16), a majority of the participants rated it as more usable than passwords.
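    The correlation step described above can be illustrated with a minimal sketch. Everything below (function names, the Pearson-correlation choice, the 0.5 threshold, the toy signals) is an assumption for illustration, not the SAW authors' implementation: the idea is simply that the wearer's wrist-motion energy should co-vary in time with the keyboard events the desktop receives.

    ```python
    # Hypothetical sketch of SAW-style input/wrist correlation; the
    # threshold and signals are invented for illustration.
    import numpy as np

    def correlation_score(key_events: np.ndarray, wrist_motion: np.ndarray) -> float:
        """Pearson correlation between a binary keystroke-event series and
        a wrist motion-energy series sampled on the same clock."""
        if key_events.std() == 0 or wrist_motion.std() == 0:
            return 0.0  # a flat signal cannot corroborate the input
        return float(np.corrcoef(key_events, wrist_motion)[0, 1])

    def same_user(key_events, wrist_motion, threshold=0.5) -> bool:
        return correlation_score(key_events, wrist_motion) >= threshold

    # The wearer's wrist moves exactly when keys are pressed;
    # a bystander's wristband shows no matching motion.
    taps   = np.array([0, 1, 0, 1, 0, 1, 0, 0], dtype=float)
    moving = np.array([0.1, 0.9, 0.0, 1.0, 0.1, 0.8, 0.0, 0.1])
    still  = np.zeros(8)

    # same_user(taps, moving) → True; same_user(taps, still) → False
    ```

    A real system would work on timestamped accelerometer windows rather than toy arrays, but the accept/reject decision has the same shape.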

    Interaction Methods for Smart Glasses: A Survey

    Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of making them a full augmented reality interface has not yet been attained, largely due to cumbersome controls. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. The survey first studies the smart glasses available on the market and afterwards investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input; this paper focuses mainly on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.

    Typing on Any Surface: A Deep Learning-based Method for Real-Time Keystroke Detection in Augmented Reality

    A frustrating text entry interface has been a major obstacle to participating in social activities in augmented reality (AR). Popular options, such as mid-air keyboard interfaces, wireless keyboards, or voice input, either suffer from poor ergonomic design and limited accuracy or are simply embarrassing to use in public. This paper proposes and validates a deep-learning-based approach that enables AR applications to accurately predict keystrokes from the user-perspective RGB video stream that any AR headset can capture. This lets a user type on any flat surface and eliminates the need for a physical or virtual keyboard. A two-stage model, combining an off-the-shelf hand-landmark extractor and a novel adaptive Convolutional Recurrent Neural Network (C-RNN), was trained using our newly built dataset. The final model was capable of adaptively processing user-perspective video streams at ~32 FPS. This base model achieved an overall accuracy of 91.05% when typing at 40 words per minute (WPM), roughly the speed at which an average person types with two hands on a physical keyboard. The Normalised Levenshtein Distance further confirmed the real-world applicability of our approach. These promising results highlight the viability of our approach and the potential for our method to be integrated into various applications. We also discuss the limitations and the future research required to bring such a technique into a production system.
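    The Normalised Levenshtein Distance mentioned above is a standard text-entry metric: the minimum number of single-character edits between the typed and intended strings, divided by the longer string's length. A minimal reference implementation (the textbook algorithm, not the authors' code) looks like this:

    ```python
    # Classic dynamic-programming edit distance, row by row.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def normalised_levenshtein(a: str, b: str) -> float:
        """Edit distance scaled to [0, 1]; 0 means identical strings."""
        if not a and not b:
            return 0.0
        return levenshtein(a, b) / max(len(a), len(b))

    # normalised_levenshtein("keyboard", "keybaord") → 0.25
    ```

    Normalising by length makes scores comparable across test phrases of different sizes, which is why the metric is common in keystroke-prediction evaluations.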

    Using wrist vibrations to guide hand movement and whole body navigation

    In the absence of vision, mobility and orientation are challenging. Audio and tactile feedback can be used to guide visually impaired people. In this paper, we present two complementary studies on the use of vibrational cues for hand guidance during the exploration of itineraries on a map, and for whole-body guidance in a virtual environment. Concretely, we designed wearable Arduino bracelets integrating a vibration motor that produces multiple patterns of pulses. In the first study, this bracelet was used to guide the hand along unknown routes on an interactive tactile map. A wizard-of-Oz study with six blindfolded participants showed that tactons (structured vibration patterns) may be more efficient than audio cues for indicating directions. In the second study, the bracelet was used by blindfolded participants to navigate in a virtual environment. The results presented here show that vibrational cues can significantly decrease travel distance. To sum up, these preliminary but complementary studies suggest the value of vibrational feedback in assistive technology for mobility and orientation for blind people.
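    A tacton of the kind described above can be thought of as a short on/off pulse sequence played on the bracelet's single motor. The sketch below is illustrative only: the direction names and pulse durations are invented, not taken from the paper's firmware.

    ```python
    # Hypothetical tacton table: each direction maps to a list of
    # (motor_on_ms, motor_off_ms) pulses. Durations are invented.
    TACTONS = {
        "forward": [(100, 50)],                        # one short pulse
        "left":    [(100, 50), (100, 50)],             # two short pulses
        "right":   [(300, 50)],                        # one long pulse
        "stop":    [(100, 50), (100, 50), (100, 50)],  # three short pulses
    }

    def pattern_duration_ms(direction: str) -> int:
        """Total playback time of a tacton, including inter-pulse gaps."""
        return sum(on + off for on, off in TACTONS[direction])

    # pattern_duration_ms("left") → 300
    ```

    Encoding directions as count/length contrasts (rather than intensity) is a common tacton design choice, since pulse counts are easier to distinguish on a single low-cost motor.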

    Enhancing Virtual Reality Systems with Smart Wearable Devices

    The proliferation of wearable and smartphone devices with embedded sensors has enabled researchers and engineers to study and understand user behavior at extremely high fidelity, particularly in industries such as entertainment, health, and retail. However, the identified user patterns are yet to be integrated into modern systems with immersive capabilities, such as VR systems, which remain constrained by the limited application interaction models exposed to developers. In this paper, we present SmartVR, a platform that allows developers to seamlessly incorporate user behavior into VR apps. We present the high-level architecture of SmartVR and show how it facilitates communication, data acquisition, and context recognition between smart wearable devices and mediator systems (e.g., smartphones, tablets, PCs). We demonstrate SmartVR in the context of a VR app for retail stores to show how it can replace cumbersome input devices (e.g., mouse, keyboard) with more natural means of user-app interaction (e.g., gestures such as swiping and tapping) to improve user experience.

    Specialized CNT-based Sensor Framework for Advanced Motion Tracking

    In this work, we discuss the design and development of an advanced framework for high-fidelity finger motion tracking based on specialized Carbon Nanotube (CNT) stretchable sensors developed at our research facilities. Earlier versions of the CNT sensors have been employed in the high-fidelity finger-motion-tracking Data Glove commercialized by Yamaha, Japan. The framework presented in this paper encompasses our continuing research and development of more advanced CNT-based sensors and the implementation of novel high-fidelity motion tracking products based on them. The CNT sensor production and communication framework components are considered in detail, and wireless motion tracking experiments with the developed hardware and software components, integrated with the Yamaha Data Glove, are reported.

    Affective Games: A Multimodal Classification System

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychological state are reflected in their behaviour and physiology, so recognition of such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherent game-related challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game, and even to control it without input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.

    Detecting Surface Interactions via a Wearable Microphone to Improve Augmented Reality Text Entry

    This thesis investigates whether we can detect and distinguish between surface interaction events, such as tapping or swiping, using a wearable microphone, and what advantages new text entry methods offer, such as tapping with two fingers simultaneously to enter capital letters and punctuation. For this purpose, we conducted a remote study to collect audio and video of three different ways people might interact with a surface. We also built a CNN classifier to detect taps. Our results show that we can detect and distinguish between surface interaction events such as tap or swipe via a wearable microphone on the user's head.
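    The thesis trains a CNN on audio, but the underlying acoustic cue (a tap is a brief high-energy transient, while a swipe is a longer, quieter burst) can be illustrated with a simple short-time-energy heuristic. The frame size, threshold, and synthetic signals below are assumptions for illustration, not the thesis's classifier.

    ```python
    # Toy tap-vs-swipe discriminator on short-time energy; the CNN in the
    # thesis learns richer spectral features, this only shows the cue.
    import numpy as np

    def frame_energy(audio: np.ndarray, frame_len: int = 256) -> np.ndarray:
        """Mean-square energy of consecutive non-overlapping frames."""
        n = len(audio) // frame_len
        frames = audio[: n * frame_len].reshape(n, frame_len)
        return (frames ** 2).mean(axis=1)

    def classify_event(audio: np.ndarray, energy_thresh: float = 0.01,
                       max_tap_frames: int = 3) -> str:
        """'tap' if the high-energy region is brief, 'swipe' if sustained."""
        active = frame_energy(audio) > energy_thresh
        return "tap" if 0 < active.sum() <= max_tap_frames else "swipe"

    # Synthetic examples: a short impulse vs. a long rubbing sound.
    tap_sound = np.zeros(4096);   tap_sound[1000:1512] = 0.5
    swipe_sound = np.zeros(4096); swipe_sound[500:3500] = 0.2

    # classify_event(tap_sound) → 'tap'; classify_event(swipe_sound) → 'swipe'
    ```

    A learned classifier replaces the hand-set threshold and duration cutoff with features learned from labeled recordings, which is what makes it robust to different surfaces and microphones.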