206 research outputs found

    A Human-Computer Interface Replacing Mouse and Keyboard for Individuals with Limited Upper Limb Mobility

    People with physical disabilities of the upper extremities face serious difficulties using conventional input devices because of their limited movement range and precision. This article proposes an alternative input concept and presents corresponding input devices. The proposed interface combines an inertial measurement unit and force-sensing resistors, which together can replace mouse and keyboard. Head motions are mapped to mouse-pointer positions, while mouse-button actions are triggered by contracting the mastication muscles. The contact pressure of each fingertip is acquired to replace the conventional keyboard. To allow for complex text entry, the sensory concept is complemented by an ambiguous keyboard layout with ten keys, and the associated word-prediction function provides disambiguation at the word level. Haptic feedback corresponding to virtual keystrokes is provided to users for enhanced closed-loop interaction. This alternative input system enables text input as well as the emulation of a two-button mouse.
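
    As a rough illustration of the word-level disambiguation described above, the following Python sketch maps each letter onto one of ten keys and ranks the dictionary words that share a key sequence by frequency. The letter grouping and the tiny lexicon are invented for illustration; they are not the paper's actual layout or prediction model.

    # Hypothetical grouping of the alphabet (plus space) onto ten keys,
    # one per fingertip; the paper's real layout may differ.
    GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwx", "yz", " "]
    KEY_OF = {ch: key for key, letters in enumerate(GROUPS) for ch in letters}

    # Illustrative unigram frequencies; a real system would use a corpus.
    LEXICON = {"the": 1000, "tie": 40, "tid": 1, "good": 300, "home": 250}

    def key_sequence(word):
        """Map a word to the sequence of ambiguous keys that types it."""
        return tuple(KEY_OF[ch] for ch in word)

    def candidates(keys):
        """Return dictionary words matching a key sequence, most frequent first."""
        matches = [w for w in LEXICON if key_sequence(w) == tuple(keys)]
        return sorted(matches, key=lambda w: -LEXICON[w])

    # "the", "tie", and "tid" all share the key code (6, 2, 1); the
    # predictor offers "the" first and the others as alternatives.
    print(candidates(key_sequence("the")))  # ['the', 'tie', 'tid']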

    Seamless Authentication for Ubiquitous Devices

    User authentication is an integral part of our lives; we authenticate ourselves to personal computers and a variety of other things several times a day. Authentication is burdensome. When we wish to access a computer or a resource, it is an additional task we must perform, an interruption in our workflow. In this dissertation, we study people's authentication behavior and attempt to make authentication to desktops and smartphones less burdensome for users. First, we present the findings of a user study we conducted to understand people's authentication behavior: the things they authenticate to, how and when they authenticate, the authentication errors they encounter and why, and their opinions about authentication. In our study, participants performed about 39 authentications per day on average; the majority of these authentications were to personal computers (desktop, laptop, smartphone, tablet) and with passwords, but the number of authentications to other things (e.g., car, door) was not insignificant. We saw a high failure rate for desktop and laptop authentication among our participants, affirming the need for a more usable authentication method. Overall, we found that authentication was a noticeable part of all our participants' lives and burdensome for many, but they accepted it as the cost of security, devising their own ways to cope with it. Second, we propose a new approach to authentication, called bilateral authentication, that leverages wrist-wearable technology to enable seamless authentication for things that people use with their hands while wearing a smart wristband. In bilateral authentication, two entities (e.g., the user's wristband and the user's phone) share their knowledge (e.g., about the user's interaction with the phone) to verify the user's identity. Using this approach, we developed a seamless authentication method for desktops and smartphones. Our method offers quick and effortless authentication, continuous user verification while the desktop (or smartphone) is in use, and automatic deauthentication after use. We evaluated our method through four in-lab user studies, assessing its usability and security from both the system's and the user's perspective. Based on this evaluation, our authentication method shows promise for reducing users' authentication burden for desktops and smartphones.
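
    The following Python sketch illustrates the bilateral idea at its simplest, under assumptions of our own: each entity reports the times at which it sensed user activity, and identity is accepted only when the two reports agree within a tolerance. The Observation structure, the 0.2 s tolerance, and the agreement test are illustrative, not the dissertation's protocol.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        device: str        # which entity produced the report
        event_times: list  # seconds at which it sensed user activity

    def agree(a, b, tolerance=0.2):
        """Accept only if every event one side saw has a close match on the other."""
        def matched(times, others):
            return all(any(abs(t - o) <= tolerance for o in others) for t in times)
        return matched(a.event_times, b.event_times) and matched(b.event_times, a.event_times)

    wristband = Observation("wristband", [1.02, 1.48, 2.11])  # wrist-motion peaks
    phone = Observation("phone", [1.05, 1.50, 2.14])          # touch events
    print("authenticated" if agree(wristband, phone) else "rejected")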

    SAW: Wristband-Based Authentication for Desktop Computers

    Token-based methods that authenticate users based on physical proximity are effortless but lack explicit user intentionality, which may result in accidental logins. For example, a user may get logged in when she is near a computer or just passing by, even if she does not intend to use it. This lack of intentionality makes proximity-based methods less suitable for multi-user shared computer environments, despite their usability benefits over passwords. We present an authentication method for desktops called Seamless Authentication using Wristbands (SAW), which addresses this limitation. SAW uses a low-effort user input step to convey intentionality explicitly, while keeping the method's overall usability better than that of passwords. In SAW, a user wears a wristband that acts as her identity token; to authenticate to a desktop, she provides a low-effort input by tapping a key on the keyboard multiple times or wiggling the mouse with the wristband hand. This input conveys that someone wishes to log in, and SAW verifies who by confirming the user's proximity and correlating the received keyboard or mouse inputs with the user's wrist movement, as measured by the wristband. In our feasibility user study (n=17), SAW proved quick to authenticate (within two seconds), with a low false-negative rate of 2.5% and a worst-case false-positive rate of 1.8%. In our user-perception study (n=16), a majority of participants rated it as more usable than passwords.
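
    A rough Python sketch of the verification step, assuming SAW-like inputs: the desktop bins its recent keystroke counts, the wristband bins its motion energy over the same interval, and login is granted only if the two series co-vary. The bin width, window length, and 0.7 threshold are assumptions, not values from the paper.

    import math

    def pearson(x, y):
        """Plain Pearson correlation between two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    def verify_login(key_counts, wrist_energy, threshold=0.7):
        """Grant login only if keyboard activity tracks wrist movement."""
        return pearson(key_counts, wrist_energy) >= threshold

    # Keystrokes per 100 ms at the desktop vs. wristband motion energy,
    # recorded while the same hand taps the key: strongly co-varying.
    keys  = [0, 2, 3, 0, 1, 3, 0, 0]
    wrist = [0.1, 1.9, 2.7, 0.2, 1.2, 2.9, 0.1, 0.3]
    print(verify_login(keys, wrist))  # True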

    BimodalGaze: Seamlessly Refined Pointing with Gaze and Filtered Gestural Head Movement

    Eye gaze is a fast and ergonomic modality for pointing, but it is limited in precision and accuracy. In this work, we introduce BimodalGaze, a novel technique for seamless head-based refinement of a gaze cursor. The technique leverages insights from eye-head coordination to separate natural from gestural head movement. This allows users to quickly shift their gaze to targets over larger fields of view with naturally combined eye-head movement, and to refine the cursor position with gestural head movement. In contrast to an existing baseline, head refinement is invoked automatically, and only if the initial gaze shift has not already acquired the target. Study results show that users reliably achieve fine-grained target selection, although we observed a higher rate of initial selection errors that affected overall performance. An in-depth analysis of user performance provides insight into the classification of natural versus gestural head movement, for the improvement of BimodalGaze and other potential applications.
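
    As an illustration of the separation described above, the Python sketch below uses an assumed simple rule: head movement accompanied by a large gaze shift is treated as natural eye-head coordination, while head movement with the gaze roughly still is treated as gestural and drives fine cursor refinement. The 2-degree stillness threshold and the gain are assumptions, not the paper's classifier.

    def refine_cursor(cursor, head_delta, gaze_delta,
                      gaze_still_deg=2.0, gain=0.5):
        """Return an updated (x, y) cursor position for one frame."""
        gaze_speed = (gaze_delta[0] ** 2 + gaze_delta[1] ** 2) ** 0.5
        if gaze_speed > gaze_still_deg:
            # Natural eye-head movement: snap the cursor to the new gaze point.
            return (cursor[0] + gaze_delta[0], cursor[1] + gaze_delta[1])
        # Gestural head movement: fine-grained refinement driven by the head.
        return (cursor[0] + gain * head_delta[0],
                cursor[1] + gain * head_delta[1])

    # A large saccade repositions the cursor coarsely...
    pos = refine_cursor((0.0, 0.0), head_delta=(1.0, 0.0), gaze_delta=(10.0, 4.0))
    # ...then small head gestures refine it while the gaze holds the target.
    pos = refine_cursor(pos, head_delta=(-0.6, 0.2), gaze_delta=(0.3, -0.1))
    print(pos)  # (9.7, 4.1)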

    ZEBRA: Zero-Effort Bilateral Recurring Authentication

    Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users often do not log out, however, which is a security risk. The most common solution, inactivity timeouts, inevitably sacrifices either security (too long a timeout) or usability (too short a timeout). One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby, not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests. To address this problem we propose ZEBRA. In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal, the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data and the input events received by the terminal should correlate, because they share the same source: the user's hand movement. In our experiments ZEBRA performed continuous authentication with 85% accuracy in verifying the correct user and identified all adversaries within 11 s. For a different threshold that trades security for usability, ZEBRA correctly verified 90% of users and identified all adversaries within 50 s.
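
    The simplified Python sketch below captures the recurring-verification loop described above, with assumptions of our own: both the terminal and the bracelet label short time windows with the activity they observed, and the user stays logged in while enough recent windows agree. The window labels, history length, and 0.6 threshold are illustrative, not ZEBRA's parameters.

    from collections import deque

    class ContinuousVerifier:
        def __init__(self, history=20, threshold=0.6):
            self.matches = deque(maxlen=history)  # rolling agreement record
            self.threshold = threshold

        def observe(self, terminal_activity, bracelet_activity):
            """Feed one window's labels; return False to trigger logout."""
            self.matches.append(terminal_activity == bracelet_activity)
            score = sum(self.matches) / len(self.matches)
            return score >= self.threshold

    verifier = ContinuousVerifier()
    # Legitimate use: wrist movement explains the terminal's input events.
    for _ in range(10):
        assert verifier.observe("typing", "typing")
    # An adversary types while the bracelet wearer's hand is elsewhere:
    state = True
    for _ in range(10):
        state = verifier.observe("typing", "idle")
    print("logged out" if not state else "still authenticated")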

    EYECOM: an innovative approach for computer interaction

    The world is innovating rapidly, and there is a need for continuous interaction with technology. Unfortunately, few promising options exist for paralyzed people to interact with machines such as laptops, smartphones, and tablets. The available commercial solutions, such as Google Glass, are costly and cannot be afforded by every paralyzed person. Towards this end, the thesis proposes a retina-controlled device called EYECOM. The proposed device is constructed from off-the-shelf, cost-effective yet robust IoT components (i.e., Arduino microcontrollers, XBee wireless radios, IR diodes, and an accelerometer). The device can easily be mounted onto glasses; the paralyzed person using it can interact with the machine through simple head movements and eye blinks. An IR diode located in front of the eye illuminates the eye region, and a detector converts the reflected IR light into an electrical signal; when the eyelids close, the reflection from the eye surface is disrupted, and this change in the measured value is recorded. To enable cursor movement on the computer screen, an accelerometer is used: a small device, roughly the size of a thumb phalanx, that operates on the principle of axis-based motion sensing and can be worn as a ring. A microcontroller processes the inputs from the IR sensor and the accelerometer and transmits them wirelessly via an XBee radio to another microcontroller attached to the computer. Using the proposed algorithm, the receiving microcontroller moves the cursor on the computer screen and enables actions ranging from opening a document to operating text-to-speech software. With EYECOM, paralyzed persons can continue contributing to the technological world and remain an active part of society, performing a range of tasks without depending on others, from reading a newspaper on the computer to activating text-to-speech software.
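
    The Python sketch below illustrates host-side handling of the two signals described above, with assumed scales: the IR reading dips while the eyelid is closed, so a short dip is counted as a blink (a click), and the thumb-worn accelerometer's tilt is mapped to a cursor displacement. All thresholds, units, and framing are illustrative, not EYECOM's firmware.

    BLINK_LEVEL = 300      # ADC counts; reflectance falls below this when closed
    BLINK_FRAMES = (2, 8)  # dips shorter/longer than this are noise or rest

    def detect_blink(ir_samples):
        """Count frames below threshold and decide if the dip was a blink."""
        dip = sum(1 for v in ir_samples if v < BLINK_LEVEL)
        return BLINK_FRAMES[0] <= dip <= BLINK_FRAMES[1]

    def cursor_step(accel_x, accel_y, gain=20.0, deadzone=0.05):
        """Map thumb-worn accelerometer tilt (in g) to a cursor displacement."""
        dx = 0.0 if abs(accel_x) < deadzone else gain * accel_x
        dy = 0.0 if abs(accel_y) < deadzone else gain * accel_y
        return dx, dy

    # One received radio frame: eight IR readings and one tilt sample.
    ir_window = [620, 615, 280, 250, 240, 590, 610, 625]  # brief dip: a blink
    print(detect_blink(ir_window))   # True -> issue a click
    print(cursor_step(0.12, -0.04))  # (2.4, 0.0): move right; y in deadzone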

    Integrating Usability Models into Pervasive Application Development

    This thesis describes novel processes in two important areas of human-computer interaction (HCI) and demonstrates ways to combine them appropriately. First, prototyping plays an essential role in the development of complex applications, especially when a user-centred design process is followed. We describe and compare a set of existing toolkits and frameworks that support the development of prototypes in the area of pervasive computing. Based on these observations, we introduce the EIToolkit, which allows the quick generation of mobile and pervasive applications and addresses many issues found in previous works. Its application and use are demonstrated in several projects based on the architecture and an implementation of the toolkit. Second, we present novel results and extensions in user modelling, specifically for predicting task completion times. We extend established concepts such as the Keystroke-Level Model to novel types of interaction with mobile devices, e.g. using optical markers and gestures. The design, creation, and validation of this model are presented in detail to show its use and usefulness for making usability predictions. The third part is concerned with the combination of both concepts, i.e. how to integrate user models into the design process of pervasive applications. We first examine current development practice and show generic approaches to this problem. This leads to a concrete implementation of such a solution: an innovative integrated development environment that allows mobile applications to be developed quickly, supports the automatic generation of user models, and helps apply these models early in the design process. This can considerably ease the process of model creation and can replace some types of costly user studies.

    [Translated from the German abstract:] This dissertation describes novel methods in two important areas of human-computer interaction and explains ways of combining them appropriately. On the one hand, prototyping plays a special role in the development process, particularly when user-centred development methods are employed. A number of existing works that support the development of prototype applications, specifically in the area of pervasive computing, are therefore presented and compared. A custom set of tools and components is presented that addresses many of the drawbacks and problems identified in such existing projects and offers corresponding solutions. Several examples and original works built on this architecture are described. On the other hand, new research results are presented that extend user-modelling methods, specifically in the area of estimating interaction times. With the extensions developed in this dissertation, established concepts such as the Keystroke-Level Model can be applied to current and novel ways of interacting with mobile devices. The design, creation, and validation of the results of these extensions are presented in detail. A third part deals with ways of suitably combining the two described concepts, prototype development in pervasive computing on the one hand and user modelling on the other. Existing approaches are examined and generic integration options described. This leads to concrete implementations of such solutions, both for integration into existing environments and in the form of a standalone application specialised in the development of programs for mobile devices. It allows prototypes to be created quickly, supports the automatic creation of specialised user models, and enables these models to be used early in the development process. This eases the application of such models and can save the effort and cost of corresponding user studies.
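
    As a worked example of the kind of prediction the Keystroke-Level Model provides, the Python snippet below sums the commonly cited textbook operator times (K keystroke, P point with the mouse, H home hands between devices, M mental preparation) for a simple menu-selection task. The task breakdown is an illustrative assumption, not one of the thesis's extended models.

    # Commonly cited KLM operator times in seconds (average typist for K).
    KLM = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

    def predict(sequence):
        """Sum operator times for a task written as a string such as 'MHPK'."""
        return sum(KLM[op] for op in sequence)

    # Selecting a menu item with the mouse: think, reach for the mouse,
    # point at the item, click.
    print(predict("MHPK"))  # 1.35 + 0.40 + 1.10 + 0.28 = 3.13 seconds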

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers people with impaired motor ability opportunities to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment and also provide opportunities for social interaction in multi-player environments. Games are also being used increasingly in education to motivate and engage young people, so it is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period, starting in the early 1990s, that has investigated gaze-based interaction techniques intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction; of these, overcoming the inherent inaccuracy of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and online virtual worlds, and different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase then studied the applicability of the research findings to groups of people with motor impairments and, in particular, the means of adapting the interaction techniques to individual abilities. In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing the tasks in specific types of games, and how these can be adapted to the differing abilities of individuals with motor impairments.
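
    As one standard illustration of working around the gaze-pointing inaccuracy mentioned above, the Python sketch below smooths noisy gaze samples and triggers a selection only after the smoothed point dwells inside a target for a set time. The smoothing factor, 100-pixel target radius, and 500 ms dwell are assumptions for illustration, not parameters from the thesis.

    def dwell_select(samples, target, radius=100.0, dwell_ms=500,
                     sample_ms=50, alpha=0.3):
        """Return True if smoothed gaze stays on target long enough to click."""
        sx, sy = samples[0]
        on_target_ms = 0
        for x, y in samples[1:]:
            # Exponential smoothing damps saccadic jitter in the raw samples.
            sx, sy = sx + alpha * (x - sx), sy + alpha * (y - sy)
            dist = ((sx - target[0]) ** 2 + (sy - target[1]) ** 2) ** 0.5
            on_target_ms = on_target_ms + sample_ms if dist <= radius else 0
            if on_target_ms >= dwell_ms:
                return True
        return False

    # Noisy fixations around a button at (400, 300) accumulate into a click.
    gaze = [(380, 310), (415, 290), (395, 305), (410, 298)] * 5
    print(dwell_select(gaze, target=(400, 300)))  # True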

    Designing an ecology of distributed agents

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1998. Includes bibliographical references (p. 87-92). By Nelson Minar.

    Wearable computing and contextual awareness

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (leaves 231-248). By Thad Eugene Starner.

    Computer hardware continues to shrink in size and increase in capability. This trend has allowed the prevailing concept of a computer to evolve from the mainframe to the minicomputer to the desktop. Just as the physical hardware changes, so does the use of the technology, tending towards more interactive and personal systems. Currently, another physical change is underway, placing computational power on the user's body. These wearable machines encourage new applications that were formerly infeasible and, correspondingly, will result in new usage patterns. This thesis suggests that the fundamental improvement offered by wearable computing is an increased sense of user context. I hypothesize that on-body systems can sense the user's context with little or no assistance from environmental infrastructure. These body-centered systems that "see" as the user sees and "hear" as the user hears provide a unique "first-person" viewpoint of the user's environment. By exploiting models recovered by these systems, interfaces are created which require minimal directed action or attention by the user. In addition, more traditional applications are augmented by the contextual information recovered by these systems. To investigate these issues, I provide perceptually sensible tools for recovering and modeling user context in a mobile, everyday environment. These tools include a downward-facing, camera-based system for establishing the location of the user; a tag-based object recognition system for augmented reality; and several on-body gesture recognition systems to identify various user tasks in constrained environments. To address the practicality of contextually-aware wearable computers, issues of power recovery, heat dissipation, and weight distribution are examined. In addition, I have encouraged a community of wearable computer users at the Media Lab through design, management, and support of hardware and software infrastructure. This unique community provides a heightened awareness of the use and social issues of wearable computing. As much as possible, the lessons from this experience will be conveyed in the thesis.