
    Pointing Devices for Wearable Computers

    We present a survey of pointing devices for wearable computers, which are body-mounted devices that users can access at any time. Since traditional pointing devices (i.e., mouse, touchpad, and trackpoint) were designed to be used on a steady and flat surface, they are inappropriate for wearable computers. Just as the advent of laptops resulted in the development of the touchpad and trackpoint, the emergence of wearable computers is leading to the development of pointing devices designed for them. However, unlike laptops, since wearable computers are operated from different body positions under different environmental conditions for different uses, researchers have developed a variety of innovative pointing devices for wearable computers characterized by their sensing mechanism, control mechanism, and form factor. We survey a representative set of pointing devices for wearable computers using an “adaptation of traditional devices” versus “new devices” dichotomy and study devices according to their control and sensing mechanisms and form factor. The objective of this paper is to showcase a variety of pointing devices developed for wearable computers and bring structure to the design space for wearable pointing devices. We conclude that a de facto pointing device for wearable computers, unlike laptops, is not likely to emerge.

    An analysis of interaction in the context of wearable computers

    The focus of this thesis is on the evaluation of input modalities for generic input tasks, such as inputting text and pointer-based interaction. In particular, input systems that can be used within a wearable computing system are examined in terms of human-wearable computer interaction. The literature identified a lack of empirical research into the use of input devices for text input and pointing when used as part of a wearable computing system. The research carried out within this thesis took an approach that acknowledged the movement condition of the user of a wearable system, and evaluated the wearable input devices while the participants were mobile and stationary. Each experiment was based on the user's time on task, their accuracy, and a NASA TLX assessment which provided the participant's subjective workload. The input devices assessed were 'off the shelf' systems. These were chosen as they are readily available to a wider range of users than bespoke input systems. Text-based input was examined first. The text input systems evaluated were: a keyboard, an on-screen keyboard, a handwriting recognition system, a voice recognition system and a wrist-keyboard (sometimes known as a wrist-worn keyboard). It was found that the most appropriate text input system to use overall was the handwriting recognition system. (This is further explored in the discussion of Chapters three and seven.) The text input evaluations were followed by a series of four experiments that examined pointing devices, and assessed their appropriateness as part of a wearable computing system. The devices were: an off-table mouse, a speech recognition system, a stylus and a track-pad. These were assessed in relation to the following generic pointing tasks: target acquisition, dragging and dropping, and trajectory-based interaction. Overall the stylus was found to be the most appropriate input device for use with a wearable system, when used as a pointing device.
(This is further covered in Chapters four to six.) By completing this series of experiments, evidence has been scientifically established that can support both a wearable computer designer's and a wearable user's choice of input device. These choices can be made in regard to generic interface task activities such as inputting text, target acquisition, dragging and dropping, and trajectory-based interaction.

    The Design, Implementation, and Evaluation of a Pointing Device For a Wearable Computer

    U.S. Air Force special tactics operators at times use small wearable computers (SWCs) for mission objectives. The primary pointing device of a SWC is either a touchpad or trackpoint embedded into the chassis of the SWC. In situations where the user cannot directly interact with these pointing devices, the utility of the SWC is decreased. We developed a pointing device called the G3 that can be used for SWCs used by operators. The device utilizes gyroscopic sensors attached to the user's index finger to move the computer cursor according to the angular velocity of his finger. We showed that, as measured by Fitts's law, the overall performance and accuracy of the G3 was better than that of the touchpad and trackpoint. These findings suggest that the G3 can adequately be used with SWCs. Additionally, we investigated the G3's utility as a control device for operating micro remotely piloted aircraft.
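    The abstract above compares devices "as measured by Fitts's law" but does not give the exact formulation used. As a hedged illustration only (the distances, widths, and times below are hypothetical, not from the paper), the standard Shannon formulation of Fitts's index of difficulty and the derived throughput metric can be sketched as:

    ```python
    import math

    def index_of_difficulty(distance: float, width: float) -> float:
        """Shannon formulation of Fitts's index of difficulty, in bits."""
        return math.log2(distance / width + 1)

    def throughput(distance: float, width: float, movement_time: float) -> float:
        """Throughput in bits per second for a single pointing trial."""
        return index_of_difficulty(distance, width) / movement_time

    # Hypothetical trial: cursor travels 256 px to a 32 px target in 0.9 s.
    id_bits = index_of_difficulty(256, 32)  # log2(9) ≈ 3.17 bits
    tp = throughput(256, 32, 0.9)           # ≈ 3.52 bits/s
    ```

    In practice such per-trial throughputs are averaged over many trials and conditions, which is how a device like the G3 would be ranked against a touchpad or trackpoint.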

    Designing Intra-Hand Input for Wearable Devices

    Department of Biomedical Engineering (Human Factors Engineering). Current trends toward the miniaturization of digital technology have enabled the development of versatile smart wearable devices. Powered by capable processors and equipped with advanced sensors, this novel device category can substantially impact application areas as diverse as education, health care, and entertainment. However, despite their increasing sophistication and potential, input techniques for wearable devices are still relatively immature and often fail to reflect key practical constraints in this design space. For example, on-device touch surfaces, such as the temple touchpad of Google Glass, are typically small and out of sight, thus limiting their expressivity. Furthermore, input techniques designed specifically for Head-Mounted Displays (HMDs), such as free-hand (e.g., Microsoft Hololens) or dedicated-controller (e.g., Oculus VR) tracking, exhibit low levels of social acceptability (e.g., large-scale hand gestures are arguably unsuited for use in public settings) and are prone to causing fatigue (e.g., gorilla arm) in long-term use. Such factors limit their real-world applicability. In addition to these difficulties, typical wearable use scenarios feature various situational impairments, such as encumbered use (e.g., having one hand busy), mobile use (e.g., while walking), and eyes-free use (e.g., while responding to real-world stimuli). These considerations are weakly catered for by the design of current wearable input systems. This dissertation seeks to address these problems by exploring the design space of intra-hand input, which refers to small-scale actions made within a single hand. In particular, through a hand-mounted sensing system, intra-hand input can span diverse input surfaces, ranging from between fingers (e.g., fingers-to-thumb and thumb-to-fingers inputs) to body surfaces (e.g., hand-to-face inputs).
Here, I identify several advantages of this form of hand input, as follows. First, the hand's high dexterity can enable comfortable, quick, accurate, and expressive inputs of various types (e.g., tap, flick, or swipe touches) at multiple locations (e.g., on each of the five fingers or other body surfaces). In addition, many viable forms of these input movements are small-scale, promising low fatigue over long-term use and basic actions that are discrete and socially acceptable. Finally, intra-hand input is inherently robust to many common situational impairments, such as use in eyes-free, public, or mobile settings. Consolidating these prospective advantages, the general claim of this dissertation is that intra-hand input is an expressive and effective modality for interaction with wearable devices such as HMDs. The dissertation seeks to demonstrate that this claim holds in a range of wearable scenarios and applications, and with measures of both objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability). Specifically, in this dissertation, I verify the general claim by demonstrating it in three separate scenarios. I begin by exploring the design space of intra-hand input by studying the specific case of touches to a set of five touch-sensitive nails. To this end, I first conduct an exploratory design process in which a large set of 144 input actions are generated, followed by two empirical studies on comfort and performance that refine this large set to 29 viable inputs. The results of this work indicate that nail touches are an accessible, expressive, and comfortable form of input. Based on these results, in the second scenario, I focused on text entry in a mobile setting with the same nail form-factor system.
Through a comparative empirical study involving both sitting and mobile conditions, nail-based touches were confirmed to be robust to physical disturbance while mobile. A follow-up word repetition study indicated that text entry speeds of up to 33.1 WPM could be achieved when key layouts were appropriately optimized for the nail form factor. These results reveal that intra-hand inputs are suitable for complex input tasks in mobile contexts. In the third scenario, I explored an alternative form of intra-hand input that relies on small-scale hand touches to the face, examined through the lens of social acceptability. This scenario is especially valuable for multi-wearable usage contexts, as a single hand-mounted system can enable input from a proximate distance for each device scattered around the body (e.g., hand-to-face input for smartglasses or an ear-worn device, and inter-finger input in a wristwatch usage posture for a smartwatch). However, making an input on the face can attract unwanted, undue attention from the public. Thus, the design stage of this work involved eliciting diverse unobtrusive and socially acceptable hand-to-face actions from users; these outcomes were then refined into five design strategies that can achieve socially acceptable input in this setting. Follow-up studies on a prototype that instantiates these strategies validate their effectiveness and provide a characterization of the speed and accuracy achieved by the user with each system. I argue that this spectrum of metrics, recorded over a diverse set of scenarios, supports the general claim that intra-hand inputs for wearable devices can be expressively and effectively operated in terms of objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability) in common wearable use scenarios, such as when mobile and in public.
I conclude with a discussion of the contributions of this work, scope for further developments, and the design issues that need to be considered by researchers, designers, and developers who seek to implement these types of input. This discussion spans diverse considerations, such as suitable tracking technologies, appropriate body regions, viable input types, and effective design processes. Through this discussion, this dissertation seeks to provide practical guidance to support and accelerate further research efforts aimed at achieving real-world systems that realize the potential of intra-hand input for wearables.
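The 33.1 WPM figure reported above follows the standard text-entry convention in which one "word" is five characters, including spaces. A minimal sketch of that calculation (the function name and sample numbers are illustrative, not taken from the thesis):

```python
def words_per_minute(transcribed_chars: int, elapsed_seconds: float) -> float:
    """Standard text-entry rate: (characters / 5) words per elapsed minute."""
    return (transcribed_chars / 5.0) / (elapsed_seconds / 60.0)

# Hypothetical session: 331 characters transcribed in 120 seconds.
rate = words_per_minute(331, 120)  # → 33.1 WPM
```

Entry rate is usually reported alongside an error metric (e.g., uncorrected error rate), since speed alone can be inflated by sloppy input.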

    Personalized Interaction with High-Resolution Wall Displays

    An increasing openness toward more diverse interaction modalities as well as falling hardware prices have made very large interactive vertical displays more feasible, and consequently, applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently usable for multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will still be essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assume a single user per screen. In the case of multiple people using one screen, this is not a feasible solution and we must find alternatives. Therefore, this thesis addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as input modality) and further away (using mobile devices).
    Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking using RGB+depth-based cameras and leveraging ultrasound positioning of the users' mobile devices. Building on this, techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose using an arm-worn device to facilitate access to private data and personalized interface elements. Additionally, the work contributes insights on practical implications of personalized interaction at wall displays: we present a qualitative study that analyses interaction using the multi-user cooperative game Miners as application case, finding awareness and occlusion issues. The final contribution is the corresponding analysis toolkit GIAnT, which visualizes users' movements, touch interactions, and gaze points when interacting with wall displays and thus allows fine-grained investigation of the interactions.

    Enabling mobile microinteractions

    While much attention has been paid to the usability of desktop computers, mobile computers are quickly becoming the dominant platform. Because mobile computers may be used in nearly any situation, including while the user is actually in motion or performing other tasks, interfaces designed for stationary use may be inappropriate, and alternative interfaces should be considered. In this dissertation I consider the idea of microinteractions: interactions with a device that take less than four seconds to initiate and complete. Microinteractions are desirable because they may minimize interruption; that is, they allow for a tiny burst of interaction with a device so that the user can quickly return to the task at hand. My research concentrates on methods for applying microinteractions through wrist-based interaction. I consider two modalities for this interaction: touchscreens and motion-based gestures. In the case of touchscreens, I consider the interface implications of making touchscreen watches usable with the finger, instead of the usual stylus, and investigate users' performance with a round touchscreen. For gesture-based interaction, I present a tool, MAGIC, for designing gesture-based interactive systems, and detail the evaluation of the tool. Ph.D. Committee Chair: Starner, Thad; Committee Member: Abowd, Gregory; Committee Member: Isbell, Charles; Committee Member: Landay, James; Committee Member: McIntyre, Blai