166 research outputs found

    TapGazer: Text Entry with finger tapping and gaze-directed word selection

    Electrotactile feedback applications for hand and arm interactions: A systematic review, meta-analysis, and future directions

    Haptic feedback is critical in a broad range of human-machine and human-computer interaction applications. However, the high cost and low portability/wearability of haptic devices remain unresolved issues, severely limiting the adoption of this otherwise promising technology. Electrotactile interfaces have the advantage of being more portable and wearable due to their smaller actuators, lower power consumption, and lower manufacturing cost. Applications of electrotactile feedback have been explored in human-computer interaction and human-machine interaction for facilitating hand-based interactions in areas such as prosthetics, virtual reality, robotic teleoperation, surface haptics, portable devices, and rehabilitation. This paper presents a technological overview of electrotactile feedback, as well as a systematic review and meta-analysis of its applications for hand-based interactions. We discuss the different electrotactile systems according to the type of application, and we provide a quantitative aggregation of the findings to offer a high-level overview of the state of the art and suggest future directions. Electrotactile feedback systems showed increased portability/wearability, and they were successful in rendering and/or augmenting most tactile sensations, eliciting perceptual processes, and improving performance in many scenarios. However, knowledge gaps (e.g., embodiment), technical drawbacks (e.g., recurrent calibration, electrode durability), and methodological drawbacks (e.g., sample size) were detected, which should be addressed in future studies. Comment: 18 pages, 1 table, 8 figures; under review in Transactions on Haptics.
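
    The meta-analytic aggregation referred to above is commonly performed with a random-effects model. As a minimal illustrative sketch (not the authors' actual analysis pipeline), the following pools hypothetical per-study effect sizes using DerSimonian-Laird estimation:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a random-effects model.

    effects:   list of study effect sizes (e.g., standardized mean differences)
    variances: list of their within-study variances
    Returns (pooled_effect, pooled_variance, tau_squared).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.0 / sum(w_star), tau2

# Hypothetical effect sizes from three electrotactile-feedback studies
pooled, var, tau2 = dersimonian_laird([0.42, 0.61, 0.30], [0.02, 0.05, 0.03])
print(f"pooled effect = {pooled:.2f} +/- {1.96 * math.sqrt(var):.2f} (95% CI)")
```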

    Note Taking in VR: The Forearm Keyboard

    This work presents and evaluates a forearm keyboard that allows users to enter textual data using a natural, full-handed typing mechanism in virtual reality head-mounted display environments. Should the issues noted with the keyboard during the study be resolved, it would compare favourably with the keyboards seen in the literature.

    Virtual Reality Applications and Development

    Virtual Reality (VR) has existed for many years; however, it has only recently gained widespread popularity and commercial use. This change stems from innovations in head-mounted displays (HMDs) and from the work of many software engineers crafting quality user experiences (UX). This thesis explores four areas of VR. The first is the use of VR for virtual environments and fire simulations. The second is the use of VR for eye tracking and medical simulations. The third is multiplayer development for more immersive collaborative simulations. The fourth is the development of typing in 3D for virtual reality. Extending from this final area, the thesis describes an application that provides more practical and granular detail about developing for VR with the real-time development platform Unity.

    TouchEditor: Interaction design and evaluation of a flexible touchpad for text editing of head-mounted displays in speech-unfriendly environments

    A text editing solution that adapts to speech-unfriendly environments (where it is inconvenient to speak or difficult to recognize speech) is essential for head-mounted displays (HMDs) to work universally. Existing schemes, e.g., the touch bar, virtual keyboard, and physical keyboard, have shortcomings such as insufficient speed, an uncomfortable experience, or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping, and editing commands (i.e., Copy, Paste, Delete, etc.). Through a literature overview and a heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that maps gestures in different areas and with different strengths to cursor movements with different directions and granularities. The evaluation results show that TouchEditor i) adapts well to various contents and scenes with a stable correction speed of 0.075 corrections per second; ii) achieves 95.4% gesture recognition accuracy; and iii) performs comparably to a mobile phone in text selection tasks. Comparison results with the speech-dependent EYEditor and a built-in touch bar further prove the flexibility and robustness of TouchEditor in speech-unfriendly environments.
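
    To illustrate the general shape of such an area-and-pressure mapping (the region names, pressure threshold, and granularities below are hypothetical placeholders, not TouchEditor's actual parameters):

```python
# Illustrative sketch: map (touch region, pressure) to a cursor step.
# Region names, thresholds, and granularities are hypothetical.

REGION_DIRECTION = {
    "left": (-1, 0), "right": (1, 0),   # horizontal cursor movement
    "top": (0, -1), "bottom": (0, 1),   # vertical cursor movement
}

LIGHT_PRESS_MAX = 1.5  # Newtons; boundary between fine and coarse movement

def cursor_step(region: str, pressure: float) -> tuple[int, int]:
    """Return (dx, dy) in characters: light presses move by one character,
    firm presses jump by a coarser, roughly word-sized step."""
    dx, dy = REGION_DIRECTION[region]
    granularity = 1 if pressure <= LIGHT_PRESS_MAX else 5  # char vs. ~word
    return dx * granularity, dy * granularity

print(cursor_step("right", 0.8))  # (1, 0): one character right
print(cursor_step("right", 2.3))  # (5, 0): one word-sized jump right
```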

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)

    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because of the inevitably limited space for input and output caused by form factors specialized to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other, larger form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products.

    This thesis seeks to address the issue of constrained interaction experience with novel interaction techniques, by exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions promise increases in the expressiveness of input on the physically limited input space of wearable devices. First, finger input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) because of their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, enabling the implementation of novel interaction systems without additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices.

    This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen with a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed, based on contact areas on the touchscreen, to enhance the expressiveness of angle-based touch input. In the second scenario, the thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to finger identification systems for distinguishing two or three fingers, and two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification for increasing the expressiveness of touch input techniques.

    The thesis further supports the general claim by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. An observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys; this in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, the thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. Fast and accurate sequential thumb touches were confirmed at a total of 8 key locations with the built-in hand tracking system of a commercial HMD, and final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs.

    The objective and subjective results and the novel interaction techniques in these wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. The thesis concludes with its contributions, design considerations, and the scope of future research, for researchers and developers who seek to implement robust finger-based interaction systems on various types of wearable devices.
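
    As a rough illustration of the finger-region-aware idea described above, the sketch below classifies a touch as the flat or the side of the finger from the contact ellipse reported by a touchscreen; the feature and threshold are hypothetical stand-ins for the thesis's actual recognizers:

```python
# Illustrative sketch: classify a smartwatch touch as the flat pad or the
# side of the finger from its reported contact ellipse. The feature
# (contact area) and threshold are hypothetical, not a trained model.
import math

FLAT_AREA_MM2 = 40.0  # hypothetical decision boundary

def touch_region(major_mm: float, minor_mm: float) -> str:
    """The flat pad produces a large contact ellipse; the side of the
    finger a narrow one. Compare the ellipse area against a threshold."""
    area = math.pi * (major_mm / 2) * (minor_mm / 2)
    return "flat" if area >= FLAT_AREA_MM2 else "side"

print(touch_region(10.0, 7.0))  # large ellipse  -> 'flat'
print(touch_region(8.0, 3.0))   # narrow ellipse -> 'side'
```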

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and its associated command, they can perform it without guidance, relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour in moving from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first piece investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations, beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration stemming from our work.
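
    For background on how such gestures are typically recognized: many unistroke recognizers use simple template matching in the spirit of the well-known $1 recognizer (resample the stroke, normalize translation and scale, and compare point-wise to stored templates). The sketch below is a simplified illustration under those assumptions, not the recognizer used in this thesis, and omits refinements such as rotation invariance:

```python
import math

N = 32  # points per normalized stroke

def resample(points, n=N):
    """Redistribute a stroke into n evenly spaced points along its path."""
    pts = [tuple(p) for p in points]
    interval = sum(math.dist(a, b) for a, b in zip(pts, pts[1:])) / (n - 1)
    out, accum, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and accum + d >= interval:
            t = (interval - accum) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # measure the rest of the segment from q
            accum = 0.0
        else:
            accum += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Resample, move the centroid to the origin, scale to a unit box."""
    pts = resample(points)
    cx = sum(x for x, _ in pts) / N
    cy = sum(y for _, y in pts) / N
    pts = [(x - cx, y - cy) for x, y in pts]
    span = max(max(x for x, _ in pts) - min(x for x, _ in pts),
               max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
    return [(x / span, y / span) for x, y in pts]

def recognize(stroke, templates):
    """Return the template name with the smallest mean point distance."""
    cand = normalize(stroke)
    return min(templates, key=lambda name: sum(
        math.dist(p, q) for p, q in zip(cand, templates[name])) / N)

# Two toy templates and an approximately horizontal input stroke
templates = {
    "line": normalize([(0, 0), (10, 0)]),
    "vee":  normalize([(0, 0), (5, 10), (10, 0)]),
}
print(recognize([(1, 1), (9, 1.2)], templates))  # -> line
```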

    Designing Intra-Hand Input for Wearable Devices

    Department of Biomedical Engineering (Human Factors Engineering)

    Current trends toward the miniaturization of digital technology have enabled the development of versatile smart wearable devices. Powered by capable processors and equipped with advanced sensors, this novel device category can substantially impact application areas as diverse as education, health care, and entertainment. However, despite their increasing sophistication and potential, input techniques for wearable devices are still relatively immature and often fail to reflect key practical constraints in this design space. For example, on-device touch surfaces, such as the temple touchpad of Google Glass, are typically small and out of sight, limiting their expressiveness. Furthermore, input techniques designed specifically for Head-Mounted Displays (HMDs), such as free-hand (e.g., Microsoft Hololens) or dedicated controller (e.g., Oculus VR) tracking, exhibit low levels of social acceptability (e.g., large-scale hand gestures are arguably unsuited for use in public settings) and are prone to causing fatigue (e.g., gorilla arm) in long-term use. Such factors limit their real-world applicability. In addition, typical wearable use scenarios feature various situational impairments, such as encumbered use (e.g., having one hand busy), mobile use (e.g., while walking), and eyes-free use (e.g., while responding to real-world stimuli). These considerations are only weakly catered for by the design of current wearable input systems.

    This dissertation seeks to address these problems by exploring the design space of intra-hand input: small-scale actions made within a single hand. In particular, through a hand-mounted sensing system, intra-hand input can span diverse input surfaces, from between fingers (e.g., fingers-to-thumb and thumb-to-fingers input) to body surfaces (e.g., hand-to-face input). This form of hand input has several advantages. First, the hand's high dexterity can enable comfortable, quick, accurate, and expressive inputs of various types (e.g., tap, flick, or swipe touches) at multiple locations (e.g., on each of the five fingers or other body surfaces). In addition, many viable forms of these input movements are small-scale, promising low fatigue over long-term use and basic actions that are discreet and socially acceptable. Finally, intra-hand input is inherently robust to many common situational impairments, such as use that takes place in eyes-free, public, or mobile settings. Consolidating these prospective advantages, the general claim of this dissertation is that intra-hand input is an expressive and effective modality for interaction with wearable devices such as HMDs. The dissertation seeks to demonstrate that this claim holds in a range of wearable scenarios and applications, with measures of both objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability).

    Specifically, I verify the general claim in three separate scenarios. I begin by exploring the design space of intra-hand input through the specific case of touches to a set of five touch-sensitive nails. I first conduct an exploratory design process in which a large set of 144 input actions is generated, followed by two empirical studies on comfort and performance that refine this large set to 29 viable inputs. The results indicate that nail touches are an accessible, expressive, and comfortable form of input. Building on these results, the second scenario focuses on text entry in a mobile setting with the same nail form-factor system. A comparative empirical study involving both sitting and mobile conditions confirmed that nail-based touches are robust to the physical disturbances of mobility. A follow-up word repetition study indicated that text entry speeds of up to 33.1 WPM could be achieved when key layouts were appropriately optimized for the nail form factor. These results show that intra-hand inputs are suitable for complex input tasks in mobile contexts. In the third scenario, I explored an alternative form of intra-hand input that relies on small-scale hand touches to the face, viewed through the lens of social acceptability. This scenario is especially valuable in multi-wearable usage contexts, as a single hand-mounted system can provide input from a proximate distance to each device scattered around the body (e.g., hand-to-face input for smartglasses or an ear-worn device, and inter-finger input in a wristwatch posture for a smartwatch). However, making input on the face can attract unwanted, undue attention in public. The design stage of this work therefore elicited diverse unobtrusive and socially acceptable hand-to-face actions from users; these outcomes were then refined into five design strategies for achieving socially acceptable input in this setting. Follow-up studies on a prototype that instantiates these strategies validate their effectiveness and characterize the speed and accuracy users achieve with each system.

    I argue that this spectrum of metrics, recorded over a diverse set of scenarios, supports the general claim that intra-hand input for wearable devices can be operated expressively and effectively in terms of objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability) in common wearable use scenarios, such as when mobile and in public. I conclude with a discussion of the contributions of this work, the scope for further developments, and the design issues that researchers, designers, and developers should consider when implementing these types of input. This discussion spans diverse considerations, such as suitable tracking technologies, appropriate body regions, viable input types, and effective design processes. Through it, the dissertation seeks to provide practical guidance to support and accelerate further research aimed at real-world systems that realize the potential of intra-hand input for wearables.
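
    For reference, the entry rates reported above (e.g., 33.1 WPM) follow the standard text-entry convention in which a word is five characters, including spaces, and timing starts at the first keystroke. A minimal sketch of the computation:

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry WPM: a 'word' is five characters, and the
    first character is conventionally discounted because timing starts
    at the first keystroke."""
    return ((len(transcribed) - 1) / seconds) * 60 / 5

# e.g., 102 characters transcribed in 36.6 s -> ~33.1 WPM
print(round(words_per_minute("x" * 102, 36.6), 1))
```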

    Chinese Text Entry with Mobile Devices

    For using computers and modern mobile phones it is essential that there are efficient methods for providing textual input. About one fifth of the world's population, or over one billion people, speak some variety of Chinese as their native language. Chinese has unique characteristics as a logosyllabic language: many Chinese characters are complex in structure and homophonic with other characters. With keyboards and other key-based input devices, the normal approach is pinyin input, where each Chinese character is entered via its pinyin code, a sequence of characters in the Roman alphabet. Because of homophony, this technique requires choosing the intended Chinese character from a list of possible candidates, making the input process more complicated than in languages written in the Roman alphabet. Moreover, the many varieties of the language spoken in different parts of China have to be taken into account as well. All of these factors bring challenges to the design and evaluation of Chinese text entry methods in computing systems.

    The overall objective of this dissertation is to improve the user experience of Chinese text entry on mobile devices. To achieve this goal, the author explores new interaction solutions and patterns of user behaviour in the Chinese text entry process, using approaches that include empirical studies and performance modeling. The work covers four means of Chinese text entry on mobile devices: Chinese handwriting recognition, Chinese indirect text entry with a rotator, Mandarin dictation, and Chinese pinyin input methods with a 12-key keypad. New design solutions for Chinese handwriting recognition and for rotator-based pinyin input are proposed, and empirical studies show them to be well accepted by users. A Mandarin short-message dictation application for mobile phones is also presented, with two associated human-factors studies. Two further studies examined pinyin input methods based on the 12-key keypad: a comparative study of five phrasal pinyin input methods led to design guidelines for the advanced feature of phrasal input, and a second study produced a predictive model of users' error-free entry speeds. Based on the conclusions of these studies, several additional research questions were identified for the future. For example, improvements are needed in the target-selection process of Chinese text entry on mobile devices, and further design work and studies are required on stroke methods and Chinese-specific soft keyboards.
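
    Predictive models of error-free entry speed, such as the one mentioned above, are typically keystroke-level models that sum per-action time costs. The sketch below illustrates this general form for 12-key pinyin entry; the timing constants and the example are hypothetical and are not the dissertation's fitted parameters:

```python
# Illustrative keystroke-level sketch for predicting error-free pinyin
# entry speed on a 12-key keypad. Timing constants are hypothetical,
# not the dissertation's fitted model parameters.

T_KEY = 0.40     # s per digit-key press (hypothetical)
T_SCAN = 0.55    # s to scan one candidate in the homophone list (hypothetical)
T_SELECT = 0.45  # s to confirm the chosen character (hypothetical)

def char_time(pinyin_keys: int, candidate_rank: int) -> float:
    """Predicted time to enter one Chinese character error-free:
    type its pinyin digits, scan down to the intended candidate, select."""
    return pinyin_keys * T_KEY + candidate_rank * T_SCAN + T_SELECT

# Hypothetical character: pinyin needs 3 key presses, and the target
# character is the 2nd candidate in the homophone list.
t = char_time(pinyin_keys=3, candidate_rank=2)
print(f"{t:.2f} s per character -> {60 / t:.1f} characters per minute")
```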

    HCI models, theories, and frameworks: Toward a multidisciplinary science

    Motivation. The movement of body and limbs is inescapable in human-computer interaction (HCI). Whether browsing the web or intensively entering and editing text in a document, our arms, wrists, and fingers are at work on the keyboard, mouse, and desktop. Our head, neck, and eyes move about, attending to feedback marking our progress. This chapter is motivated by the need to match the movement limits, capabilities, and potential of humans with input devices and interaction techniques on computing systems. Our focus is on models of human movement relevant to human-computer interaction. Some of the models discussed emerged from basic research in experimental psychology, whereas others emerged from, and were motivated by, the specific need in HCI to model the interaction between users and physical devices, such as mice and keyboards. As much as we focus on specific models of human movement and user interaction with devices, this chapter is also about models in general. We will say a lot about the nature of models, what they are, and why they are important tools for the research and development of human-computer interfaces.

    Overview: Models and Modeling. By its very nature, a model is a simplification of reality. However, a model is useful only if it helps in designing, evaluating, or otherwise providing a basis for understanding the behaviour of a complex artifact such as a computer system. It is convenient to think of models as lying on a continuum, with analogy and metaphor at one end and mathematical equations at the other. Most models lie somewhere in between. Toward the metaphoric end are descriptive models; toward the mathematical end are predictive models. These two categories are our particular focus in this chapter, and we shall visit a few examples of each. Two models will be presented in detail and in case studies: Fitts' model of the information processing capability of the human motor system, and Guiard's model of bimanual control. Fitts' model is a mathematical expression emerging from the rigors of probability theory. It is a predictive model at the mathematical end of the continuum, to be sure, yet when applied as a model of human movement it has characteristics of a metaphor. Guiard's model emerged from a detailed analysis of how humans use their hands in everyday tasks, such as writing, drawing, playing a sport, or manipulating objects. It is a descriptive model, lacking in mathematical rigor but rich in expressive power.
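
    Fitts' model, discussed above, predicts movement time (MT) from task geometry. In the Shannon formulation common in HCI, MT = a + b log2(D/W + 1), where D is the distance to the target, W is the target width, and a and b are empirically fitted constants. A minimal sketch with hypothetical coefficient values:

```python
import math

def fitts_movement_time(d_mm: float, w_mm: float,
                        a: float = 0.05, b: float = 0.12) -> float:
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    a (seconds) and b (seconds/bit) are device- and study-specific
    constants; the defaults here are hypothetical."""
    index_of_difficulty = math.log2(d_mm / w_mm + 1)  # bits
    return a + b * index_of_difficulty

# Pointing at a 20 mm target 160 mm away: ID = log2(9) ~ 3.17 bits
print(f"{fitts_movement_time(160, 20):.3f} s")
```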