9 research outputs found

    An Evaluation of Touch and Pressure-Based Scrolling and Haptic Feedback for In-car Touchscreens

    An in-car study was conducted to examine different input techniques for list-based scrolling tasks and the effectiveness of haptic feedback for in-car touchscreens. The use of physical switchgear on centre consoles is decreasing, which allows designers to develop new ways to interact with in-car applications. However, these new methods need to be evaluated to ensure they are usable. Therefore, three input techniques were tested: direct scrolling, pressure-based scrolling, and scrolling using onscreen buttons on a touchscreen. The results showed that direct scrolling was less accurate than using onscreen buttons or pressure input, but took almost half the time of the onscreen buttons and was almost three times quicker than pressure input. Vibrotactile feedback did not improve input performance but was preferred by users. Understanding the speed vs. accuracy trade-off between these input techniques will allow better decisions when designing safer in-car interfaces for scrolling applications.
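    To make the pressure-based technique concrete, the sketch below shows one plausible rate-based mapping from applied force to list scroll speed. This is an illustration only: the function name and all constants are assumptions, not values from the study.

    ```python
    # Hypothetical rate-based pressure scrolling: pressing harder scrolls
    # the list faster. All constants are illustrative.
    def pressure_to_scroll_rate(force_n: float,
                                max_force_n: float = 4.0,
                                max_rows_per_s: float = 10.0) -> float:
        """Map an applied force (in Newtons) to a continuous scroll rate."""
        level = max(0.0, min(force_n / max_force_n, 1.0))  # normalise to [0, 1]
        return level * max_rows_per_s
    ```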

    Improving the Accuracy of Mobile Touchscreen QWERTY Keyboards

    In this thesis we explore alternative keyboard layouts in the hope of finding one that increases the accuracy of text input on mobile touchscreen devices. In particular, we investigate whether a single swap of two keys can significantly improve accuracy on mobile touchscreen QWERTY keyboards. We do so by carefully considering the placement of keys, exploiting a specific vulnerability that occurs within a keyboard layout: the placement of particular keys next to others may increase typing errors. We simulate the act of typing on a mobile touchscreen QWERTY keyboard, beginning with a model of the typographical errors that can occur. We then construct a simple autocorrector using Bayesian methods, describing how we can autocorrect user input and evaluate the ability of the keyboard to output the correct text. Then, using our models, we provide methods of testing and define a metric, the WAR rating, which gives us a way of comparing the accuracy of keyboard layouts. After running our tests on all 325 2-key-swap layouts against the original QWERTY layout, we show that there exists more than one 2-key swap that increases the accuracy of the current QWERTY layout, and that the best 2-key swap is i ↔ t, increasing accuracy by nearly 0.18 percent.
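    The Bayesian autocorrection step lends itself to a compact illustration. The sketch below picks the dictionary word w that maximises P(w) x P(typed | w) under a toy character-level error model; the lexicon, probabilities, and names are illustrative stand-ins, not the thesis's actual models.

    ```python
    # Toy Bayesian autocorrector: choose the word maximising prior * likelihood.
    from math import prod

    LEXICON = {"the": 0.05, "tie": 0.001, "this": 0.01}  # word -> prior P(w)

    def p_char(typed: str, intended: str) -> float:
        # Crude touch-error model: mostly hit the intended key, occasionally
        # hit another key (here approximated as a flat error probability).
        return 0.9 if typed == intended else 0.004

    def autocorrect(typed: str) -> str:
        candidates = [w for w in LEXICON if len(w) == len(typed)]
        return max(candidates,
                   key=lambda w: LEXICON[w] * prod(p_char(t, c)
                                                   for t, c in zip(typed, w)))

    print(autocorrect("tge"))  # -> "the"
    ```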

    Side Pressure for Bidirectional Navigation on Small Devices

    Virtual navigation on a mobile touchscreen is usually performed using finger gestures: drag and flick to scroll or pan, pinch to zoom. While easy to learn and perform, these gestures cause significant occlusion of the display. They also require users to explicitly switch between navigation mode and edit mode to either change the viewport's position in the document or manipulate the actual content displayed in that viewport. SidePress augments mobile devices with two continuous pressure sensors co-located on one of their sides. It provides users with generic bidirectional navigation capabilities at different levels of granularity, all seamlessly integrated to act as an alternative to traditional navigation techniques, including scrollbars, drag-and-flick, and pinch-to-zoom. We describe the hardware prototype, detail the associated interaction vocabulary for different applications, and report on two laboratory studies. The first shows that users can precisely and efficiently control SidePress; the second, that SidePress can be more efficient than drag-and-flick touch gestures when scrolling large documents.
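    As a minimal sketch of the core mapping, assume two normalised pressure readings in [0, 1], one per co-located sensor: their signed difference gives the scroll direction and its magnitude the speed, which is one plausible reading of SidePress's continuous bidirectional control. The dead zone, gain, and cubic response are illustrative choices, not the paper's implementation.

    ```python
    def side_press_velocity(upper: float, lower: float,
                            dead_zone: float = 0.05,
                            gain: float = 800.0) -> float:
        """Return scroll velocity in px/s; positive scrolls towards the end."""
        delta = lower - upper                # which sensor is pressed harder?
        if abs(delta) < dead_zone:           # ignore resting grip force
            return 0.0
        return gain * delta ** 3             # cubic curve: fine control near zero
    ```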

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result that was confirmed in a second study that used the augmented touches for a screen-lock application.
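    As a rough illustration, assuming a touch stack that reports each contact as an ellipse (major/minor axis lengths plus an angle), the two shape classes and three orientation bins that users could reliably produce might be derived along these lines; all names and thresholds here are illustrative assumptions.

    ```python
    def classify_touch(major_mm: float, minor_mm: float, angle_deg: float):
        """Bin a contact ellipse into a shape class and an orientation class."""
        shape = "oblong" if major_mm / minor_mm > 1.5 else "round"  # illustrative ratio
        a = angle_deg % 180                  # fold the angle into [0, 180)
        if a < 30 or a >= 150:
            orientation = "vertical"
        elif a < 90:
            orientation = "tilted-left"
        else:
            orientation = "tilted-right"
        return shape, orientation
    ```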

    Touch Crossing-Based Selection and the Pin-and-Cross Technique

    This thesis focuses on the evaluation, exploration, and demonstration of the crossing paradigm with the touch modality. In crossing selection, a target is selected by stroking through a boundary 'goal' instead of pointing inside a perimeter. We present empirical evidence to validate crossing performance for touch. Inspired by the experimental results, we then develop, evaluate, and demonstrate a new unimanual multi-touch interaction space called 'pin-and-cross'. It combines one or more static touches ('pins') with another touch to cross a radial target, all performed with one hand. Our work provides the necessary support for the exploration and evaluation of more expressive multi-touch crossing techniques.
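    A minimal sketch of the crossing test, assuming touch points arrive as (x, y) tuples: a pin-and-cross selection fires when the crossing finger moves from inside to outside a ring centred on the held pin, and the crossing angle picks the radial item. The radius and return convention are illustrative, not the thesis's implementation.

    ```python
    import math

    def crossed(pin, prev, curr, radius: float = 80.0):
        """Return the crossing angle in degrees if the stroke crossed the ring, else None."""
        if math.dist(pin, prev) <= radius < math.dist(pin, curr):  # inside -> outside
            return math.degrees(math.atan2(curr[1] - pin[1],
                                           curr[0] - pin[0])) % 360
        return None
    ```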

    Designing Intra-Hand Input for Wearable Devices

    Current trends toward the miniaturization of digital technology have enabled the development of versatile smart wearable devices. Powered by capable processors and equipped with advanced sensors, this novel device category can substantially impact application areas as diverse as education, health care, and entertainment. However, despite their increasing sophistication and potential, input techniques for wearable devices are still relatively immature and often fail to reflect key practical constraints in this design space. For example, on-device touch surfaces, such as the temple touchpad of Google Glass, are typically small and out-of-sight, thus limiting their expressivity. Furthermore, input techniques designed specifically for Head-Mounted Displays (HMDs), such as free-hand (e.g., Microsoft Hololens) or dedicated controller (e.g., Oculus VR) tracking, exhibit low levels of social acceptability (e.g., large-scale hand gestures are arguably unsuited for use in public settings) and are prone to causing fatigue (e.g., gorilla arm) in long-term use. Such factors limit their real-world applicability. In addition to these difficulties, typical wearable use scenarios feature various situational impairments, such as encumbered use (e.g., having one hand busy), mobile use (e.g., while walking), and eyes-free use (e.g., while responding to real-world stimuli). These considerations are weakly catered for by the design of current wearable input systems. This dissertation seeks to address these problems by exploring the design space of intra-hand input, which refers to small-scale actions made within a single hand. In particular, through a hand-mounted sensing system, intra-hand input can span diverse input surfaces, from between fingers (e.g., fingers-to-thumb and thumb-to-fingers inputs) to body surfaces (e.g., hand-to-face inputs). Here, I identify several advantages of this form of hand input. First, the hand's high dexterity can enable comfortable, quick, accurate, and expressive inputs of various types (e.g., tap, flick, or swipe touches) at multiple locations (e.g., on each of the five fingers or other body surfaces). In addition, many viable forms of these input movements are small-scale, promising low fatigue over long-term use and basic actions that are discrete and socially acceptable. Finally, intra-hand input is inherently robust to many common situational impairments, such as use that takes place in eyes-free, public, or mobile settings. Consolidating these prospective advantages, the general claim of this dissertation is that intra-hand input is an expressive and effective modality for interaction with wearable devices such as HMDs. The dissertation seeks to demonstrate that this claim holds in a range of wearable scenarios and applications, and with measures of both objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability). Specifically, I verify this general claim by demonstrating it in three separate scenarios. I begin by exploring the design space of intra-hand input through the specific case of touches to a set of five touch-sensitive nails. To this end, I first conduct an exploratory design process in which a large set of 144 input actions is generated, followed by two empirical studies on comfort and performance that refine this large set to 29 viable inputs.
    The results of this work indicate that nail touches are an accessible, expressive, and comfortable form of input. Based on these results, in the second scenario, I focused on text entry in a mobile setting with the same nail form-factor system. Through a comparative empirical study involving both sitting and mobile conditions, nail-based touches were confirmed to be robust to physical disturbance while mobile. A follow-up word repetition study indicated that text entry rates of up to 33.1 WPM could be achieved when key layouts were appropriately optimized for the nail form factor. These results reveal that intra-hand inputs are suitable for complex input tasks in mobile contexts. In the third scenario, I explored an alternative form of intra-hand input, small-scale hand touches to the face, through the lens of social acceptability. This scenario is especially valuable for multi-wearable usage contexts, as a single hand-mounted system can enable input from a proximate distance for each device scattered around the body (e.g., hand-to-face input for smartglasses or an ear-worn device, and inter-finger input in a wristwatch posture for a smartwatch). However, making an input on the face can attract unwanted attention from the public. Thus, the design stage of this work involved eliciting diverse unobtrusive and socially acceptable hand-to-face actions from users; these outcomes were then refined into five design strategies that can achieve socially acceptable input in this setting. Follow-up studies on a prototype that instantiates these strategies validate their effectiveness and provide a characterization of the speed and accuracy achieved by users with each system. I argue that this spectrum of metrics, recorded over a diverse set of scenarios, supports the general claim that intra-hand inputs for wearable devices can be operated expressively and effectively in terms of objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability) in common wearable use scenarios, such as when mobile and in public. I conclude with a discussion of the contributions of this work, scope for further developments, and the design issues that need to be considered by researchers, designers, and developers who seek to implement these types of input. This discussion spans diverse considerations, such as suitable tracking technologies, appropriate body regions, viable input types, and effective design processes. Through this discussion, this dissertation seeks to provide practical guidance to support and accelerate further research efforts aimed at achieving real-world systems that realize the potential of intra-hand input for wearables.
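    For reference, the WPM figures quoted above follow the standard text-entry convention of counting five characters as one word (a property of the metric, not of this dissertation's code):

    ```python
    def wpm(chars_transcribed: int, seconds: float) -> float:
        """Words per minute, with the conventional 5-characters-per-word unit."""
        return (chars_transcribed / 5.0) / (seconds / 60.0)

    # e.g. transcribing 150 characters in 54.4 seconds gives ~33.1 WPM
    ```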

    Using pressure input and thermal feedback to broaden haptic interaction with mobile devices

    Pressure input and thermal feedback are two under-researched aspects of touch in mobile human-computer interfaces. Pressure input could provide a wide, expressive range of continuous input for mobile devices. Thermal stimulation could provide an alternative means of conveying information non-visually. This thesis investigated 1) how accurate pressure-based input on mobile devices could be when the user was walking and provided with only audio feedback, and 2) what forms of thermal stimulation are both salient and comfortable and so could be used to design structured thermal feedback for conveying multi-dimensional information. The first experiment tested control of pressure on a mobile device when sitting and using audio feedback. Targeting accuracy was >= 85% when maintaining 4-6 levels of pressure across a 3.5 Newton range, using only audio feedback and a Dwell selection technique. Two further experiments tested control of pressure-based input when walking and found accuracy was very high (>= 97%) even when walking and using only audio feedback, when using a rate-based input method. A fourth experiment tested how well each digit of one hand could apply pressure to a mobile phone individually and in combination with others. Each digit could apply pressure highly accurately, but not equally so, and some performed better in combination than alone. 2- or 3-digit combinations were more precise than 4- or 5-digit combinations. Experiment 5 compared one-handed, multi-digit pressure input using all 5 digits to traditional two-handed multitouch gestures for a combined zooming and rotating map task. Results showed comparable performance, with multitouch being ~1% more accurate but pressure input being ~0.5 s faster overall. Two experiments, one when sitting indoors and one when walking indoors, tested how salient and subjectively comfortable/intense various forms of thermal stimulation were. Faster or larger changes were more salient, faster to detect, and less comfortable; cold changes were more salient and faster to detect than warm changes. The two final studies designed two-dimensional structured ‘thermal icons’ that could convey two pieces of information. When indoors, icons were correctly identified with 83% accuracy. When outdoors, accuracy dropped to 69% when sitting and 61% when walking. This thesis provides the first detailed study of how precisely pressure can be applied to mobile devices when walking with audio feedback, and the first systematic study of how to design thermal feedback for interaction with mobile devices in mobile environments.
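    The Dwell selection technique used in the pressure experiments can be sketched as follows: a target pressure level counts as selected once the reading stays inside a tolerance band for a fixed dwell period. The band width, dwell time, and the read_force callback are assumptions for illustration, not the thesis's parameters.

    ```python
    import time

    def dwell_select(read_force, target_n: float, band_n: float = 0.4,
                     dwell_s: float = 1.0, timeout_s: float = 10.0) -> bool:
        """read_force: callable returning the current applied force in Newtons."""
        entered = None                          # when the band was last entered
        start = time.monotonic()
        while time.monotonic() - start < timeout_s:
            if abs(read_force() - target_n) <= band_n:
                entered = entered or time.monotonic()
                if time.monotonic() - entered >= dwell_s:
                    return True                 # held long enough: selected
            else:
                entered = None                  # left the band: restart dwell
        return False                            # timed out without selecting
    ```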

    Enhanced sensor-based interaction techniques for mobile map-based applications

    Mobile phones are increasingly being equipped with a wide range of sensors which enable a variety of interaction techniques. Sensor-based interaction techniques are particularly promising for domains such as map-based applications, where the user is required to interact with a large information space on the small screen of a mobile phone. Traditional interaction techniques have several shortcomings for interacting with mobile map-based applications. Keypad interaction offers limited control over panning speed and direction. Touch-screen interaction is often a two-handed form of interaction and results in the display being occluded during interaction. Sensor-based interaction provides the potential to address many of these shortcomings, but currently suffers from several limitations. The aim of this research was to propose enhancements to address the shortcomings of sensor-based interaction, with a particular focus on tilt interaction. A comparative study between tilt and keypad interaction was conducted using a prototype mobile map-based application. This user study was conducted in order to identify shortcomings and opportunities for improving tilt interaction techniques in this domain. Several shortcomings, including controllability, mental demand, and practicality concerns, were highlighted. Several enhanced tilt interaction techniques were proposed to address these shortcomings: visual and vibrotactile feedback, attractors, gesture zooming, sensitivity adaptation, and dwell-time selection. The results of a comparative user study showed that the proposed techniques achieved several improvements in terms of the problem areas identified earlier. The use of sensor fusion for tilt interaction was compared to an accelerometer-only approach, which has been widely applied in existing research. This evaluation was motivated by advances in mobile sensor technology which have led to the widespread adoption of digital compass and gyroscope sensors. The results of a comparative user study between sensor fusion and accelerometer-only implementations of tilt interaction showed several advantages for the use of sensor fusion, particularly in a walking context of use. Modifications to sensitivity adaptation and the use of tilt to perform zooming were also investigated. These modifications were designed to address controllability shortcomings identified in earlier experimental work. The results of a comparison between tilt zooming and gesture zooming indicated that tilt zooming offered better results, both in terms of performance and subjective user ratings. Modifications to the original sensitivity adaptation algorithm were only partly successful: greater accuracy improvements were achieved for walking tasks, but the use of dynamic dampening factors was found to be confusing. The results of this research were used to propose a framework for mobile tilt interaction. This framework provides an overview of the tilt interaction process and highlights how the enhanced techniques proposed in this research can be integrated into the design of tilt interaction techniques. The framework also proposes an application architecture, which was implemented as an Application Programming Interface (API). This API was successfully used in the development of two prototype mobile applications incorporating tilt interaction.
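    The sensor-fusion condition compared against the accelerometer-only baseline is commonly realised with a complementary filter; the sketch below is one such filter, not the thesis's actual implementation, and the blend factor is a typical illustrative value.

    ```python
    import math

    def fuse_tilt(angle_prev: float, gyro_rate: float, dt: float,
                  acc_y: float, acc_z: float, alpha: float = 0.98) -> float:
        """Blend the integrated gyro rate with the accelerometer's gravity angle."""
        acc_angle = math.degrees(math.atan2(acc_y, acc_z))   # gravity reference
        return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * acc_angle
    ```

    The gyroscope term tracks fast tilt changes without the noise that accelerometers pick up while walking, while the accelerometer term corrects the gyroscope's slow drift, which is one reason fusion helps in a walking context.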

    Characteristics of Pressure-Based Input for Mobile Devices

    We conducted a series of user studies to understand and clarify the fundamental characteristics of pressure in user interfaces for mobile devices. We seek to provide insight to clarify a longstanding discussion on mapping functions for pressure input. Previous literature is conflicted about the correct transfer function to optimize user performance. Our study results suggest that the discrepancy can be explained by different signal conditioning circuitry, and that with improved signal conditioning the relationship between applied pressure and user-performed precision is linear. We also explore the effects of hand pose when applying pressure to a mobile device from the front, the back, or simultaneously from both sides in a pinching movement. Our results indicate that grasping-type input outperforms single-sided input and is competitive with pressure input against solid surfaces. Finally, we provide an initial exploration of non-visual multimodal feedback, motivated by the desire for eyes-free use of mobile devices. The findings suggest that non-visual pressure input can be executed without degradation in selection time but suffers from accuracy problems.
    Keywords: pressure input, tactile feedback, haptic feedback, mobile devices.
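    The transfer-function question can be made concrete with a small sketch: the same normalised sensor reading quantised into discrete pressure levels through interchangeable mapping functions of the kind earlier literature disagreed over. The level count and candidate functions are illustrative.

    ```python
    def to_level(x: float, levels: int = 6, transfer=lambda v: v) -> int:
        """Quantise a normalised reading in [0, 1] into a discrete pressure level."""
        y = max(0.0, min(transfer(x), 1.0))    # apply transfer function, clamp
        return min(int(y * levels), levels - 1)

    # The transfer choice changes which level a given press lands on:
    # to_level(0.5) -> 3 with a linear transfer,
    # to_level(0.5, transfer=lambda v: v * v) -> 1 with a quadratic one.
    ```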