
    Pressure as a non-dominant hand input modality for bimanual interaction techniques on touchscreen tablets

    Touchscreen tablet devices present an interesting challenge to interaction designers: they are not quite handheld like their smartphone cousins, yet their form factor affords usage away from the desktop and other surfaces while requiring the user to support a larger weight and navigate more screen space. Thus, the repertoire of touch input techniques is often reduced to those performable with one hand. Previous studies have suggested that there are bimanual interaction techniques offering both manual and cognitive benefits over equivalent unimanual techniques, and that pressure is useful as a primary input modality on mobile devices and as an augmentation to finger/stylus input on touchscreens. However, there has been no research on the use of pressure as a modality to expand the range of bimanual input techniques on tablet devices. The first two experiments investigated bimanual scrolling on tablet devices, based on the premise that the control of scrolling speed and vertical scrolling direction can be thought of as separate tasks, and that the current status quo of combining both into a single one-handed (unimanual) gesture on a touchscreen or on a physical dial can be improved upon. Four bimanual scrolling techniques were compared to two status quo unimanual scrolling techniques in a controlled linear targeting task. The Dial and Slider bimanual technique was superior to the others in terms of Movement Time and the Dial and Pressure bimanual technique was superior in terms of Subjective Workload, suggesting that the bimanual scrolling techniques are better than the status quo unimanual techniques in terms of both performance and preference. The same interaction techniques were then evaluated using a photo browsing task, chosen to resemble the way people browse their music collections when they are unsure about what they are looking for. These studies demonstrated that pressure is a more effective auxiliary modality than a touch slider in the context of bimanual scrolling techniques. They also demonstrated that the bimanual techniques did not provide any concrete benefits over the Unimanual touch scrolling technique (the status quo scrolling technique on commercially available touchscreen tablets and smartphones) in the context of an image browsing task. A novel investigation of pressure input was presented in which pressure was characterised as a transient modality: one that has a natural inverse (bounce-back) and a state that only persists during interaction. Two studies were carried out investigating the precision of applied pressure as part of a bimanual interaction, where the selection event is triggered by the dominant hand on the touchscreen (using existing touchscreen input gestures), with the goal of studying pressure as a functional primitive, without implying any particular application. Two aspects of pressure input were studied: pressure Targeting and Maintaining pressure over time. The results demonstrated that, using a combination of non-dominant hand pressure and dominant-hand touchscreen taps, overall pressure targeting accuracy was high (93.07%). For more complicated dominant-hand input techniques (swipe, pinch and rotate gestures), pressure targeting accuracy was still high (86%).
    The results demonstrated that participants were able to achieve high levels of pressure accuracy (90.3%) using dominant-hand swipe gestures (the simplest gesture in the study), suggesting that the ability to perform a simultaneous combination of pressure and touchscreen gesture input depends on the complexity of the dominant-hand action involved. This thesis provides the first detailed study of the use of non-dominant hand pressure input to enable bimanual interaction techniques for tablet devices. It explores the use of pressure as a modality that can expand the range of available bimanual input techniques while the user is seated and comfortably holding the device, and offers designers guidelines for including pressure as a non-dominant hand input modality for bimanual interaction techniques, in a way that supplements existing dominant-hand actions.
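
    As a concrete illustration of the pressure-targeting idea described above, the sketch below quantises a continuous non-dominant-hand pressure reading into a small number of discrete targets and commits the selection only when the dominant hand taps the touchscreen. It is a minimal sketch under assumed names and thresholds (pressure_to_level, a four-level split, a normalised sensor range), not code from the thesis.

        # Hypothetical sketch: non-dominant-hand pressure selects a discrete level,
        # a dominant-hand tap commits it. Level count and sensor range are illustrative.
        def pressure_to_level(pressure: float, max_pressure: float, levels: int = 4) -> int:
            """Map a raw pressure sample onto one of `levels` discrete targets."""
            normalised = max(0.0, min(pressure / max_pressure, 1.0))
            # Clamp so that full pressure maps to the last level rather than one past it.
            return min(int(normalised * levels), levels - 1)

        class BimanualPressureSelector:
            """Tracks the transient pressure level; a dominant-hand tap turns it into a selection."""

            def __init__(self, max_pressure: float, levels: int = 4) -> None:
                self.max_pressure = max_pressure
                self.levels = levels
                self.current_level = 0

            def on_pressure_sample(self, pressure: float) -> None:
                # Pressure is transient: the level only persists while force is applied.
                self.current_level = pressure_to_level(pressure, self.max_pressure, self.levels)

            def on_dominant_hand_tap(self) -> int:
                # The selection event comes from the dominant hand on the touchscreen.
                return self.current_level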

    Getting back to basics: bimanual interaction on mobile touch screen devices

    The availability, and popularity, of touch screen tablets is drastically increasing, with over 30% of internet users now owning one. However, the lack of bimanual interaction in touch screen tablets presents product designers with serious challenges. Several attempts have been made to facilitate bimanual interaction in such products, but the results are not comparable to those of their non-mobile cousins, e.g. laptops. This paper presents the findings of a group collaboration aimed at prototyping a mobile touch screen device which supports bimanual interaction during internet browser navigation through rear-mounted inputs. The researchers found it problematic to add basic bimanual interactions for internet browser navigation to the rear of a prototype mobile touch screen device due to issues regarding grip type, finger movement and hand position. This paper concludes that in order to achieve bimanual interaction, researchers need to return to basics and consider how to free the hand and fingers from current constraints.

    Extending touch with eye gaze input

    Direct touch manipulation with displays has become one of the primary means by which people interact with computers. Exploration of new interaction methods that work in unity with the standard direct manipulation paradigm will be of benefit for the many users of such an input paradigm. In many instances of direct interaction, both the eyes and hands play an integral role in accomplishing the user's interaction goals. The eyes visually select objects, and the hands physically manipulate them. In principle this process includes a two-step selection of the same object: users first look at the target, and then move their hand to it for the actual selection. This thesis explores human-computer interactions where the principle of direct touch input is fundamentally changed through the use of eye-tracking technology. The change we investigate is a general reduction to a one-step selection process. The need to select using the hands can be eliminated by utilising eye-tracking to enable users to select an object of interest using their eyes only, by simply looking at it. Users then employ their hands for manipulation of the selected object; however, they can manipulate it from anywhere, as the selection is rendered independent of the hands. When a spatial offset exists between the hands and the object, the user's manual input is indirect. This allows users to manipulate any object they see from any manual input position. This fundamental change can have a substantial effect on the many human-computer interactions that involve user input through direct manipulation, such as temporary touchscreen interactions. However, it is unclear if, when, and how it can become beneficial to users of such an interaction method. To approach these questions, our research in this topic is guided by the following two propositions. The first proposition is that gaze input can transform a direct input modality such as touch into an indirect modality, and with it provide new and powerful interaction capabilities. We develop this proposition in the context of our investigation of integrated gaze interactions within direct manipulation user interfaces. We first regard eye gaze for generic multi-touch displays, introducing Gaze-Touch as a technique based on the division of labour: gaze selects and touch manipulates. We investigate this technique with a design space analysis, prototyping of application examples, and an informal user evaluation. The proposition is further developed by an exploration of hybrid eye and hand inputs with a stylus, for precise and cursor-based indirect control; with bimanual input, to rapidly issue input from two hands to gaze-selected objects; with tablets, where Gaze-Touch enables one-handed interaction across the whole screen with the same hand that holds the device; and with free-hand gestures in virtual reality, to interact with any viewed object located at a distance in the virtual scene. Overall, we demonstrate that using eye gaze to enable indirect input yields many interaction benefits, such as whole-screen reachability, occlusion-free manipulation, high-precision cursor input, and low physical effort. Integration of eye gaze with manual input raises new questions about how it can complement, instead of replace, the direct interactions users are familiar with. This is important to allow users the choice between direct and indirect inputs, as each affords distinct pros and cons for the usability of human-computer interfaces.
    These two input forms are normally considered separately from each other, but here we investigate interactions that combine them within the same interface. In this context, the second proposition is that gaze and touch input enable new and seamless ways of combining direct and indirect forms of interaction. We develop this proposition by regarding multiple interaction tasks that a user usually performs in a sequence, or simultaneously. First, we introduce a method that enables users to switch between both input forms by implicitly exploiting visual attention during manual input. Direct input is active when the user is looking at the point of input; otherwise, the user manipulates the object they look at indirectly. A design application for typical drawing and vector-graphics tasks has been prototyped to illustrate and explore this principle. The application contributes many example use cases, where direct drawing activities are complemented with indirect menu actions, precise cursor inputs, and seamless context switching at a glance. We further develop the proposition by investigating simultaneous direct and indirect input through bimanual input, where each input is assigned to one hand. We present an empirical study with an in-depth analysis of using indirect navigation with one hand and direct pen drawing with the other. We extend this input constellation to tablet devices by designing compound techniques for use in a more naturalistic setting when one hand holds the device. The interactions show that many typical tablet scenarios, such as browsing, map navigation, homescreen selections, or image galleries, can be enhanced through exploiting eye gaze.
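
    To make the division of labour concrete, the sketch below captures, at touch-down, whichever object the gaze currently rests on, and then applies subsequent drag deltas to that object regardless of where the finger actually is. The object model, the gaze-snapping radius, and all names are assumptions for illustration, not an API from the thesis.

        # Illustrative "gaze selects, touch manipulates" sketch; types and names are assumed.
        from dataclasses import dataclass

        @dataclass
        class SceneObject:
            name: str
            x: float
            y: float

        def object_under_gaze(objects, gaze_x, gaze_y, radius=40.0):
            """Return the object nearest the gaze point, if any lies within `radius` pixels."""
            best, best_dist = None, radius
            for obj in objects:
                dist = ((obj.x - gaze_x) ** 2 + (obj.y - gaze_y) ** 2) ** 0.5
                if dist < best_dist:
                    best, best_dist = obj, dist
            return best

        class GazeTouch:
            """Gaze performs the selection; touch drags manipulate the selected object indirectly."""

            def __init__(self, objects):
                self.objects = objects
                self.target = None

            def on_touch_down(self, gaze_x, gaze_y):
                # The one-step selection: whatever is looked at becomes the target.
                self.target = object_under_gaze(self.objects, gaze_x, gaze_y)

            def on_touch_drag(self, dx, dy):
                # Manual input is indirect: the finger can rest anywhere on the screen.
                if self.target is not None:
                    self.target.x += dx
                    self.target.y += dy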

    Thumb + Pen Interaction on Tablets

    Modern tablets support simultaneous pen and touch input, but it remains unclear how to best leverage this capability for bimanual input when the nonpreferred hand holds the tablet. We explore Thumb + Pen interactions that support simultaneous pen and touch interaction, with both hands, in such situations. Our approach engages the thumb of the device-holding hand, such that the thumb interacts with the touch screen in an indirect manner, thereby complementing the direct input provided by the preferred hand. For instance, the thumb can determine how pen actions (articulated with the opposite hand) are interpreted. Alternatively, the pen can point at an object, while the thumb manipulates one or more of its parameters through indirect touch. Our techniques integrate, in a novel way, concepts that derive from marking menus, spring-loaded modes, indirect input, and multi-touch conventions. Our overall approach takes the form of a set of probes, each representing a meaningfully distinct class of application. They serve as an initial exploration of the design space at a level which will help determine the feasibility of supporting bimanual interaction in such contexts, and the viability of the Thumb + Pen techniques in so doing.
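
    One way to picture the thumb's role described above is as a spring-loaded mode selector: a pen stroke is interpreted according to whichever mode the thumb is currently holding down, and the mode releases when the thumb lifts. The sketch below illustrates this pattern with made-up mode names; it is not code from the paper.

        # Assumed sketch of a spring-loaded thumb mode modifying pen interpretation.
        class ThumbPenModes:
            DEFAULT_MODE = "ink"

            def __init__(self) -> None:
                self.active_mode = self.DEFAULT_MODE

            def on_thumb_down(self, mode: str) -> None:
                # Spring-loaded: the mode holds only while the thumb stays on its control.
                self.active_mode = mode

            def on_thumb_up(self) -> None:
                self.active_mode = self.DEFAULT_MODE

            def interpret_pen_stroke(self, stroke_id: str) -> str:
                # The same pen articulation maps to a different command per mode.
                return f"{self.active_mode}:{stroke_id}"

        # Example: the same stroke is ink by default, a selection while the thumb holds "select".
        modes = ThumbPenModes()
        print(modes.interpret_pen_stroke("stroke-1"))   # ink:stroke-1
        modes.on_thumb_down("select")
        print(modes.interpret_pen_stroke("stroke-2"))   # select:stroke-2
        modes.on_thumb_up()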

    Improving expressivity in desktop interactions with a pressure-augmented mouse

    Desktop-based Windows, Icons, Menus and Pointers (WIMP) interfaces have changed very little in the last 30 years, and are still limited by a lack of powerful and expressive input devices and interactions. In order to make desktop interactions more expressive and controllable, expressive input mechanisms like pressure input must be made available to desktop users. One way to provide pressure input to these users is through a pressure-augmented computer mouse; however, before pressure-augmented mice can be developed, design information must be provided to mouse developers. The problem we address in this thesis is that there is a lack of ergonomics and performance information for the design of pressure-augmented mice. Our solution was to provide empirical performance and ergonomics information for pressure-augmented mice by performing five experiments. With the results of our experiments, we were able to identify the optimal design parameters for pressure-augmented mice and provide a set of recommendations for future pressure-augmented mouse designs.

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require using menus and tool palettes (techniques originally designed for the mouse, not pen and touch). We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content. We conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result that was confirmed in another study that used the augmented touches for a screen lock application.
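
    Since the study above finds two shapes and three orientations to be reliably producible, a recogniser could bin each touch sample along those two dimensions. The sketch below does this with purely illustrative thresholds and labels; the paper does not prescribe these values.

        # Assumed sketch: binning a contact sample into one of two shapes and three orientations.
        def classify_touch(contact_area_mm2: float, orientation_deg: float):
            """Map raw contact area and finger orientation onto coarse, discrete classes."""
            shape = "flat" if contact_area_mm2 > 150.0 else "tip"   # example cutoff only
            angle = orientation_deg % 180.0                          # fold into 0-180 degrees
            if angle < 60.0:
                orientation = "left"
            elif angle < 120.0:
                orientation = "upright"
            else:
                orientation = "right"
            return shape, orientation

        # Example: a large, upright contact.
        print(classify_touch(contact_area_mm2=210.0, orientation_deg=95.0))  # ('flat', 'upright')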

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what are considered hybrid interactions in this work. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. Such design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces, in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). These approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviors on first contact with the interface) and in users' subjective responses. The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones. The results indicate that, despite the fact that several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different to those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode-switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation. Depending on the mode, the movement of the hand is interpreted differently. However, one of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics and its utility, when designing user interfaces more generally.
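
    Raskin's definition above is essentially a dispatch on the current mode: the identical touch event produces a different action depending on which mode is active. A minimal sketch of that idea follows, with example mode and action names that are not taken from the thesis.

        # Illustrative mode dispatch: the same touch produces a different result per mode.
        ACTIONS = {
            "draw":   lambda x, y: f"draw line to ({x}, {y})",
            "pan":    lambda x, y: f"pan canvas by ({x}, {y})",
            "select": lambda x, y: f"select shape at ({x}, {y})",
        }

        class ModalInterface:
            def __init__(self, mode: str = "draw") -> None:
                self.mode = mode

            def switch_mode(self, mode: str) -> None:
                # Mode-switching is just this transition; its time cost is what
                # the thesis measures for different barehand techniques.
                if mode not in ACTIONS:
                    raise ValueError(f"unknown mode: {mode}")
                self.mode = mode

            def on_touch(self, x: float, y: float) -> str:
                return ACTIONS[self.mode](x, y)

        # Example: identical input, different outcomes before and after a switch.
        ui = ModalInterface()
        print(ui.on_touch(10, 20))   # draw line to (10, 20)
        ui.switch_mode("pan")
        print(ui.on_touch(10, 20))   # pan canvas by (10, 20)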