122 research outputs found

    Investigating Performance and Usage of Input Methods for Soft Keyboard Hotkeys

    Touch-based devices, despite their mainstream availability, do not support a unified and efficient command selection mechanism available on every platform and application. We advocate that hotkeys, conventionally used as a shortcut mechanism on desktop computers, could be generalized as a command selection mechanism for touch-based devices, even for keyboard-less applications. In this paper, we investigate the performance and usage of soft keyboard shortcuts, or hotkeys (abbreviated SoftCuts), through two studies comparing different input methods across sitting, standing and walking conditions. Our results suggest that SoftCuts are not only appreciated by participants but also support rapid command selection with different devices and hand configurations. We also did not find evidence that walking deters their performance when using the Once input method. (Comment: 17+2 pages, published at Mobile HCI 202)
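
    The abstract names the Once input method without detailing it, so the following is a minimal, hypothetical sketch of how a Once-style soft-keyboard hotkey could behave: one tap on a modifier key reveals command labels on the keys, and the next key tap issues that command. The class, bindings and behavior are illustrative assumptions, not the paper's implementation.

    ```python
    # Hypothetical sketch of a soft-keyboard hotkey ("SoftCut") state
    # machine, assuming a "Once"-style method: tapping the modifier
    # reveals command labels on the keys, and the next key tap triggers
    # that command. Names are illustrative, not the paper's code.

    class SoftCutKeyboard:
        def __init__(self, bindings):
            self.bindings = bindings      # e.g. {"c": "copy", "v": "paste"}
            self.command_mode = False     # True after the modifier is tapped

        def on_key_tap(self, key):
            if key == "modifier":
                self.command_mode = not self.command_mode   # toggle overlay
                return None
            if self.command_mode:
                self.command_mode = False                   # "Once": single use
                return self.bindings.get(key)               # command or None
            return key                                      # plain text entry

    kb = SoftCutKeyboard({"c": "copy", "v": "paste"})
    kb.on_key_tap("modifier")
    assert kb.on_key_tap("c") == "copy"
    assert kb.on_key_tap("c") == "c"    # mode consumed after one command
    ```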

    Improving Multi-Touch Interactions Using Hands as Landmarks

    Efficient command selection is just as important for multi-touch devices as it is for traditional interfaces that follow the Windows-Icons-Menus-Pointers (WIMP) model, but rapid selection in touch interfaces can be difficult because these systems often lack the mechanisms that have been used for expert shortcuts in desktop systems (such as keyboard shortcuts). Although interaction techniques based on spatial memory can improve the situation by allowing fast revisitation from memory, the lack of landmarks often makes it hard to remember command locations in a large set. One potential landmark that could be used in touch interfaces, however, is people’s hands and fingers: these provide an external reference frame that is well known and always present when interacting with a touch display. To explore the use of hands as landmarks for improving command selection, we designed hand-centric techniques called HandMark menus. We implemented HandMark menus for two platforms – one version that allows bimanual operation for digital tables and another that uses single-handed serial operation for handheld tablets; in addition, we developed variants for both platforms that support different numbers of commands. We tested the new techniques against standard selection methods including tabbed menus and popup toolbars. The results of the studies show that HandMark menus perform well (in several cases significantly faster than standard methods), and that they support the development of spatial memory. Overall, this thesis demonstrates that people’s intimate knowledge of their hands can be the basis for fast interaction techniques that improve performance and usability of multi-touch systems.
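
    To make the hand-as-landmark idea concrete, here is a small sketch of how commands might be anchored to a detected hand. The layout rule (placing command slots at the midpoints between adjacent fingertips) is an assumption for illustration, not the thesis's actual HandMark geometry.

    ```python
    # Illustrative sketch of a hand-centric ("HandMark"-style) layout:
    # command slots are placed relative to the detected fingertip
    # positions of the non-dominant hand, so the hand itself is the
    # landmark. The midpoint rule below is an assumption.

    def handmark_layout(fingertips, commands):
        """fingertips: list of (x, y) for thumb..pinky; commands: labels."""
        slots = []
        for (x1, y1), (x2, y2) in zip(fingertips, fingertips[1:]):
            slots.append(((x1 + x2) / 2, (y1 + y2) / 2))  # between fingers
        return dict(zip(commands, slots))

    fingers = [(0, 0), (30, 60), (60, 80), (90, 70), (115, 40)]
    print(handmark_layout(fingers, ["copy", "paste", "cut", "undo"]))
    ```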

    Designing for Effective Freehand Gestural Interaction


    Assessing the effectiveness of direct gesture interaction for a safety critical maritime application

    Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to offer a more natural interaction style. We investigated the utility of multi-touch interaction in the safety-critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain an insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal evaluation of prototypes, the second on a motion-platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial-use problems and handedness asymmetries with some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen-lock application.
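
    As a rough illustration of how the extra degrees of freedom might be read: many touch stacks report a contact ellipse (major/minor axis lengths and an angle), and classifying a touch into the two shapes and three orientations the study found reliable could look like the sketch below. The thresholds and bin boundaries are invented for illustration, not taken from the paper.

    ```python
    # Hedged sketch: classify a touch from its reported contact ellipse.
    # Thresholds are assumptions, not the paper's values.

    def classify_touch(major_mm, minor_mm, angle_deg):
        # Shape: a fingertip gives a small, round patch; a flattened
        # finger gives a long, eccentric one.
        shape = "flat" if major_mm / max(minor_mm, 1e-6) > 1.8 else "tip"
        # Orientation: bin the ellipse angle into three coarse directions.
        a = angle_deg % 180
        if a < 30 or a >= 150:
            orientation = "vertical"
        elif a < 90:
            orientation = "tilted-right"
        else:
            orientation = "tilted-left"
        return shape, orientation

    print(classify_touch(major_mm=14.0, minor_mm=6.0, angle_deg=50))
    # -> ('flat', 'tilted-right')
    ```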

    Comparing Free Hand Menu Techniques for Distant Displays using Linear, Marking and Finger-Count Menus

    Distant displays such as interactive Public Displays (IPD) or Interactive Television (ITV) require new interaction techniques, as traditional input devices may be limited or missing in these contexts. Free-hand interaction, as sensed with computer vision techniques, is a promising interaction technique. This paper presents the adaptation of three menu techniques for free-hand interaction: Linear menus, Marking menus and Finger-Count menus. The first study, based on a Wizard-of-Oz protocol, focuses on Finger-Counting postures in front of interactive television and public displays. It reveals that participants do not choose the most efficient gestures, either before or after the experiment. These results were used to develop a Finger-Count recognizer. The second experiment shows that all techniques achieve satisfactory accuracy. It also shows that Finger-Count places a higher mental demand on users than the other techniques.
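
    The selection rule behind Finger-Count menus is simple enough to sketch: the number of extended fingers on one hand picks the menu and the number on the other hand picks the item within it. The command table below is a placeholder, and a real recognizer would count fingertips from the vision pipeline rather than take integers directly.

    ```python
    # Sketch of a Finger-Count selection rule: (left-hand finger count,
    # right-hand finger count) -> command. The table is illustrative.

    MENUS = {
        (1, 1): "file/new", (1, 2): "file/open", (1, 3): "file/save",
        (2, 1): "edit/copy", (2, 2): "edit/paste", (2, 3): "edit/undo",
    }

    def finger_count_select(left_fingers, right_fingers):
        return MENUS.get((left_fingers, right_fingers))

    assert finger_count_select(2, 2) == "edit/paste"
    ```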

    An Exploration of Multi-touch Interaction Techniques

    Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to produce the most intuitive mapping between the movement of the hand and the resultant change in the virtual object. As we attempt to design for more complex operations, the effectiveness of spatial manipulation as a metaphor becomes weak. We introduce two new platforms for multi-touch computing: a gesture recognition system and a new interaction technique. I present Multi-Tap Sliders, a new interaction technique for operating in what we call non-spatial parametric spaces. Such spaces have no obvious literal spatial representation (e.g., exposure, brightness, contrast and saturation in image editing). The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface. My research emphasizes ergonomics, clear visual design, and fluid transition between modes of operation. Through a series of iterations, I develop a new technique for quickly selecting and adjusting multiple numerical parameters. Evaluations of multi-tap sliders show improvements over traditional sliders. To facilitate further research on multi-touch gestural interaction, I developed mGestr: a training and recognition system using hidden Markov models for designing a multi-touch gesture set. Our evaluation shows successful recognition rates of up to 95%. The recognition framework is packaged as a service for easy integration with existing applications.
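
    The HMM-per-gesture scheme the abstract alludes to is commonly implemented by training one model per gesture class and labeling a new stroke with the best-scoring model. The sketch below assumes (dx, dy) stroke sequences as features and uses the third-party hmmlearn library; mGestr's actual features, model sizes and service API are not reproduced here.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

    # Hedged sketch: one Gaussian HMM per gesture class, trained on
    # (dx, dy) stroke sequences; recognition picks the model that
    # assigns the stroke the highest log-likelihood.

    def train_models(examples_by_gesture, n_states=4):
        models = {}
        for name, seqs in examples_by_gesture.items():
            X = np.vstack(seqs)                # stack all sequences
            lengths = [len(s) for s in seqs]   # per-sequence lengths
            m = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=25, random_state=0)
            m.fit(X, lengths)
            models[name] = m
        return models

    def recognize(models, seq):
        return max(models, key=lambda name: models[name].score(seq))

    # Tiny synthetic demo: rightward vs. upward swipes.
    rng = np.random.default_rng(0)
    right = [np.c_[rng.normal(1.0, 0.1, 20), rng.normal(0.0, 0.1, 20)]
             for _ in range(5)]
    up = [np.c_[rng.normal(0.0, 0.1, 20), rng.normal(1.0, 0.1, 20)]
          for _ in range(5)]
    models = train_models({"right": right, "up": up})
    print(recognize(models, up[0]))  # expected: "up"
    ```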

    The Roly-Poly Mouse: Designing a Rolling Input Device Unifying 2D and 3D Interaction

    We present the design and evaluation of the Roly-Poly Mouse (RPM), a rolling input device that combines the advantages of the mouse (position displacement) and of 3D devices (roll and rotation) to unify 2D and 3D interaction. Our first study explores RPM gesture amplitude and stability for different upper shapes (hemispherical, convex) and hand postures. Eight roll directions can be performed precisely, and their amplitude is larger on the hemispherical RPM. As minor rolls affect translation, we propose a roll-correction algorithm to support stable 2D pointing with RPM. We propose the use of compound gestures for 3D pointing and docking, and evaluate them against a commercial 3D device, the SpaceMouse. Our studies reveal that RPM performs 31% faster than the SpaceMouse for 3D pointing and equivalently for 3D rotation. Finally, we present a proof-of-concept integrated RPM prototype, along with a discussion of the various technical challenges to overcome to build a final integrated version of RPM.
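
    One plausible form of roll correction for a rolling, roughly spherical device: a roll of dtheta radians about a horizontal axis moves the contact point by about R * dtheta, so that predicted displacement can be subtracted from the raw translation before it drives the cursor. The sketch below illustrates that idea only; it is not the paper's actual algorithm, and the radius and sign conventions are assumptions.

    ```python
    # Hedged sketch of roll correction: subtract the translation that a
    # rolling sphere of radius R would produce for the sensed roll.

    def corrected_translation(dx, dy, droll_x, droll_y, radius_mm=40.0):
        """dx, dy: raw sensed translation (mm); droll_*: roll deltas (rad)."""
        # Rolling about the y-axis displaces the device along x, and
        # rolling about the x-axis displaces it along y.
        return (dx - radius_mm * droll_y,
                dy - radius_mm * droll_x)

    # A 0.05 rad incidental roll during a 10 mm drag contributes ~2 mm of
    # spurious cursor motion, which the correction removes.
    print(corrected_translation(10.0, 0.0, 0.0, 0.05, radius_mm=40.0))
    ```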