11 research outputs found

    Improving Multi-Touch Interactions Using Hands as Landmarks

    Efficient command selection is just as important for multi-touch devices as it is for traditional interfaces that follow the Windows-Icons-Menus-Pointers (WIMP) model, but rapid selection in touch interfaces can be difficult because these systems often lack the mechanisms that have been used for expert shortcuts in desktop systems (such as keyboard shortcuts). Although interaction techniques based on spatial memory can improve the situation by allowing fast revisitation from memory, the lack of landmarks often makes it hard to remember command locations in a large set. One potential landmark that could be used in touch interfaces, however, is people’s hands and fingers: these provide an external reference frame that is well known and always present when interacting with a touch display. To explore the use of hands as landmarks for improving command selection, we designed hand-centric techniques called HandMark menus. We implemented HandMark menus for two platforms – one version that allows bimanual operation for digital tables and another that uses single-handed serial operation for handheld tablets; in addition, we developed variants for both platforms that support different numbers of commands. We tested the new techniques against standard selection methods including tabbed menus and popup toolbars. The results of the studies show that HandMark menus perform well (in several cases significantly faster than standard methods), and that they support the development of spatial memory. Overall, this thesis demonstrates that people’s intimate knowledge of their hands can be the basis for fast interaction techniques that improve the performance and usability of multi-touch systems.
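
As a rough illustration of the hand-centric idea described above (not the thesis implementation; the finger coordinates, slot offsets, and command names below are invented), a touch system that already reports the fingertip positions of the flat non-dominant hand could anchor command targets to the gaps between fingers and resolve a tap from the other hand to the nearest slot:

```python
# Hypothetical sketch of hand-anchored command placement (not the authors' code).
# Assumes the touch system reports the five fingertip positions of the
# non-dominant hand; commands are laid out relative to those landmarks.
import math

def command_slots(fingertips, commands):
    """Assign each command to a slot midway between adjacent fingertips."""
    slots = {}
    for i, cmd in enumerate(commands):
        a = fingertips[i % (len(fingertips) - 1)]
        b = fingertips[i % (len(fingertips) - 1) + 1]
        # Slot sits between two neighbouring fingers, offset toward the palm.
        slots[cmd] = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2 + 40)
    return slots

def select(tap, slots, radius=35):
    """Return the command whose slot is closest to the tap, if within radius."""
    cmd, pos = min(slots.items(), key=lambda kv: math.dist(tap, kv[1]))
    return cmd if math.dist(tap, pos) <= radius else None

fingers = [(100, 200), (150, 170), (200, 160), (250, 170), (300, 200)]
slots = command_slots(fingers, ["copy", "paste", "undo", "redo"])
print(select((128, 225), slots))  # a tap near the first gap selects "copy"
```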

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
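
A minimal sketch of how an application might consume the extra degrees of freedom, assuming the touchscreen reports a contact ellipse (the field names and thresholds below are assumptions, not a real API): the ellipse is binned into two shapes and three coarse orientations, mirroring the granularity participants could produce reliably.

```python
# Illustrative sketch only: binning a sensed contact ellipse into two shapes
# and three orientation bins. The Contact fields and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Contact:
    major: float  # major axis of the contact ellipse, in mm
    minor: float  # minor axis, in mm
    angle: float  # orientation of the major axis, in degrees [0, 180)

def classify(c: Contact):
    # Shape: a long, narrow ellipse suggests a flattened finger ("pad"),
    # a small round one suggests an upright fingertip ("tip").
    shape = "pad" if c.major > 14 and c.major / max(c.minor, 1e-6) > 1.4 else "tip"
    # Orientation: nearest of three canonical angles (0, 60, 120 degrees).
    orientation = (0, 60, 120)[int(((c.angle + 30) % 180) // 60)]
    return shape, orientation

print(classify(Contact(major=18, minor=9, angle=95)))   # ('pad', 120)
```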

    Contact-sensing Input Device Manipulation and Recall

    We study a cuboid tangible pen-like input device similar to Vogel and Casiez’s Conte. A conductive 3D-printed Conte device enables touch sensing on a capacitive display, and orientation data from an enclosed inertial measurement unit (IMU) reliably distinguishes all 26 corners, edges, and sides. The device’s size is constrained by hardware required for sensing. We evaluate the impact of size form-factor on manipulation times for contact-to-contact transitions. A controlled experiment logs manipulation times performed with three sizes of 3D-printed mock-ups of the device. Computer vision techniques reliably distinguish between all 26 possible contacts, and a resistive touch sensor provides accurate timing information. In addition, a transition to touch input is tested, and a mock-up of a digital pen is included as a baseline comparison. Results show that larger devices are faster, that contact-to-contact transition time increases with distance between contacts, but that transitions to barrel edges can be slower than some end-over-end transitions. A comparison with a pen-shaped baseline indicates no loss in transition speed for most equivalent transitions. Based on our results, we discuss ideal device sizes and improvements to the simple extruded-rectangle form-factor. Subsequently, we evaluate learning and recall of commands located on physical landmarks on the exterior of a 3D tangible input device in comparison with a 2D spatial interface. Each of the 26 contacts is a physical spatial landmark on the exterior of Conte. A pilot study compares command learning and recall for Conte with a 2D grid interface, using small and large command sets. To facilitate novice learning, an on-screen model of Conte replicates the physical device’s orientation and displays icons representing commands on the corresponding landmarks. Results show there is likely no difference between 2D and 3D spatial interface recall for a small command set, and that high recall is possible with large command sets. Applications illustrating possible use cases are discussed, as well as possible improvements to the on-screen guide based on our results.
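
A hedged sketch of the orientation-based contact classification described above (not the thesis code): a cuboid's 6 sides, 12 edges, and 8 corners correspond to the 26 non-zero sign patterns of its local axes, so if the IMU supplies the gravity direction in the device frame, the contact in use can be taken as the canonical direction most aligned with gravity.

```python
# Rough sketch (not the thesis implementation): enumerate the 26 contacts of a
# cuboid as the non-zero sign patterns of (x, y, z), then classify by which
# canonical direction is most aligned with the IMU's gravity vector.
import itertools, math

# Unit vectors for all 26 contacts, keyed by their sign pattern.
CONTACTS = {}
for s in itertools.product((-1, 0, 1), repeat=3):
    if s != (0, 0, 0):
        n = math.sqrt(sum(v * v for v in s))
        CONTACTS[s] = tuple(v / n for v in s)

def classify_contact(gravity):
    """Return the sign pattern of the contact closest to the gravity direction."""
    g = math.sqrt(sum(v * v for v in gravity))
    gravity = tuple(v / g for v in gravity)
    return max(CONTACTS, key=lambda s: sum(a * b for a, b in zip(CONTACTS[s], gravity)))

# Device resting flat on its largest side:
print(classify_contact((0.05, -0.02, -0.99)))   # (0, 0, -1) -> a side
# Tilted onto the edge between two faces:
print(classify_contact((0.7, 0.0, -0.7)))       # (1, 0, -1) -> an edge
```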

    Leveraging finger identification to integrate multi-touch command selection and parameter manipulation

    Identifying which fingers are touching a multi-touch surface provides a very large input space. We describe FingerCuts, an interaction technique inspired by desktop keyboard shortcuts to exploit this potential. FingerCuts enables integrated command selection and parameter manipulation; it uses feed-forward and feedback to increase discoverability; it is backward compatible with current touch input techniques; and it is adaptable to different touch device form factors. We implemented three variations of FingerCuts, each tailored to a different device form factor: tabletop, tablet, and smartphone. Qualitative and quantitative studies conducted on the tabletop suggest that with some practice, FingerCuts is expressive, easy to use, and increases a sense of continuous interaction flow, and that interaction with FingerCuts is as fast as, or faster than, using a graphical user interface. A theoretical analysis of FingerCuts using the Fingerstroke-Level Model (FLM) matches our quantitative study results, justifying our use of FLM to analyse and validate the performance for the other device form factors.
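
To make the integrated selection-plus-manipulation idea concrete, here is a hedged sketch (the hand/finger labels, command names, and gain are invented, not the FingerCuts implementation): the identified finger chooses a command, and the same touch then drags to adjust that command's parameter in one continuous action.

```python
# Hedged illustration of finger-identified shortcuts with integrated
# parameter manipulation; all mappings below are hypothetical.
COMMAND_MAP = {
    ("left", "index"): "brush-size",
    ("left", "middle"): "opacity",
    ("right", "index"): "zoom",
}

class FingerCutsLikeController:
    def __init__(self):
        self.active = None
        self.start_y = None

    def touch_down(self, hand, finger, x, y):
        # Which finger touched decides the command, like a keyboard shortcut.
        self.active = COMMAND_MAP.get((hand, finger))
        self.start_y = y
        return self.active

    def touch_move(self, x, y):
        # Dragging the same finger manipulates the command's parameter.
        if self.active is None:
            return None
        delta = self.start_y - y          # upward drag increases the value
        return self.active, delta * 0.5   # arbitrary gain

ctl = FingerCutsLikeController()
ctl.touch_down("left", "index", 120, 400)    # selects "brush-size"
print(ctl.touch_move(120, 360))              # ('brush-size', 20.0)
```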

    IMPROVING REVISITATION IN LONG DOCUMENTS WITH TWO-LEVEL ARTIFICIAL-LANDMARK SCROLLBARS

    Revisitation – returning to previously visited locations in a document – is commonly done in the digital world. While linear navigation controls provide a spatial representation of the document and allow effective navigation in short documents, they are not effective in long documents, particularly for revisitation. Bookmarks, search and history dialogs, and “read wear” (visual marks left as the user interacts with the document) can all assist revisitation; however, for long documents all of these tools are limited in terms of effort, clutter, and interpretability. Inspired by visual cues such as coloured edges and “thumb indents” in hardcopy books, recent work has proposed artificial landmarks to help users build up natural spatial memory for the locations in a document; in long documents, however, this technique is also limited because of the number of pages each landmark represents. To address this problem, this thesis proposes a Double-Scrollbar design that uses two columns of artificial landmarks to provide greater specificity for spatial memory and revisitation in long documents. We developed three versions of the landmark-augmented Double-Scrollbar, using icons, letters, and digits as landmarks. To assess the performance and usability of the Double-Scrollbar design, two studies were conducted with 21 participants, each visiting and revisiting pages of a long document using each of the new designs, as well as a single-column design and a standard scrollbar. Results showed that two levels of icon landmarks were significantly better for assisting revisitation, and were preferred by participants. The two-level artificial-landmark scrollbar is a new way of improving revisitation in long documents by assisting the formation of more precise spatial memories about document locations.
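
A small sketch of the two-level indexing idea, under assumed parameters (ten landmark icons per column; not the thesis implementation): a page maps to a coarse landmark from the first column plus a fine landmark within that group, so each icon pair identifies a much narrower page range than a single-column landmark could.

```python
# Illustrative two-level landmark indexing for a long document.
ICONS = "ABCDEFGHIJ"   # stand-ins for 10 landmark icons per column

def landmark_pair(page, total_pages, per_column=10):
    pages_per_group = total_pages / per_column
    group = min(int(page / pages_per_group), per_column - 1)
    within = page - group * pages_per_group
    sub = min(int(within / (pages_per_group / per_column)), per_column - 1)
    return ICONS[group], ICONS[sub]

# In a 500-page document each (coarse, fine) pair covers ~5 pages,
# instead of the ~50 pages covered by one single-column landmark.
print(landmark_pair(273, 500))   # ('F', 'E')
```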

    Use of Landmarks to Improve Spatial Learning and Revisitation in Computer Interfaces

    Efficient spatial location learning and remembering are just as important for two-dimensional Graphical User Interfaces (GUIs) as they are for real environments where locations are revisited multiple times. Rapid spatial memory development in GUIs, however, can be difficult because these interfaces often lack the adequate landmarks that people predominantly use to learn and recall real-life locations. In the absence of sufficient landmarks in GUIs, artificially created visual objects (i.e., artificial landmarks) could be used to support the development of spatial memory for interface locations. In order to understand how spatial memory development occurs in GUIs and to explore ways to assist users’ efficient location learning and recall, I carried out five studies exploring the use of landmarks in GUIs – one study that investigated the interfaces of four standard desktop applications (Microsoft Word, Facebook, Adobe Photoshop, and Adobe Reader), and four others that tested two prototype desktop GUIs (command selection interfaces and linear document viewers) augmented with artificial landmarks against non-landmarked versions; in addition, I tested landmarks’ use in variants of these interfaces that varied in the size of the command sets (small, medium, and large) and the types of linear documents (textual and video). Results indicate that GUIs’ existing features and design elements can serve as reliable landmarks that provide spatial benefits similar to those of real environments. I also show that artificial landmarks can significantly improve spatial memory development in GUIs, supporting rapid spatial location learning and remembering. Overall, this dissertation reveals that landmarks can be a valuable addition to graphical systems to improve the memorability and usability of GUIs.

    Enabling Expressive Keyboard Interaction with Finger, Hand, and Hand Posture Identification

    The input space of conventional physical keyboards is largely limited by the number of keys. To enable more actions than simply entering the symbol represented by a key, standard keyboards use combinations of modifier keys such as command, alternate, or shift to re-purpose the standard text entry behaviour. To explore alternatives to conventional keyboard shortcuts and enable more expressive keyboard interaction, this thesis first presents Finger-Aware Shortcuts, which encode information from finger, hand, and hand posture identification as keyboard shortcuts. By detecting the hand and finger used to press a key, and an open or closed hand posture, a key press can have multiple command mappings. A formative study revealed the performance and preference patterns when using different fingers and postures to press a key. The results were used to develop a computer vision algorithm that identifies fingers and hands on a keyboard captured by a built-in laptop camera and a reflector. This algorithm was built into a background service to enable system-wide Finger-Aware Shortcut keys in any application. A controlled experiment used the service to compare the performance of Finger-Aware Shortcuts with existing methods. The results showed that Finger-Aware Shortcuts are comparable with a common class of shortcuts using multiple modifier keys. Several application demonstrations illustrate different use cases and mappings for Finger-Aware Shortcuts. To further explore how introducing finger awareness can help foster the learning and use of keyboard shortcuts, an interview study was conducted with expert computer users to identify the likely causes that hinder the adoption of keyboard shortcuts. Based on this, the concept of Finger-Aware Shortcuts is extended and two guided keyboard shortcut techniques are proposed: FingerArc and FingerChord. The two techniques provide dynamic visual guidance on the screen when users press and hold an alphabetical key semantically related to a set of commands. FingerArc differentiates these commands by examining the angle between the thumb and index finger; FingerChord differentiates them by allowing users to press different key areas with a second finger. The thesis contributes comprehensive evaluations of Finger-Aware Shortcuts and proof-of-concept demonstrations of FingerArc and FingerChord. Together, they contribute a novel interaction space that expands the conventional keyboard input space with greater expressivity.
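
A minimal sketch of how such shortcuts could be dispatched, with invented bindings (the source does not specify these mappings): the command is looked up from the pressed key together with the vision-derived hand, finger, and posture, so one physical key can carry several commands.

```python
# Hypothetical Finger-Aware Shortcut dispatcher; all bindings are invented.
BINDINGS = {
    # (key, hand, finger, posture) -> command
    ("c", "left", "index", "open"):    "copy",
    ("c", "left", "middle", "open"):   "copy-with-style",
    ("c", "right", "index", "closed"): "open-color-picker",
}

def dispatch(key, hand, finger, posture):
    """Resolve a key press plus vision-derived finger/posture info to a command."""
    return BINDINGS.get((key, hand, finger, posture), f"type '{key}'")

print(dispatch("c", "left", "middle", "open"))   # copy-with-style
print(dispatch("c", "right", "ring", "open"))    # falls back to typing 'c'
```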

    Manipulation, Learning, and Recall with Tangible Pen-Like Input

    We examine two key human performance characteristics of a pen-like tangible input device that executes a different command depending on which corner, edge, or side contacts a surface. The manipulation time when transitioning between contacts is examined using physical mock-ups of three representative device sizes and a baseline pen mock-up. Results show the largest device is fastest overall, with minimal differences from a pen for equivalent transitions. Using a hardware prototype able to sense all 26 different contacts, a second experiment evaluates learning and recall. Results show almost all 26 contacts can be learned in a two-hour session, with an average of 94% recall after 24 hours. The results provide empirical evidence for the practicality, design, and utility of this type of tangible pen-like input.

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To obtain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first piece investigates whether or not designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations -- beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration spawning from our work.
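
As a hedged sketch of the design lever studied in the first use case (the delay value and function names are assumptions, not the thesis implementation), penalizing the guided mode can be as simple as inserting a deliberate delay before the visual guide appears, so that performing the unistroke from memory is always the faster path.

```python
# Hypothetical penalty on the guided novice mode: the guide only appears
# after a deliberate delay, making recall-based input the quicker option.
import time

GUIDE_DELAY_S = 1.5   # assumed penalty; the thesis studies whether this helps

def handle_gesture_start(recognized_from_memory, show_guide):
    if recognized_from_memory:
        return "recall"          # expert path: gesture executed immediately
    time.sleep(GUIDE_DELAY_S)    # novice path pays a deliberate time penalty
    show_guide()                 # then the visual guide is displayed
    return "guided"

handle_gesture_start(False, lambda: print("showing gesture guide"))
```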