
    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and its associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first study investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target, or recall, modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations -- beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration stemming from our work.
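    As a concrete illustration of the guided-to-recall transition described above, the following minimal Python sketch models a marking-menu style command where the visual menu appears only after a press-and-hold delay, and where a designer can impose an extra wait to penalize reliance on guidance. All names, commands, and timing values here are illustrative assumptions, not the thesis's actual apparatus; only the ~300 ms menu delay follows the usual marking-menu convention.

import time

# Illustrative constants; the penalty knob corresponds to the idea of
# "penalizing use of the guided mode" studied in the first experiment.
MENU_DELAY_S = 0.3        # hold this long and the guided menu is shown
GUIDANCE_PENALTY_S = 1.0  # extra wait imposed whenever guidance was used

COMMANDS = {"N": "copy", "E": "paste", "S": "cut", "W": "undo"}

def resolve_stroke(direction: str, hold_time: float) -> str:
    """Map a stroke direction to a command, charging a penalty
    whenever the user waited long enough to see the guided menu."""
    used_guidance = hold_time >= MENU_DELAY_S
    if used_guidance:
        time.sleep(GUIDANCE_PENALTY_S)  # discourage reliance on recognition
    mode = "novice/guided" if used_guidance else "expert/recall"
    return f"{COMMANDS[direction]} ({mode})"

if __name__ == "__main__":
    print(resolve_stroke("N", hold_time=0.05))  # fast stroke: recall mode
    print(resolve_stroke("E", hold_time=0.50))  # hesitated: guided + penalty

    The key design lever is that both modes execute the same stroke vocabulary, so practice in the guided mode transfers directly to the recall mode; the penalty only changes the cost of waiting for guidance.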

    Designing Guiding Systems for Gesture-Based Interaction

    2D or 3D gesture commands are still not routinely adopted, despite technological advances in gesture tracking. The fact that gesture commands are not self-revealing is a bottleneck for this adoption. Guiding novice users is therefore crucial in order to reveal what commands are available and how to trigger them. However, guiding systems are mainly designed in an ad hoc manner. Even where isolated design characteristics exist, they concentrate on a limited number of guidance aspects. We hence present a design space that unifies and completes these studies by providing a coherent set of issues for designing the behavior of a guiding system. We distinguish Feedback and Feedforward and consider four questions: When, What, How, and Where. To support efficient use of our design space, we provide an online tool and illustrate with scenarios how practitioners can use it.
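    The abstract names the axes of the design space (Feedback versus Feedforward, crossed with the When, What, How, and Where questions), so a point in that space can be encoded as a small typed record. The Python sketch below does exactly that; the concrete answer values are illustrative placeholders, since the abstract does not enumerate the options along each axis.

from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FEEDBACK = "feedback"        # informs the user about the gesture in progress
    FEEDFORWARD = "feedforward"  # reveals available commands before or during input

@dataclass(frozen=True)
class GuidanceDesign:
    """One point in the design space: a guidance kind plus an answer
    to each of the four behavioral questions."""
    kind: Kind
    when: str   # e.g. "on hesitation", "always"
    what: str   # e.g. "gesture shape", "command name"
    how: str    # e.g. "static overlay", "animated trace"
    where: str  # e.g. "at cursor", "screen edge"

# Hypothetical example: a crib-sheet style feedforward guide shown on hesitation.
crib_sheet = GuidanceDesign(
    kind=Kind.FEEDFORWARD,
    when="on hesitation",
    what="gesture shape",
    how="static overlay",
    where="at cursor",
)
print(crib_sheet)

    Framing each guiding system as such a record makes comparisons systematic: two systems differ exactly where their field values differ, which is the kind of coherent coverage the unified design space is meant to provide.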