Fishing or a Z?: investigating the effects of error on mimetic and alphabet device-based gesture interaction
While gesture taxonomies provide a classification of device-based gestures in terms of communicative intent, little work has addressed the usability differences in manually performing these gestures. In this primarily qualitative study, we investigate how two sets of iconic gestures that vary in familiarity, mimetic and alphabetic, are affected under varying failed-recognition error rates (0-20%, 20-40%, 40-60%). Drawing on experiment logs, video observations, subjects' feedback, and a subjective workload assessment questionnaire, the results revealed two main findings: a) mimetic gestures tend to evolve into diverse variations (within the activities they mimic) under high error rates, while alphabet gestures tend to become more rigid and structured, and b) mimetic gestures were tolerated at recognition error rates of up to 40%, while alphabet gestures incurred significant overall workload at error rates as low as 20%. Thus, while alphabet gestures are more robust to recognition errors in keeping their signature, mimetic gestures are more robust to recognition errors from a usability and user experience standpoint, and are thus better suited for inclusion in mainstream device-based gesture interaction with mobile phones.
Perceptible affordances and feedforward for gestural interfaces: Assessing effectiveness of gesture acquisition with unfamiliar interactions
The move towards touch-based interfaces disrupts the established ways in which users manipulate and control graphical user interfaces. The predominant mode of interaction established by the desktop interface is to ‘double-click’ an icon in order to open an application, file or folder. Icons show users where to click, and their shape, colour and graphic style suggest how they respond to user action. In sharp contrast, in a touch-based interface, an action may require a user to form a gesture with a certain number of fingers, a particular movement, and in a specific place. Often, none of this is suggested in the interface.
This thesis adopts the approach of research through design to address the problem of how to inform the user about which gestures are available in a given touch-based interface, how to perform each gesture, and, finally, the effect of each gesture on the underlying system. Its hypothesis is that presenting automatic, animated visual prompts that depict touch and preview gesture execution will mitigate the problems users encounter when they execute commands within unfamiliar gestural interfaces. Moreover, the thesis argues for a new framework to assess the efficiency of gestural UI designs. A significant aspect of this new framework is a rating system that was used to assess distinct phases within the users’ evaluation and execution of a gesture.
In order to support the thesis hypothesis, two empirical studies were conducted. The first introduces the visual prompts in support of training participants in unfamiliar gestures and gauges participants’ interpretation of their meaning. The second study consolidates the design features that yielded lower error rates in the first study and assesses different interaction techniques, such as when to display the visual prompt. Both studies demonstrate the benefits of providing visual prompts to improve user awareness of available gestures. In addition, both studies confirm the efficiency of the rating system in identifying the most common problems users have with gestures and possible design features to mitigate such problems.
The thesis contributes: 1) a gesture-and-effect model and a corresponding rating system that can be used to assess gestural user interfaces, 2) the identification of common problems users have with unfamiliar gestural interfaces and design recommendations to mitigate these problems, and 3) a novel design technique that improves user awareness of unfamiliar gestures within novel gestural interfaces.