
Expression Constraints in Multimodal Human-Computer Interaction

By Sandrine Robbe-Reiter, Noëlle Carbonell and Pierre Dauchy

Abstract

Thanks to recent scientific advances, it is now possible to design multimodal interfaces allowing the use of speech and gestures on a touchscreen. However, present speech recognizers and natural language interpreters cannot yet process spontaneous speech accurately. These limitations make it necessary to impose constraints on users' speech inputs. Ergonomic studies are therefore needed to provide user interface designers with efficient guidelines for defining usable speech constraints. We developed a method for designing oral and multimodal (speech + 2D gestures) command languages that can be interpreted reliably by present systems and are easy to learn through human-computer interaction (HCI). The empirical study presented here contributes to assessing the usability of such artificial languages in a realistic software environment. Analyses of the multimodal protocols collected indicate that all subjects were able to rapidly assimilate the given expression constraints, mainly whi…

Topics: speech constraints, usability
Publisher: ACM Press
Year: 2000
OAI identifier: oai:CiteSeerX.psu:10.1.1.36.1503
Provided by: CiteSeerX
The full text is not available here, but it may be found at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://lieber.www.media.mit.ed...

