
A Toolkit for Multimodal Interface Design: An Empirical Investigation

By Dimitrios I. Rigas and M. Alsuraihi


This paper introduces a comparative multi-group study carried out to investigate the use of multimodal interaction metaphors (visual, oral, and aural) for improving the learnability (usability on first-time use) of interface-design environments. An initial survey gathered views on the effectiveness of, and satisfaction with, employing speech and speech recognition to solve some common usability problems. The investigation then empirically tested the usability parameters efficiency, effectiveness, and satisfaction across three design toolkits (TVOID, OFVOID, and MMID) built especially for the study. TVOID and OFVOID interacted with the user visually only, using typical and time-saving interaction metaphors. The third environment, MMID, added another modality through vocal and aural interaction. The results showed that using vocal commands and the mouse concurrently to complete tasks on first-time use was more efficient and more effective than using visual-only interaction metaphors.

Topics: Speech recognition, Text-to-speech, Interface design, Usability, Learnability, Effectiveness, Efficiency, Satisfaction, Visual, Oral, Aural, Multimodal, Auditory icons, Earcons, Speech, Voice instruction
Year: 2007
DOI identifier: 10.1007/978-3-540-73110-8_21
Provided by: Bradford Scholars
