7 research outputs found

    The Efficiency of Multimodal Interaction for a Map-based Task

    This paper compares the efficiency of using a standard direct-manipulation graphical user interface (GUI) with that of using the QuickSet pen/voice multimodal interface for supporting a military task. In this task, a user places military units and control measures (e.g., various types of lines, obstacles, objectives) on a map. Four military personnel designed and entered their own simulation scenarios via both interfaces. Analyses revealed that the multimodal interface led to a 3.5-fold speed improvement in average entity creation time, including all error handling. The mean time to repair errors was also 4.3 times faster when interacting multimodally. Finally, all subjects reported a strong preference for multimodal interaction. These results indicate a substantial efficiency advantage for multimodal over GUI-based interaction during map-based tasks.

    Topic Area: Spoken Language and multimodal systems, Evaluation of performance of complete NLP systems

    QuickSet: A Multimodal Interface for Distributed Interactive Simulation

    We demonstrate QuickSet, a wireless, handheld, collaborative system that can be used to control distributed interactive simulations based on the ModSAF simulator [3] and a 3-D terrain visualization system called CommandVu. Together with the CommandTalk spoken interaction component from SRI International [4], these form the LeatherNet system [1]. With QuickSet, users can formulate a military scenario by creating, positioning, and editing units, supplying them with behavior, specifying fortifications, objectives, and other points, etc. In contrast to the original ModSAF GUI, users of QuickSet can employ multiple input modalities, including speech, gesture, and direct manipulation, to suit the task and situation at hand. QuickSet operates on a 3-lb. handheld 100 MHz 486 PC (Fujitsu Stylistic 1000), as well as on desktop PCs, employing wireless LAN communications, a color screen, microphone, pen stylus, onboard speech recognition, gesture recognition, natural language processing, …

    QuickSet: Multimodal Interaction for Simulation Set-up and Control

    This paper presents a novel multimodal system applied to the setup and control of distributed interactive simulations.