Word Learning In A Multimodal Environment

We are creating human-machine interfaces which let people communicate with machines using natural modalities, including speech and gesture. A problem with current multimodal interfaces is that users are forced to learn the set of words and gestures which the interface understands. We report on a trainable interface which lets the user teach the system words of their choice through natural multimodal interactions.

1. PROBLEM

Most current human-machine interfaces which use natural modalities such as speech and gesture force the user to learn which words and gestures the system understands before the system can be used (see [9], [10], or [4]; a notable exception is [2]). For example, an interface designer who wishes to use speech input must choose the vocabulary which the system will understand. If the user strays from this vocabulary, the system will not respond correctly. The semantics of the words must also be defined by the interface designer, and may likewise fail to match the expectations of the user.
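To make the idea of a trainable vocabulary concrete, here is a minimal sketch of how an interface could associate spoken words with the objects a user gestures at while speaking. This is an illustrative assumption, not the system described in the paper; the class name MultimodalWordLearner, the co-occurrence counting rule, and the min_count threshold are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): learn word-referent
# associations from co-occurring speech tokens and gestured-at objects.
from collections import Counter, defaultdict


class MultimodalWordLearner:
    """Hypothetical trainable vocabulary: maps spoken words to referents
    by counting how often a word co-occurs with a pointed-at object."""

    def __init__(self, min_count: int = 3):
        self.min_count = min_count  # evidence needed before a word counts as learned
        self.cooccurrence = defaultdict(Counter)  # word -> Counter of referents

    def observe(self, spoken_words: list[str], gesture_target: str) -> None:
        """Record one teaching interaction: an utterance plus the object
        the user gestured at while speaking."""
        for word in spoken_words:
            self.cooccurrence[word.lower()][gesture_target] += 1

    def resolve(self, word: str) -> str | None:
        """Return the referent most strongly associated with the word,
        or None if the word has not been taught often enough yet."""
        counts = self.cooccurrence.get(word.lower())
        if not counts:
            return None
        referent, count = counts.most_common(1)[0]
        return referent if count >= self.min_count else None


if __name__ == "__main__":
    learner = MultimodalWordLearner(min_count=2)
    # The user points at a lamp and speaks, over a few interactions.
    learner.observe(["turn", "on", "the", "lamp"], gesture_target="lamp_1")
    learner.observe(["the", "lamp", "please"], gesture_target="lamp_1")
    print(learner.resolve("lamp"))  # -> "lamp_1"
```

Counting co-occurrences is only one plausible learning rule; the point it illustrates is that the vocabulary is populated by the user's own interactions rather than fixed in advance by the interface designer.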