Mind: A context-based multimodal interpretation framework in conversational systems

By Joyce Y. Chai, Shimei Pan and Michelle X. Zhou

Abstract

In a multimodal human-machine conversation, user inputs are often abbreviated or imprecise. Simply fusing multimodal inputs together may not be sufficient to derive a complete understanding of the inputs. Aiming to handle a wide variety of multimodal inputs, we are building a context-based multimodal interpretation framework called MIND (Multimodal Interpreter for Natural Dialog). MIND is unique in its use of a variety of contexts, such as domain context and conversation context, to enhance multimodal interpretation. In this chapter, we first describe a fine-grained semantic representation that captures salient information from user inputs and the overall conversation, and then present a context-based interpretation approach that enables MIND to reach a full understanding of user inputs, including those that are abbreviated or imprecise.
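
To make the abstract's idea concrete, the following is a minimal Python sketch of how context-based interpretation can resolve an abbreviated input: a gesture supplies the referent when one accompanies the speech, and conversation context fills the gap otherwise. All class names, fields, and the house example below are illustrative assumptions, not the authors' actual MIND implementation.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class SpeechInput:
        text: str                         # e.g. "show me this house"

    @dataclass
    class GestureInput:
        selected_object: str              # object picked out by pointing/clicking

    @dataclass
    class ConversationContext:
        salient_objects: list = field(default_factory=list)  # recently discussed

    def interpret(speech: SpeechInput,
                  gesture: Optional[GestureInput],
                  context: ConversationContext) -> dict:
        """Resolve an abbreviated or imprecise input to a full interpretation.

        Plain fusion covers the case where a gesture supplies the referent;
        when no gesture accompanies the speech, conversation context fills
        the gap, which fusion alone cannot do.
        """
        if gesture is not None:
            referent = gesture.selected_object      # fusion: gesture disambiguates
        elif context.salient_objects:
            referent = context.salient_objects[-1]  # context: most salient object
        else:
            referent = None                         # unresolved: ask a clarification

        if referent is not None:
            context.salient_objects.append(referent)  # update conversation context
        return {"request": speech.text, "referent": referent}

    ctx = ConversationContext()
    # Turn 1: "show me this house" + pointing gesture -> resolved by fusion.
    interpret(SpeechInput("show me this house"), GestureInput("house_12"), ctx)
    # Turn 2: "what is its price" with no gesture -> resolved from context.
    print(interpret(SpeechInput("what is its price"), None, ctx))
    # {'request': 'what is its price', 'referent': 'house_12'}

The second turn illustrates the kind of abbreviated input the abstract describes: the speech alone is ambiguous, and only the accumulated conversation context lets the system reach a full interpretation.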

Topics: Multimodal input interpretation, multimodal interaction, conversation systems
Publisher: Kluwer Academic Publishers
Year: 2005
OAI identifier: oai:CiteSeerX.psu:10.1.1.134.5829
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://www.cse.msu.edu/~jchai/...

