
    Von Textgeneratoren zu Intellimedia-Präsentationssystemen

    While text generators rely exclusively on a single medium, language, intellimedia presentation systems exploit the individual strengths of several media, such as text, graphics, gestures, and animation, to present information. On the one hand, this more general communication task raises interesting new problems, in particular the selection and coordination of media; on the other hand, it leads to a more comprehensive treatment of problems already known from text generation. The paper presents the first generation of NL-processing intellimedia presentation systems, sketches the new problems, and focuses on the question of how methods for text generation can be generalized so that they become applicable to the production of multimedia presentations as well.
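    The media-selection problem mentioned above can be illustrated with a minimal sketch. The content categories, the preference table, and the function name below are illustrative assumptions, not taken from any of the systems the abstract discusses:

    ```python
    # Minimal sketch of a media-selection rule for a multimedia
    # presentation planner: each kind of content has an ordered list
    # of preferred media, and the planner picks the most preferred
    # medium that is actually available in the current setting.
    MEDIA_PREFERENCES = {
        "spatial": ["graphic", "animation", "text"],   # locations, shapes
        "temporal": ["animation", "text"],             # processes over time
        "abstract": ["text"],                          # causes, conditions
        "deictic": ["gesture", "graphic"],             # pointing at referents
    }

    def select_medium(content_type, available_media):
        """Pick the most preferred medium that is actually available."""
        for medium in MEDIA_PREFERENCES.get(content_type, ["text"]):
            if medium in available_media:
                return medium
        return "text"  # fall back to language, which can express anything

    print(select_medium("spatial", {"text", "graphic"}))  # -> graphic
    print(select_medium("temporal", {"text"}))            # -> text
    ```

    A real presentation planner would additionally have to coordinate the chosen media (e.g. cross-referencing a graphic from the text), which this one-shot rule deliberately leaves out.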

    Integrating Natural Language Components into Graphical Discourse

    In our current research into the design of cognitively well-motivated interfaces relying primarily on the display of graphical information, we have observed that graphical information alone does not provide sufficient support to users, particularly when situations arise that do not simply conform to the users' expectations. This can occur because too much information was requested, too little, information of the wrong kind, etc. To solve this problem, we are working towards the integration of natural language generation to augment the interaction functionalities of the interface. This is intended to support the generation of flexible natural language utterances which pinpoint possible problems with a user's request and which further go on to outline the user's most sensible courses of action away from the problem. In this paper, we describe our first prototype, in which we combine the graphical and interaction planning capabilities of our graphical information system SIC! with the text generation capabilities of the Penman system. We illustrate the need for such a combined system, and also give examples of how a general natural language facility beneficially augments the user's ability to navigate a knowledge base graphically.
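    The decision the abstract describes, namely when a graphical answer needs an accompanying natural language remark, can be sketched roughly as follows. The function, thresholds, and message wording are hypothetical illustrations, not the actual SIC!/Penman behavior:

    ```python
    # Hypothetical sketch: decide whether a graphical query result
    # deviates from expectations (empty, or too large to display) and,
    # if so, produce an NL comment pointing the user towards a remedy.
    def comment_on_result(n_hits, max_displayable=50):
        """Return an NL remark when the result set defies expectations,
        or None when the graphical display alone is sufficient."""
        if n_hits == 0:
            return ("No objects match your request; "
                    "try relaxing one of its constraints.")
        if n_hits > max_displayable:
            return (f"{n_hits} objects match, but only {max_displayable} "
                    "can be shown; consider narrowing your request.")
        return None

    print(comment_on_result(0))
    print(comment_on_result(200))
    print(comment_on_result(10))  # -> None: graphics suffice
    ```

    The point of the sketch is the division of labor: the graphical system detects the deviant situation, and language generation takes over to explain it and suggest a course of action.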