Recommended from our members
Using Analogies in Natural Language Generation
Any system with explanatory capabilities must be able to generate descriptions of concepts defined in its knowledge base. The use of analogies to highlight selected features in these descriptions can greatly enhance their effectiveness, as analogies are a powerful and compact means of communicating ideas and descriptions. In this paper, we describe a system that can make use of analogies in generating descriptions. We outline the differences between using analogies in problem solving and using them in language generation, and show how the discourse structure kept by our generation system provides knowledge that aids in finding an acceptable analogy to express
Generating Explanatory Captions for Information Graphics
Graphical presentations can be used to communicate information in relational data sets succinctly and effectively. However, novel graphical presentations about numerous attributes and their relationships are often difficult to understand completely until explained. Automatically generated graphical presentations must therefore either be limited to simple, conventional ones, or risk incomprehensibility. One way of alleviating this problem is to design graphical presentation systems that can work in conjunction with a natural language generator to produce "explanatory captions." This paper presents three strategies for generating explanatory captions to accompany information graphics based on: (1) a representation of the structure of the graphical presentation, (2) a framework for identifying the perceptual complexity of graphical elements, and (3) the structure of the data expressed in the graphic. We describe an implemented system and illustrate how it is used to generate explanatory captions.
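The second strategy above, captioning based on perceptual complexity, can be sketched minimally as follows. This is an illustrative assumption, not the paper's method: the element types, their complexity scores, and the threshold are all hypothetical.

```python
# Hypothetical sketch: flag graphical elements whose perceptual complexity
# exceeds a threshold and emit an explanatory caption sentence for each.

COMPLEXITY = {  # assumed complexity scores per encoding type
    "bar": 1,
    "line": 2,
    "stacked_bar": 3,
    "dual_axis": 4,
}

def caption_plan(elements: list[str], threshold: int = 3) -> list[str]:
    """Return one caption sentence per element whose perceptual
    complexity meets or exceeds the threshold."""
    return [
        f"The {e.replace('_', ' ')} encoding may need explanation."
        for e in elements
        if COMPLEXITY.get(e, 0) >= threshold
    ]
```

For example, `caption_plan(["bar", "stacked_bar", "dual_axis"])` would caption only the two perceptually complex encodings, leaving the simple bar chart uncaptioned.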
Detecting Knowledge Base Inconsistencies Using Automated Generation of Text and Examples
Verifying the fidelity of domain representation in large knowledge bases (KBs) is a difficult problem: domain experts are typically not experts in knowledge representation languages, and as knowledge bases grow more complex, visual inspection of the various terms and their abstract definitions, their inter-relationships and the limiting, boundary cases becomes much harder. This paper presents an approach to help verify and refine abstract term definitions in knowledge bases. It assumes that it is easier for a domain expert to determine the correctness of individual concrete examples than it is to verify and correct all the ramifications of an abstract, intensional specification. To this end, our approach presents the user with an interface in which abstract terms in the KB are described using examples and natural language generated from the underlying domain representation. Problems in the KB are therefore manifested as problems in the generated description. The user ca..
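The core assumption of this approach, that concrete examples near a definition's boundaries are easier for a domain expert to judge than the abstract definition itself, can be sketched as follows. The term, its attributes, and the eligibility rule are all invented for illustration; nothing here comes from the paper's actual KB.

```python
# Hedged sketch: an abstract term defined intensionally over simple attributes,
# with concrete boundary cases rendered in natural language for expert review.

def is_senior_discount_eligible(age: int, is_member: bool) -> bool:
    # Hypothetical intensional definition under review. The boundaries at
    # 60 and 65 are exactly the details an expert can check on concrete cases.
    return age >= 65 or (is_member and age >= 60)

def boundary_examples() -> list[str]:
    """Enumerate concrete cases near the definition's boundaries and
    render each as a natural-language statement a domain expert can
    confirm or reject."""
    cases = [(59, True), (60, True), (64, False), (65, False)]
    lines = []
    for age, member in cases:
        verdict = "is" if is_senior_discount_eligible(age, member) else "is NOT"
        member_txt = "a member" if member else "not a member"
        lines.append(f"A {age}-year-old who is {member_txt} {verdict} eligible.")
    return lines
```

If the expert rejects one of the generated statements, the problem is localized to a specific boundary of the abstract definition rather than to the definition as a whole.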
Categorizing Example Types in Context: Applications for the Generation of Tutorial Descriptions
Different situations may require the presentation of different types of examples. For instance, some situations require the presentation of positive examples only, while others require both positive and negative examples. Furthermore, different examples often have specific presentation requirements: they need to appear in an appropriate sequence, be introduced properly, and often require associated prompts. It is important to be able to identify what is needed in which case, and what needs to be done in presenting the example. A categorization of examples, along with their associated presentation requirements, would help tremendously. This issue is particularly salient in the design of a computational framework for the generation of tutorial descriptions which include examples. Previous work on characterizing examples has approached the issue from the direction of when different types of examples should be provided, rather than what characterizes the different types. In this paper, we extend previous work on example characterization in two ways: (i) we show that the scope of the characterization must be extended to include not just the example, but also the surrounding context, and (ii) we characterize examples in terms of three orthogonal dimensions: the information content, the intended audience, and the knowledge type. We present descriptions from textbooks on LISP to illustrate our points, and describe how such categorizations can be effectively used by a computational system to generate descriptions that incorporate examples.
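The three orthogonal dimensions and their mapping to presentation requirements can be sketched as a data structure. This is a minimal illustration, assuming specific dimension values and requirement rules that the abstract does not spell out; all names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Polarity(Enum):        # information content: positive vs. negative example
    POSITIVE = "positive"
    NEGATIVE = "negative"

class Audience(Enum):        # intended audience
    NOVICE = "novice"
    ADVANCED = "advanced"

class KnowledgeType(Enum):   # kind of knowledge being taught
    CONCEPT = "concept"
    PROCEDURE = "procedure"

@dataclass
class ExampleSpec:
    """An example categorized along three orthogonal dimensions."""
    polarity: Polarity
    audience: Audience
    knowledge: KnowledgeType

def presentation_requirements(spec: ExampleSpec) -> list[str]:
    """Map a categorized example to its presentation requirements
    (sequencing, introduction, associated prompts)."""
    reqs = []
    if spec.polarity is Polarity.NEGATIVE:
        # negative examples should follow a positive one and be flagged
        reqs += ["present after a positive example",
                 "mark explicitly as a non-example"]
    if spec.audience is Audience.NOVICE:
        reqs += ["introduce with an explanatory prompt"]
    if spec.knowledge is KnowledgeType.PROCEDURE:
        reqs += ["present steps in execution order"]
    return reqs
```

For instance, a negative concept example aimed at novices would be sequenced after a positive example, flagged as a non-example, and introduced with a prompt.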
Ultra-Summarization: A Statistical Approach to Generating Highly Condensed Non-Extractive Summaries
Using current extractive summarization techniques, it is impossible to produce a coherent document summary shorter than a single sentence, or to produce a summary that conforms to particular stylistic constraints. Ideally, one would prefer to understand the document, and to generate an appropriate summary directly from the results of that understanding. Absent a comprehensive natural language understanding system, an approximation must be used. This paper presents an alternative statistical model of a summarization process, which jointly applies statistical models of the term selection and term ordering process to produce brief coherent summaries in a style learned from a training corpus.
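The joint application of a term-selection model and a term-ordering model can be illustrated with a greedy toy decoder. This is only a sketch of the general idea, not the paper's model: the probabilities below are invented stand-ins for scores that would be learned from a corpus of document-summary pairs.

```python
# Toy models (hypothetical numbers). selection[w] stands in for a learned
# probability that word w appears in a summary of the document; bigram gives
# a learned ordering model P(w | previous word).
selection = {"court": 0.9, "rules": 0.8, "against": 0.7, "merger": 0.9, "the": 0.3}
bigram = {
    ("<s>", "court"): 0.5,
    ("court", "rules"): 0.6,
    ("rules", "against"): 0.5,
    ("against", "merger"): 0.6,
    ("merger", "</s>"): 0.7,
}

def greedy_summary(max_len: int = 4) -> str:
    """Greedily emit the word maximizing P(select) * P(word | prev),
    jointly scoring selection and ordering at each step."""
    words, prev = [], "<s>"
    for _ in range(max_len):
        candidates = [
            (selection[w] * p, w)
            for (h, w), p in bigram.items()
            if h == prev and w != "</s>" and w in selection and w not in words
        ]
        if not candidates:
            break
        _, best = max(candidates)
        words.append(best)
        prev = best
    return " ".join(words)

# → "court rules against merger"
```

A real system would search over many candidate orderings (e.g. with beam search) rather than decoding greedily, but the joint selection-times-ordering score is the essential idea.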
Dynamic Generation of Follow up Question Menus: Facilitating Interactive Natural Language Dialogues
Most complex systems provide some form of help facilities. However, such help facilities typically do not allow users to ask follow up questions or request further elaborations when they are not satisfied with the system's initial offering. One approach to alleviating this problem is to present the user with a menu of possible follow up questions at every point. Limiting follow up information requests to choices in a menu has many advantages, but there are also a number of issues that must be dealt with in designing such a system. To dynamically generate useful embedded menus, the system must be able to, among other things, determine the context of the request, represent and reason about the explanations presented to the user, and limit the number of choices presented in the menu. This paper discusses such issues in the context of a patient education system that generates a natural language description in which the text is directly manipulable -- clicking on portions of the text causes the system to generate menus that can be used to request elaborations and further information.
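The three requirements named above, determining the context of a click, tracking what has already been explained, and bounding menu size, can be sketched together. The clicked terms, candidate questions, and limit of three items are all hypothetical illustrations, not details from the paper's patient education system.

```python
# Hypothetical sketch: map a clicked portion of generated text to a bounded
# menu of follow-up questions, filtered against what the dialogue has
# already presented.

FOLLOW_UPS = {  # assumed candidate questions per clickable term
    "dosage": ["Why this dosage?",
               "What if I miss a dose?",
               "Can the dosage change?"],
    "side effects": ["How common are these side effects?",
                     "What should I do if they occur?"],
}

def menu_for(clicked_term: str,
             already_explained: set[str],
             max_items: int = 3) -> list[str]:
    """Return up to max_items follow-up questions for the clicked term,
    skipping questions whose answers were presented earlier."""
    candidates = FOLLOW_UPS.get(clicked_term, [])
    return [q for q in candidates if q not in already_explained][:max_items]
```

As the dialogue progresses and questions are answered, the same click yields a shorter menu, so the user is never re-offered an elaboration they have already seen.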