246 research outputs found
Interest and Predictability: Deciding What to Learn, When to Learn
Inductive learning, which involves largely structural comparisons of examples, and explanation-based learning, a knowledge-intensive method for analyzing examples to build generalized schemas, are two major learning techniques used in AI. In this paper, we show how a combination of the two methods - applying generalization-based techniques during the course of inductive learning - can achieve the power of explanation-based learning without some of the computational problems that arise in domains lacking detailed explanatory rules. We show how the ideas of predictability and interest can be particularly valuable in this setting
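The predictability/interest distinction described above can be illustrated with a minimal sketch. This is hypothetical code, not the paper's implementation: an example whose features are already largely predicted by an existing generalization is uninteresting, while one that deviates is worth deeper analysis. The feature dictionaries, threshold, and function names are all invented here.

```python
# Hypothetical sketch of "deciding what to learn": an example is worth
# learning from only when it is not fully predicted by existing knowledge.
# All names and the 0.75 threshold are illustrative, not from the paper.

def predictability(example, generalization):
    """Fraction of the example's features already implied by the generalization."""
    if not example:
        return 1.0
    shared = sum(1 for f, v in example.items() if generalization.get(f) == v)
    return shared / len(example)

def worth_learning(example, generalization, threshold=0.75):
    """An example is 'interesting' (worth deeper analysis) when its
    predictability falls below the threshold."""
    return predictability(example, generalization) < threshold

gen = {"color": "red", "shape": "round", "size": "small"}
novel = {"color": "blue", "shape": "round", "size": "small"}   # deviates
boring = {"color": "red", "shape": "round", "size": "small"}   # fully predicted
```

On this toy data, `novel` triggers learning while `boring` is ignored, which is the economy the abstract argues for: effort is spent only where memory fails to predict.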
Putting Pieces Together: Understanding Patent Abstracts
One aspect of the development of RESEARCHER (Lebowitz 83a), an intelligent information system that reads, remembers and learns from patent abstracts, is the use of strongly semantic-based text understanding methods. We show in this paper how patent abstracts can be processed by using only very simple syntactic rules to identify "pieces" of the ultimate representation and then "putting the pieces together." An example of RESEARCHER processing a sample abstract is shown
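The "pieces" strategy above can be sketched in a few lines. This is not RESEARCHER's code; the lexicon, relation words, and data structures are invented here purely to show the idea of identifying representation pieces with minimal syntax and then linking them.

```python
# Illustrative sketch (not RESEARCHER itself): known nouns become "pieces"
# of the representation; relation words between them "put the pieces
# together". Lexicon and relation table are hypothetical.

LEXICON = {"housing": "PART", "disk": "PART", "spindle": "PART"}
RELATIONS = {"inside": "contained-in", "on": "mounted-on"}

def find_pieces(tokens):
    """Each known part noun becomes a piece of the final representation."""
    return [{"part": t} for t in tokens if LEXICON.get(t) == "PART"]

def put_together(tokens):
    """Link a new piece to the previous one via any pending relation word."""
    pieces = find_pieces(tokens)
    links, rel, seen = [], None, []
    for t in tokens:
        if t in RELATIONS:
            rel = RELATIONS[t]
        elif LEXICON.get(t) == "PART":
            if rel and seen:
                links.append((seen[-1], rel, t))
                rel = None
            seen.append(t)
    return pieces, links
```

Running `put_together` on "a disk inside the housing" yields two pieces and one `contained-in` link, with no grammatical parse of the phrase ever being built.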
Ill-Formed Text and Conceptual Processing
In this paper, we discuss the problem of ill-formed (or incorrectly processed) text in the context of conceptual analysis text processing systems. We show that syntactically ill-formed text is not a major problem for such systems. Conceptually ill-formed text and conceptually ill-formed representations of text do cause interesting problems. We define conceptual ill-formedness and then present ideas for how it can be handled in the context of two text processing systems, IPP and RESEARCHER
Concept Learning in a Rich Input Domain: Generalization-Based Memory
Automatic concept learning from large amounts of complex input data is an interesting and difficult process. In this paper we discuss how the use of a permanent, generalization-based memory can serve as an important tool in developing programs that learn in rich input domains. The use of Generalization-Based Memory (GBM) allows programs to determine what concepts to learn, as well as definitions of the concepts. We present in this paper a characterization of our research, describe our use of Generalization-Based Memory in two programs under development at Columbia, UNIMEM and RESEARCHER, and describe how they perform concept evaluation and generalization of complex structural descriptions, problems typical of those we are concerned with
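The core GBM idea can be sketched minimally: when a new instance shares enough features with a stored one, the shared part is promoted into a generalization that indexes both instances. This is hypothetical code, not UNIMEM or RESEARCHER; the class, threshold, and feature format are invented for illustration.

```python
# Minimal sketch of Generalization-Based Memory (hypothetical, not the
# actual UNIMEM/RESEARCHER implementation): shared features of instances
# are promoted into generalizations that organize memory.

def generalize(a, b):
    """Keep only the features two instances share."""
    return {f: v for f, v in a.items() if b.get(f) == v}

class GBMemory:
    def __init__(self):
        self.generalizations = []   # list of (shared-features, instances)
        self.instances = []

    def add(self, instance, min_shared=2):
        """Compare the new instance with stored ones; if enough features
        match, record a generalization covering both."""
        for old in self.instances:
            shared = generalize(old, instance)
            if len(shared) >= min_shared:
                self.generalizations.append((shared, [old, instance]))
        self.instances.append(instance)
```

Note that the program itself decides *what* concepts exist: generalizations emerge from regularities in the input rather than from a predefined target concept, which is the point the abstract emphasizes.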
"Abstract" Understanding: The Relation between Language and Memory
Natural language is primarily a tool of communication. This implies that whatever roles syntax, semantics, pragmatics, and world knowledge play in language understanding, the comprehension process must be driven by the need to understand the text or conversation, where understand means sufficiently relating the new information being conveyed to existing memories in order to remember the information and/or respond. The need for memory-driven text processing becomes especially clear during the construction of a computer system designed to read large numbers of texts and add them to a coherent memory. However, the recognition that information such as syntax and semantics plays only a subsidiary role in text processing does not make understanding more difficult; rather, it makes it possible. A current natural language processing project underway at Columbia involves the creation of a computer program, known as RESEARCHER, that will read large numbers of technical abstracts, such as patent abstracts, and build up a coherent memory based on the information obtained. This memory is then used in turn to help in the understanding process. RESEARCHER will use some of the same understanding principles as did IPP, a program that reads and remembers news stories [Lebowitz 80, Lebowitz 81]. One of the goals of RESEARCHER is to show that memory-based understanding techniques are as applicable to physical descriptions as to descriptions of events. In fact, due to the knowledge-intensive nature of technical descriptions, it is expected that the application of memory will be even more important in driving processing
An Experiment in Intelligent Information Systems: RESEARCHER
The development of very powerful intelligent information systems will require the use of many Artificial Intelligence techniques including some derived by studying human understanding methods. RESEARCHER is a prototype intelligent information system that reads, remembers, generalizes from and answers questions about complex technical texts, patent abstracts in particular. In this paper, we discuss three areas of current research involving RESEARCHER -- the generalization of hierarchically structured representations; the use of long-term memory in text processing, specifically in resolving ambiguity; and the tailoring of answers to questions to the level of expertise of different users. All of these areas are crucial for truly powerful information systems. We outline our methods and show examples of RESEARCHER processing various texts
Using Memory in Text Understanding
Text processing undoubtedly takes place at many levels simultaneously. In this paper, we discuss how the access of detailed long-term memory can be used in low-level text processing in the context of a computer system, RESEARCHER, that reads, generalizes, and remembers information from patent abstracts. We show specific points where memory can be applied during text processing, rather than just suggesting general principles. In particular, we focus on how linguistically ambiguous structures can be resolved using memory (and only using memory). A computer example of RESEARCHER applying our memory application principles (with a simulated detailed memory) is presented
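Memory-only disambiguation, as described above, can be illustrated with a toy sketch. The memory contents and function names here are invented: the point is only that candidate readings of an ambiguous word are checked against what long-term memory records for the active topic, with no separate linguistic preference rules.

```python
# Hypothetical sketch of memory-driven disambiguation (not RESEARCHER's
# code): the sense supported by the active memory context wins.
# The MEMORY contents below are invented for illustration.

MEMORY = {
    "disk-drive": {"head": "read-head", "arm": "actuator-arm"},
    "anatomy": {"head": "body-part", "arm": "limb"},
}

def resolve(word, senses, context_concept):
    """Prefer the candidate sense that matches the active memory context;
    otherwise fall back to the first (default) reading."""
    known = MEMORY.get(context_concept, {})
    for sense in senses:
        if known.get(word) == sense:
            return sense
    return senses[0]
```

With the topic set to `"disk-drive"`, "head" resolves to the mechanical reading; with an anatomical topic, the same word resolves differently, using only memory lookups.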
Representing Complex Events Simply
Complex events can often be treated as single units for purposes of cognitive processing. This paper presents a scheme for representing events that was used in the creation of the program IPP (the Integrated Partial Parser). This scheme, in effect, consists of rules for the creation of a set of primitive-like elements for a given domain. The specific structures needed to represent events in one domain, news stories about international terrorism, are presented
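The primitive-like event units described above can be sketched as simple frames with fixed role slots. The event type names and slots below are invented for illustration (IPP's actual structures for the terrorism domain differ); the sketch only shows a complex event being treated as a single unit with typed slots.

```python
# Illustrative event frames in the spirit of IPP's representation scheme
# (type names and slots are hypothetical): each primitive-like event type
# defines the role slots a story can fill.

EVENT_TYPES = {
    "EXTORT": ["actor", "victim", "demand"],
    "ATTACK-PERSON": ["actor", "victim", "weapon"],
}

def make_event(event_type, **roles):
    """Build an event as one unit, admitting only its type's slots."""
    allowed = EVENT_TYPES[event_type]
    unknown = set(roles) - set(allowed)
    if unknown:
        raise ValueError(f"slots {unknown} not valid for {event_type}")
    return {"type": event_type, **{s: roles.get(s) for s in allowed}}
```

Because each event is one structure, later processing (indexing, generalization, question answering) can manipulate a whole kidnapping or attack as a single cognitive unit rather than a chain of sub-actions.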
Creating Characters in a Story-Telling Universe
Extended story generation, i.e., the creation of continuing serials, presents difficult and interesting problems for Artificial Intelligence. We present here the first phase of the development of a program, UNIVERSE, that will ultimately tell extended stories. In particular, after describing our overall model of story telling, we present a method for creating universes of characters appropriate for extended story generation. This method concentrates on the need to keep story-telling universes consistent and coherent. We also describe the information that must be maintained for characters and interpersonal relationships, and the use of stereotypical information about people to help motivate trait values. The use of historical events for motivation is also described. Finally, we present an example of a character generated by UNIVERSE
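The use of stereotypes to motivate trait values can be sketched minimally. The stereotype name, trait scales, and data layout below are all invented (UNIVERSE's actual representation is richer): a character inherits default traits from a stereotype and is then individualized with overrides, which keeps the universe coherent while allowing variation.

```python
# Hypothetical sketch of stereotype-driven character creation, loosely
# after the UNIVERSE description above; the stereotype and trait values
# here are invented for illustration.

STEREOTYPES = {
    "soap-opera-doctor": {"intelligence": 8, "wealth": 7, "guile": 4},
}

def create_character(name, stereotype, **overrides):
    """Start from the stereotype's default traits, then individualize."""
    traits = dict(STEREOTYPES[stereotype])
    traits.update(overrides)   # per-character variation on the defaults
    return {"name": name, "stereotype": stereotype, "traits": traits}
```

Defaults keep new characters plausible within the story universe; overrides supply the individual quirks a serial needs for plot.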