What to Read: A Biased Guide to AI Literacy for the Beginner
Acknowledgements. It was Ken Forbus' idea, and he, Howie Shrobe, Dan Weld, and John Batali read various drafts. Dan Huttenlocher and Tom Knight helped with the speech recognition section. The science fiction section was prepared with the aid of my SF/AI editorial board, consisting of Carl Feynman and David Wallace, and of the ArpaNet SF-Lovers community. Even so, all responsibility rests with me.
This note tries to provide a quick guide to AI literacy for the beginning AI hacker and for the experienced AI hacker or two whose scholarship isn't what it should be. Most will recognize it as the same old list of classic papers, give or take a few that I feel to be under- or over-rated. It is not guaranteed to be thorough or balanced or anything like that.
MIT Artificial Intelligence Laboratory
Anaphora and Discourse Structure
We argue in this paper that many common adverbial phrases generally taken to
signal a discourse relation between syntactically connected units within
discourse structure, instead work anaphorically to contribute relational
meaning, with only indirect dependence on discourse structure. This allows a
simpler discourse structure to provide scaffolding for compositional semantics,
and reveals multiple ways in which the relational meaning conveyed by adverbial
connectives can interact with that associated with discourse structure. We
conclude by sketching out a lexicalised grammar for discourse that facilitates
discourse interpretation as a product of compositional rules, anaphor
resolution and inference.
Comment: 45 pages, 17 figures. Revised resubmission to Computational Linguistics
Inference processing and error recovery in sentence understanding
Solving the mysteries of human language understanding inevitably requires an answer to the question of how the language understander resolves ambiguity, for human language is certainly ambiguous. But ambiguity leads to choices between possible explanations, and choice opens the door for mistakes. Unless we are willing to believe that the human language understander always makes the correct choice, any explanation of ambiguity resolution must be considered incomplete if it does not also account for recovery from an incorrect decision.
This dissertation describes a new approach to lexical ambiguity resolution during sentence understanding which is implemented in a program called ATLAST. Many computational models of natural language understanding have dealt with lexical ambiguity resolution, but ATLAST is one of the few models to address the associated problem of error recovery. ATLAST's ability to recover from an incorrect lexical inference decision stems from its ability to retain unchosen word meanings for a period of time after it selects the apparently context-appropriate meaning of an ambiguous word. The short-term retention of possible lexical inferences permits ATLAST to recover from incorrect decisions without backtracking and reprocessing text, and without keeping a record of possible choices indefinitely.
The principle of retention provides a solution to the problem of error recovery which is compatible with current psycholinguistic theories of lexical disambiguation. Furthermore, the existence of some form of retention in lexical disambiguation is supported by the results of experiments with human subjects. This dissertation includes a discussion of these results and speculation on how the principle of retention might be extended to account for recovery from erroneous higher-level inference decisions.
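The retention principle described in the abstract can be illustrated with a small sketch. This is a hypothetical toy model under assumed data structures (a lexicon mapping words to sense/cue sets, a fixed retention window), not ATLAST's actual implementation: rejected senses survive for a short window of subsequent words, so a contradicted choice can be swapped without reprocessing the text.

```python
RETENTION_WINDOW = 3  # words for which a rejected sense stays recoverable (assumed)

class Disambiguator:
    """Toy retention-based lexical disambiguation; illustrative only."""

    def __init__(self, lexicon):
        # lexicon: word -> dict mapping sense name -> set of context cues
        self.lexicon = lexicon
        self.retained = []   # (word_index, sense, cues, ttl)
        self.chosen = {}     # word_index -> (sense, cues)

    def process(self, index, word, context):
        senses = self.lexicon[word]
        # Pick the sense whose cues best overlap the current context.
        ranked = sorted(senses, key=lambda s: len(senses[s] & context), reverse=True)
        best = ranked[0]
        self.chosen[index] = (best, senses[best])
        # Retain the runners-up for a short window instead of discarding them.
        for s in ranked[1:]:
            self.retained.append((index, s, senses[s], RETENTION_WINDOW))
        # Decay: retained senses expire after RETENTION_WINDOW further words.
        self.retained = [(i, s, c, t - 1) for (i, s, c, t) in self.retained if t > 1]
        return best

    def recover(self, index, context):
        """If later context contradicts an earlier choice, swap in a retained
        sense without backtracking. Returns the new sense, or None if the
        alternative has already expired."""
        _, cur_cues = self.chosen[index]
        for (i, s, cues, _) in self.retained:
            if i == index and len(cues & context) > len(cur_cues & context):
                self.chosen[index] = (s, cues)
                return s
        return None
```

For example, "bank" initially resolved as the financial sense can be repaired to the river sense when later words supply water-related cues, provided the alternative has not yet aged out of the retention buffer.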
An investigation into the theoretical foundations of library cataloguing and a critical analysis of the cataloguing of the South African national bibliography, 1981-1983
Includes bibliographical references.
This thesis proposes that the foundations of the library catalogue are not rooted in a coherent, encompassing and comprehensive theoretical structure. Instead, it shows that the catalogue rests upon a number of principles that evolved during the nineteenth century from the work done by cataloguing experts such as Panizzi, Jewett and Cutter. These principles are shown to be either principles of access or of bibliographical description, and they still form the basis for the construction of modern catalogues according to the Anglo-American Cataloguing Rules, 2nd edition (AACR2). The South African National Bibliography (SANB) is then used as an example of an actual catalogue constructed according to the AACR2. A study is conducted of the cataloguing records in the SANB in order to establish how these Rules are put into practice, and how usable a catalogue may be produced according to these Rules and principles. It is concluded that the SANB is a high-quality catalogue according to the standards set by the AACR2, but that such a catalogue may not be optimally useful from the point of view of the user. Certain ideas from Artificial Intelligence are then employed to find out to what extent a user is able to utilize the library catalogue as a channel of communication in order to gain maximum benefit from the information available in the catalogue. It is found that the user is indeed not equipped to make full use of the catalogue, and it is suggested that the potential for increased access facilities brought about by computer technology may be employed to bridge the communication gap between the user and the cataloguer. The thesis therefore concludes that the established principles according to which catalogues are constructed are inadequate for the formulation of a comprehensive theory of cataloguing, but a search for such a theory is shown to be ultimately inappropriate.
Cataloguing is essentially a problem-solving pursuit which aims at the production of a tangible object: a usable catalogue. Modern computer technology has brought the library catalogue to a crossroads in its development, and a detailed study of user needs will have to form the basis for the development of additional principles according to which the new technology will most successfully be applied to library catalogues
Anaphora and Discourse Semantics
We argue in this paper that many common adverbial phrases generally taken to be discourse connectives signalling discourse relations between adjacent discourse units are instead anaphors. We do this by (i) demonstrating their behavioral similarity with more common anaphors (pronouns and definite NPs); (ii) presenting a general framework for understanding anaphora into which they nicely fit; (iii) showing the interpretational benefits of understanding discourse adverbials as anaphors; and (iv) sketching out a lexicalised grammar that facilitates discourse interpretation as a product of compositional rules, anaphor resolution and inference
How sketches work: a cognitive theory for improved system design
Evidence is presented that in the early stages of design or composition the
mental processes used by artists for visual invention require a different type of
support from those used for visualising a nearly complete object. Most research
into machine visualisation has as its goal the production of realistic images which
simulate the light pattern presented to the retina by real objects. In contrast, sketch
attributes preserve the results of cognitive processing which can be used
interactively to amplify visual thought. The traditional attributes of sketches
include many types of indeterminacy which may reflect the artist's need to be
"vague".
Drawing on contemporary theories of visual cognition and neuroscience, this
study discusses in detail the evidence for the following functions which are better
served by rough sketches than by the very realistic imagery favoured in machine
visualising systems.
1. Sketches are intermediate representational types which facilitate the
mental translation between descriptive and depictive modes of representing visual
thought.
2. Sketch attributes exploit automatic processes of perceptual retrieval and
object recognition to improve the availability of tacit knowledge for visual
invention.
3. Sketches are percept-image hybrids. The incomplete physical attributes
of sketches elicit and stabilise a stream of super-imposed mental images which
amplify inventive thought.
4. By segregating and isolating meaningful components of visual
experience, sketches may assist the user to attend selectively to a limited part of a
visual task, freeing otherwise over-loaded cognitive resources for visual thought.
5. Sequences of sketches and sketching acts support the short term episodic
memory for cognitive actions. This assists creativity, providing voluntary control
over highly practised mental processes which can otherwise become stereotyped.
An attempt is made to unite the five hypothetical functions. Drawing on the
Baddeley and Hitch model of working memory, it is speculated that the five
functions may be related to a limited capacity monitoring mechanism which makes
tacit visual knowledge explicitly available for conscious control and manipulation.
It is suggested that the resources available to the human brain for imagining nonexistent
objects are a cultural adaptation of visual mechanisms which evolved in
early hominids for responding to confusing or incomplete stimuli from immediately
present objects and events. Sketches are cultural inventions which artificially
mimic aspects of such stimuli in order to capture these shared resources for the
different purpose of imagining objects which do not yet exist.
Finally, the implications of the theory for the design of improved machine
systems are discussed. The untidy attributes of traditional sketches are revealed to
include cultural inventions which serve subtle cognitive functions. However,
traditional media have many shortcomings which it should be possible to correct
with new technology. Existing machine systems for sketching tend to imitate
non-selectively the media-bound properties of sketches without regard to the functions
they serve. This may prove to be a mistake. It is concluded that new system
designs are needed in which meaningfully structured data and specialised imagery
amplify without interference or replacement the impressive but limited creative
resources of the visual brain
Planning multisentential English text using communicative acts
The goal of this research is to develop explanation presentation mechanisms for knowledge based
systems which enable them to define domain terminology and concepts, narrate events, elucidate plans,
processes, or propositions and argue to support a claim or advocate action. This requires the development
of devices which select, structure, order and then linguistically realize explanation content as coherent and
cohesive English text.
With the goal of identifying generic explanation presentation strategies, a wide range of naturally
occurring texts were analyzed with respect to their communicative structure, function, content and intended
effects on the reader. This motivated an integrated theory of communicative acts which characterizes text at
the level of rhetorical acts (e.g., describe, define, narrate), illocutionary acts (e.g., inform, request), and
locutionary acts (e.g., ask, command). Taken as a whole, the identified communicative acts characterize
the structure, content and intended effects of four types of text: description, narration, exposition,
argument. These text types have distinct effects such as getting the reader to know about entities, to know
about events, to understand plans, processes, or propositions, or to believe propositions or want to
perform actions. In addition to identifying the communicative function and effect of text at multiple levels
of abstraction, this dissertation details a tripartite theory of focus of attention (discourse focus, temporal
focus, and spatial focus) which constrains the planning and linguistic realization of text.
To test the integrated theory of communicative acts and tripartite theory of focus of attention, a text
generation system TEXPLAN (Textual EXplanation PLANner) was implemented that plans and
linguistically realizes multisentential and multiparagraph explanations from knowledge based systems. The
communicative acts identified during text analysis were formalized as over sixty compositional and (in
some cases) recursive plan operators in the library of a hierarchical planner. Discourse, temporal, and
spatial focus models were implemented to track and use attentional information to guide the organization
and realization of text. Because the plan operators distinguish between the communicative function (e.g.,
argue for a proposition) and the expected effect (e.g., the reader believes the proposition) of communicative
acts, the system is able to construct a discourse model of the structure and function of its textual responses
as well as a user model of the expected effects of its responses on the reader's knowledge, beliefs, and
desires. The system uses both the discourse model and user model to guide subsequent utterances. To test
its generality, the system was interfaced to a variety of domain applications including a neuropsychological
diagnosis system, a mission planning system, and a knowledge based mission simulator. The system
produces descriptions, narrations, expositions, and arguments from these applications, thus exhibiting a
broader range of rhetorical coverage than previous text generation systems
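The abstract's distinction between an operator's communicative function and its expected effect on the reader can be sketched schematically. The names and structure below are illustrative assumptions, not TEXPLAN's actual plan-operator library: each operator records both what it does rhetorically and what the reader is expected to know or believe afterward, and operators decompose hierarchically into sub-acts.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a communicative-act plan operator. Separating
# `function` (what the act does) from `effect` (the expected change in the
# reader's state) lets a planner maintain both a discourse model and a
# user model, as the abstract describes.

@dataclass
class PlanOperator:
    name: str
    function: str                                 # rhetorical role of the act
    effect: str                                   # expected effect on the reader
    subacts: list = field(default_factory=list)   # decomposition (may recurse)

def expand(op):
    """Hierarchically expand an operator down to its leaf acts."""
    if not op.subacts:
        return [op]
    leaves = []
    for sub in op.subacts:
        leaves.extend(expand(sub))
    return leaves

inform = PlanOperator("inform",
                      function="assert a proposition",
                      effect="reader knows the proposition")
argue = PlanOperator("argue",
                     function="argue for a claim",
                     effect="reader believes the claim",
                     subacts=[inform,
                              PlanOperator("support",
                                           function="give evidence",
                                           effect="reader accepts the evidence",
                                           subacts=[inform])])

# The planner can read the user model straight off the expansion: the
# accumulated expected effects of the leaf acts it will realize as text.
user_model = [leaf.effect for leaf in expand(argue)]
```

The design point is that the same expansion serves two bookkeeping roles: the tree of acts is the discourse model of the text's structure, while the collected `effect` fields form the user model that guides subsequent utterances.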
The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind
The book presented deep and important connections between Artificial Intelligence and Philosophy, based partly on an argument that both philosophy and science are primarily concerned with identifying and explaining possibilities, contrary to a common view that science is primarily concerned with laws. The book attempted to show in principle how the construction, testing and debugging of complex computational models, explaining possibilities in a new way, can illuminate a collection of deep philosophical problems, e.g. about the nature of mind, the nature of representation, and the nature of mathematical discovery. However, it did not claim that this could be done easily or that the problems would be solved soon. Forty years later, many of them have still not been solved, including explaining how biological brains made possible the deep mathematical discoveries made millennia ago, long before the development of modern logic, which is often wrongly assumed to provide foundations for all of mathematics. Later work on these ideas includes the author's Meta-Morphogenesis project, inspired by Alan Turing's work: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Originally published in 1978 by Harvester Press, the book went out of print. An electronic version was created from a scanned copy and then extensively edited with corrections, additional text, and notes; it is available online as HTML or PDF and is intermittently updated. This PDF version was created on 2019/07/09
Definiteness across languages
Definiteness has been a central topic in theoretical semantics since its modern foundation. However, despite its significance, there has been surprisingly scarce research on its cross-linguistic expression. With the purpose of contributing to filling this gap, the present volume gathers thirteen studies exploiting insights from formal semantics and syntax, typological and language-specific studies, and, crucially, semantic fieldwork and cross-linguistic semantics, in order to address the expression and interpretation of definiteness in a diverse group of languages, most of them understudied. The papers presented in this volume aim to establish a dialogue between theory and data in order to answer the following questions: What formal strategies do natural languages employ to encode definiteness? What are the possible meanings associated with this notion across languages? Are there different types of definite reference? Which other functions (besides marking definite reference) are associated with definite descriptions? Each of the papers contained in this volume addresses at least one of these questions and, in doing so, they aim to enrich our understanding of definiteness