A Note on Ontology and Ordinary Language
We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.
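The abstract gives no formalism, but the core idea can be illustrated with a small sketch (all class and verb names below are hypothetical, not taken from the paper): a strongly typed ontology lets the semantics rule out ill-typed predications compositionally, with no extra interpretive machinery.

```python
# Hypothetical sketch: a strongly typed commonsense ontology in which
# selectional constraints on a verb decide well-typedness of a predication.

class Entity: pass
class Physical(Entity): pass
class Animal(Physical): pass
class Human(Animal): pass
class Artifact(Physical): pass

def admissible(verb_constraints, args):
    """A predication is well-typed only if each argument is an
    instance of the type the verb expects in that position."""
    return all(isinstance(a, t) for a, t in zip(args, verb_constraints))

# Assume 'read' expects a Human agent and an Artifact theme.
READ = (Human, Artifact)

john, novel = Human(), Artifact()
ok = admissible(READ, (john, novel))    # type-correct predication
bad = admissible(READ, (novel, john))   # ruled out by the ontology
```

On this toy reading, disambiguation reduces to a type check against the ontology, which is the sense in which the semantics "becomes nearly trivial".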
Some Ontological Principles for Designing Upper Level Lexical Resources
The purpose of this paper is to explore some semantic problems related to the use of linguistic ontologies in information systems, and to suggest some organizing principles aimed at solving these problems. The taxonomic structure of current ontologies is unfortunately quite complicated and hard to understand, especially at the upper levels. I will focus here on the problem of ISA overloading, which I believe is primarily responsible for these difficulties. To this purpose, I will carefully analyze the ontological nature of the categories used in current upper-level structures, considering the necessity of splitting them according to more subtle distinctions, or the opportunity of excluding them because of their limited organizational role.
Informaticology: combining Computer Science, Data Science, and Fiction Science
Motivated by an intention to remedy current complications with Dutch
terminology concerning informatics, the term informaticology is positioned to
denote an academic counterpart of informatics where informatics is conceived of
as a container for a coherent family of practical disciplines ranging from
computer engineering and software engineering to network technology, data
center management, information technology, and information management in a
broad sense.
Informaticology escapes from the limitations of instrumental objectives and
the perspective of usage that both restrict the scope of informatics. That is
achieved by including fiction science in informaticology and by ranking fiction
science on equal terms with computer science and data science, and framing (the
study of) game design, development, assessment and distribution, ranging from
serious gaming to entertainment gaming, as a chapter of fiction science. A
suggestion for the scope of fiction science is specified in some detail.
In order to illustrate the coherence of informaticology thus conceived, a
potential application of fiction to the ontology of instruction sequences and
to software quality assessment is sketched, thereby highlighting a possible
role of fiction (science) within informaticology but outside gaming.
A Proposal to Develop Interactive Classification Technology
Research for the first year was oriented towards: 1) the design of an interactive classification tool (ICT); and 2) the development of an appropriate theory of inference for use in ICT technology. The general objective was to develop a theory of classification that could accommodate a diverse array of objects, including events and their constituent objects. Throughout this report, the term "object" is to be interpreted in a broad sense to cover any kind of object, including living beings, non-living physical things, events, even ideas and concepts. The idea was to produce a theory that could serve as the uniting fabric of a base technology capable of being implemented in a variety of automated systems. The decision was made to employ two technologies under development by the principal investigator, namely, SMS (Symbolic Manipulation System) and SL (Symbolic Language) [see deBessonet, 1991, for detailed descriptions of SMS and SL]. The plan was to enhance and modify these technologies for use in an ICT environment. As a means of giving focus and direction to the proposed research, the investigators decided to design an interactive, classificatory tool for use in building accessible knowledge bases for selected domains. Accordingly, the proposed research was divisible into tasks that included: 1) the design of technology for classifying domain objects and for building knowledge bases automatically from the results; 2) the development of a scheme of inference capable of drawing upon previously processed classificatory schemes and knowledge bases; and 3) the design of a query/search module for accessing the knowledge bases built by the inclusive system.
The interactive tool for classifying domain objects was to be designed initially for textual corpora, with a view to having the technology eventually be used in robots to build sentential knowledge bases that would be supported by inference engines specially designed for the natural or man-made environments in which the robots would be called upon to operate.
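The report describes no implementation details, but the pipeline it outlines (classify objects under a taxonomy, record the results as a knowledge base, answer queries by inference) can be sketched minimally; all names below are invented for illustration.

```python
# Hypothetical sketch of a classification-to-knowledge-base pipeline:
# objects are classified under a taxonomy, and queries are answered by
# inheriting properties up the taxonomy from the object's class.

taxonomy = {"robin": "bird", "bird": "animal", "hammer": "artifact"}
properties = {"bird": {"has_wings"}, "animal": {"is_living"}}

kb = {}  # object -> most specific class assigned during classification

def classify(obj, cls):
    kb[obj] = cls

def query(obj, prop):
    """Walk up the taxonomy from the object's class, checking each
    ancestor's properties until the property is found or the root."""
    cls = kb.get(obj)
    while cls is not None:
        if prop in properties.get(cls, set()):
            return True
        cls = taxonomy.get(cls)
    return False

classify("tweety", "robin")
```

The inference scheme here is bare inheritance; the report's planned inference engines would of course go well beyond this, but the division of labor (classifier builds the KB, query module reads it) is the same.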
Meinongian Semantics and Artificial Intelligence
This essay describes computational semantic networks for a philosophical audience and surveys several approaches to semantic-network semantics. In particular, propositional semantic networks (exemplified by SNePS) are discussed; it is argued that only a fully intensional, Meinongian semantics is appropriate for them; and several Meinongian systems are presented.
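The distinctive feature of a propositional network can be shown in a few lines (a rough sketch in the spirit of SNePS, not its actual data structures): propositions are themselves nodes, so the network can represent a proposition without asserting it, which is what an intensional reading requires.

```python
# Hypothetical sketch of a propositional semantic network: every
# proposition is a node, and belief (assertion) is separate from
# mere representation, allowing unasserted embedded propositions.

import itertools

node_ids = itertools.count(1)
nodes = {}  # node id -> (relation, argument tuple)

def mk_prop(relation, *args):
    """Create a proposition node; args may be terms or other node ids."""
    nid = next(node_ids)
    nodes[nid] = (relation, args)
    return nid

asserted = set()  # only these proposition nodes are believed

# "John believes that the morning star is the evening star":
# the inner identity is represented but not asserted.
inner = mk_prop("equiv", "morning-star", "evening-star")
belief = mk_prop("believes", "John", inner)
asserted.add(belief)
```

Because `inner` exists as a node without being in `asserted`, the system can reason about John's belief without itself endorsing the identity, the hallmark of an intensional representation.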
Automating Abell's theory of comparative narratives
The purpose of this thesis is to demonstrate the progress that has been made towards the goal of producing a prototype computer model of Abell's Theory of Comparative Narratives, and subsequently, designing metrics to rigorously measure Abell's concept of 'closeness' of texts.
The production of such a model does not simply involve the mechanical (though distinctly non-trivial) transference of Abell's theory from paper to machine; various facets of the theory are not of a sufficiently high specification for a computer model and the fulfilment of such a computer model requires attention to these areas, specifically:
i) a repeatable method of comparing the structures of individual events;
ii) a consistent procedure for comparing the overall structure of a pair of texts, following on from Abell's basic concept of paths of social determination; and
iii) metrics to demonstrate that the solutions proposed do indeed address the shortcomings of Abell's theory.
In order to preserve the qualitative nature of the theory and to demonstrate its potential real-world uses, the computer model attempts to avoid complex mathematics as far as possible and to produce transparent results intelligible to non-experts.
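The thesis's notion of 'closeness' between narratives could take many forms; one deliberately simple, transparent measure in the spirit of the stated goals (the representation and example data below are invented, not Abell's) is set overlap between the two narratives' determination links.

```python
# Hypothetical sketch of a transparent 'closeness' measure: each narrative
# is a set of directed determination links (cause -> effect) between
# typed actions, and closeness is the Jaccard overlap of the two sets.

def closeness(links_a, links_b):
    """Jaccard overlap of two narratives' determination links, in [0, 1]."""
    a, b = set(links_a), set(links_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two toy narratives sharing one determination link.
strike_a = {("grievance", "strike"), ("strike", "lockout")}
strike_b = {("grievance", "strike"), ("strike", "negotiation")}
```

A measure like this needs no mathematics beyond counting shared links, which matches the abstract's aim of avoiding complex mathematics in favor of results a non-expert can inspect.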
Defaults Denied
We take a tour of various themes in default reasoning, examining new ideas
as well as those of Brachman, Delgrande, Poole, and Schlechta. An
underlying issue is that of stating that a potential default principle is
not appropriate. We see this arise most dramatically as a problem in an
attempt to formalize what are often loosely called "prototypes", although
it also arises in other formal approaches to default reasoning. Some
formalisms in the literature provide solutions but not without costs. We
propose a formalism that appears to avoid these costs; it can be seen as a
step toward a population-based set-theoretic modification of these
approaches, that may ultimately provide a closer tie to recent work on
statistical (quantitative) foundations of (qualitative) defaults ([1]).
Our analysis in particular indicates the need to resolve a conflation
between use and mention in many default formalisms. Our treatment proposes
such a resolution, and also explores the use of sets toward a more
population-based notion of default.
(Also cross-referenced as UMIACS-TR-96-61)
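The population-based, set-theoretic reading of defaults that the abstract points toward can be sketched in miniature (the populations and predicate below are the stock birds-fly example, not the paper's formalism): a default is read as a claim about most of a population, and a more specific population overrides it without contradiction.

```python
# Hypothetical sketch of population-based defaults: each default is
# attached to a population (a set of individuals), and the default of the
# most specific population containing an individual wins.

populations = {
    "bird":    {"tweety", "polly", "pingu"},
    "penguin": {"pingu"},
}
flies_default = {"bird": True, "penguin": False}

def flies(individual):
    """Apply the default of the most specific applicable population,
    taking the smaller population as the more specific one."""
    applicable = [p for p, members in populations.items()
                  if individual in members]
    most_specific = min(applicable, key=lambda p: len(populations[p]))
    return flies_default[most_specific]
```

Stating that a default does *not* apply, the "defaults denied" problem, is here just a `False` entry for the more specific population, rather than a separate formal device.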