Ideas About Management of LISP Data Bases
Work reported herein was conducted partly at Uppsala University, Sweden, with support from the Swedish Board of Technical Development, and partly at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005. The trend toward larger data bases in A.I. programs makes it desirable to provide program support for the activity of building and maintaining LISP data bases. Many techniques can be drawn from present and proposed systems for supporting program maintenance, but there are also a variety of additional problems and possibilities. Most importantly, a system for supporting data base development needs a formal description of the user's data base. The description must at least partly be contributed by the user. The paper discusses the operation of such a support system, and describes some ideas that have been useful in a prototype system. MIT Artificial Intelligence Laboratory.
Using agent based simulation to empirically examine complexity in carbon footprint business process
Through critical analysis of the extant literature, it is observed that simulation is widely used as a research method in the natural sciences, engineering, and the social sciences, standing alongside argumentation and formalisation as a third way of carrying out research. Simulation is not as widely used in business and management research as it ought to be, though this is changing for the better with advances in computers and their computational power. These advances enhance the capability of theoretical research models, both in defining a problem and in empirically examining a solution to it in simulated reality, like never before. Management journal searches for "Simulation and Complexity Theory" returned no results, which indicates that this combination is not popular in management research, although each is used individually more often. The major objective of this paper is to analyse some of the conceptual (or theoretical) and methodological (or empirical) contributions that Agent Based Simulation and Complexity Theory can make to the business and management community in their business-process-related research. In view of this, some basic ideas are discussed for using Agent Based Simulation as a method in business and management studies research, and for applying an Agent Based Model to a business process as complex as the Carbon Footprint. It is in this context that the use of Complexity as the base theory for empirically examining a business process is discussed. Throughout this article, our research on complex adaptive systems (e.g., an Accounting Information System) in continuously changing organisations managing complex business processes (e.g., the Carbon Footprint business process) is used as the basis for illustrating some of the concepts. Avenues for further management research using these tools and this methodology are suggested.
Some Notes on the Past and Future of Lisp-Stat
Lisp-Stat was originally developed as a framework for experimenting with dynamic graphics in statistics. To support this use, it evolved into a platform for more general statistical computing. The choice of the Lisp language as the basis of the system was in part coincidence and in part a very deliberate decision. This paper describes the background behind the choice of Lisp, as well as the advantages and disadvantages of this choice. The paper then discusses some lessons that can be drawn from experience with Lisp-Stat and with the R language to guide future development of Lisp-Stat, R, and similar systems.
Kate's Model Verification Tools
Kennedy Space Center's Knowledge-based Autonomous Test Engineer (KATE) is capable of monitoring electromechanical systems, diagnosing their errors, and even repairing them when they crash. A survey of KATE's developer/modelers revealed that they were already using a sophisticated set of productivity-enhancing tools. They did request five more, however, and those make up the body of the information presented here: (1) a transfer function code fitter; (2) a FORTRAN-Lisp translator; (3) three existing structural consistency checkers to aid in syntax checking their modeled device frames; (4) an automated procedure for calibrating knowledge base admittances to protect KATE's hardware mockups from inadvertent hand valve twiddling; and (5) three alternatives for the 'pseudo object', a programming patch that currently apprises KATE's modeling devices of their operational environments.
Conference on Automated Decision-Making and Problem Solving, the Third Day: Issues Discussed
A conference held at Langley Research Center in May of 1980 brought together university experts from the fields of Control Theory, Operations Research, and Artificial Intelligence to explore current research in automation, both from the perspective of their own particular disciplines and from that of interdisciplinary considerations. Informal discussions from the final day of the three-day conference are summarized.
Evaluating Knowledge Representation and Reasoning Capabilities of Ontology Specification Languages
The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents operating on it are the main reasons for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc.) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as to allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages. As a result of this study, we conclude that different needs in knowledge representation (KR) and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s).
An analysis of the application of AI to the development of intelligent aids for flight crew tasks
This report presents the results of a study aimed at developing a basis for applying artificial intelligence to the flight deck environment of commercial transport aircraft. In particular, the study comprised four tasks: (1) analysis of flight crew tasks, (2) survey of the state of the art of relevant artificial intelligence areas, (3) identification of human factors issues relevant to intelligent cockpit aids, and (4) identification of artificial intelligence areas requiring further research.
Using C to build a satellite scheduling expert system: Examples from the Explorer Platform planning system
A C-based artificial intelligence (AI) development effort based on a software tools approach is discussed, with emphasis on reusability and maintainability of code. The discussion starts with simple examples of how list processing can easily be implemented in C and then proceeds to implementations of frames and objects which use dynamic memory allocation. The implementation of procedures which use depth-first search, constraint propagation, context switching, and a blackboard-like simulation environment is described. Techniques for managing the complexity of C-based AI software are noted, especially the object-oriented techniques of data encapsulation and incremental development. Finally, all these concepts are put together by describing the components of planning software called the Planning And Resource Reasoning (PARR) Shell. This shell has been successfully utilized since May 1987 for scheduling services of the Tracking and Data Relay Satellite System for the Earth Radiation Budget Satellite, and will be used for operations scheduling of the Explorer Platform in November 1991.
Toward an expert project management system
The purpose of the research effort is to prescribe a generic reusable shell that any project office can install and customize for the purposes of advising, guiding, and supporting project managers in that office. The prescribed shell is intended to provide two components: one that generates prescriptive guidance for project planning and monitoring activities, and an analogy (intuition) component that generates descriptive insights from the previous experience of successful project managers. The latter component is especially significant in that it has the potential to retrieve insights, not just data, and to provide a vehicle for expert PMs to easily transcribe their current experiences in the course of each new project managed.