291 research outputs found

    Concurrent object-oriented execution of OPS5 production systems


    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of an autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress on parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and the mapping of expert systems onto multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are that (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. To obtain greater speedups, data parallelism and application parallelism must be exploited.
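
    As a concrete illustration of the data parallelism referred to above, the sketch below (hypothetical rules and working-memory elements, not taken from any of the surveyed systems) partitions the match phase of a production system across working-memory elements, so the same condition test runs on many elements concurrently:

```python
# Minimal sketch of data-parallel condition matching in a production system.
# The rules, working-memory elements, and predicate below are hypothetical;
# they only illustrate partitioning the match phase by data.
from multiprocessing import Pool

# Working memory: each element is a (class, attribute dict) pair.
WORKING_MEMORY = [
    ("temperature", {"subsystem": "thermal", "value": 310}),
    ("temperature", {"subsystem": "thermal", "value": 295}),
    ("heading",     {"subsystem": "rover",   "value": 42}),
]

def over_limit(attrs):
    return attrs.get("value", 0) > 300

# A "condition" here is just a class name plus a predicate on the attributes.
CONDITIONS = [("temperature", over_limit)]

def match_element(element):
    """Match one working-memory element against every rule condition."""
    cls, attrs = element
    return [(cls, attrs) for cond_cls, pred in CONDITIONS
            if cls == cond_cls and pred(attrs)]

if __name__ == "__main__":
    # Data parallelism: identical match code runs on each element concurrently.
    with Pool(processes=4) as pool:
        matches = [m for hits in pool.map(match_element, WORKING_MEMORY) for m in hits]
    print(matches)  # -> [('temperature', {'subsystem': 'thermal', 'value': 310})]
```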

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of an autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for large expert systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress on parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of the parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. To obtain greater speedups, data parallelism and application parallelism must be exploited.

    The nature and evaluation of commercial expert system building tools, revision 1

    This memorandum reviews the factors that constitute an Expert System Building Tool (ESBT) and evaluates current tools in terms of these factors. The evaluation is based on the tools' structure and their alternative forms of knowledge representation, inference mechanisms, and developer and end-user interfaces. Next, functional capabilities, such as diagnosis and design, are related to alternative forms of mechanization. The characteristics and capabilities of existing commercial tools are then reviewed in terms of these criteria.

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).
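
    A rough software analogy for the "smart memory" idea discussed above is sketched below; it only contrasts pulling values one at a time through a single processor with applying the same operation across all cells at once, and it is not a model of the Connection Machine, DADO, or the Butterfly:

```python
# Rough analogy only: the von Neumann style streams each value through one
# processor, while the data-parallel style applies one operation across all
# "memory cells" at once (vectorized here; per-cell PEs on SIMD hardware).
import numpy as np

cells = np.random.rand(1_000_000)   # one value per "memory cell"

# von Neumann style: every value crosses the memory-processor bottleneck in turn.
total_sequential = 0.0
for value in cells:
    total_sequential += value * value

# Data-parallel style: the same multiply-accumulate over all cells at once.
total_parallel = float(np.dot(cells, cells))

assert abs(total_sequential - total_parallel) < 1e-6 * total_parallel
```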

    The WorkPlace distributed processing environment

    Real-time control problems require robust, high-performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface that automates message routing, failure detection, and reconfiguration in response to failures in distributed systems. This paper describes the design and use of WorkPlace and its application in the construction of a distributed blackboard system.
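
    The failure-detection behaviour described above can be sketched with a generic heartbeat pattern; the node names, timeout value, and reconfiguration step below are illustrative assumptions, not WorkPlace's actual interface:

```python
# Minimal sketch of heartbeat-based failure detection of the kind a network
# layer like WorkPlace might perform. Node names, the timeout, and the
# reconfiguration step are illustrative assumptions, not WorkPlace's API.
import time

HEARTBEAT_TIMEOUT = 3.0   # seconds of silence before a node is considered failed

class FailureDetector:
    def __init__(self, nodes):
        now = time.monotonic()
        self.last_seen = {node: now for node in nodes}

    def heartbeat(self, node):
        """Record that a message (or explicit heartbeat) arrived from `node`."""
        self.last_seen[node] = time.monotonic()

    def failed_nodes(self):
        """Return nodes that have been silent longer than the timeout."""
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

    def reconfigure(self):
        """Drop failed nodes so messages are re-routed to surviving ones."""
        for node in self.failed_nodes():
            del self.last_seen[node]
        return list(self.last_seen)   # the surviving configuration

detector = FailureDetector(["planner", "scheduler", "blackboard"])
detector.heartbeat("planner")
# After HEARTBEAT_TIMEOUT seconds of silence from "scheduler", failed_nodes()
# would report it and reconfigure() would route around it.
```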

    A knowledge base architecture for distributed knowledge agents

    A tuple-space-based, object-oriented model for knowledge base representation and interpretation is presented, and an architecture for managing distributed knowledge agents is implemented within the model. The general model is based upon a database implementation of a tuple space; objects are defined as an additional layer upon the database, and the tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple-space mechanisms, as in the LINDA model, as well as more familiar message-passing mechanisms. An implementation of the model is presented, describing the strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
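
    A minimal, single-process sketch of the LINDA-style tuple-space primitives mentioned above (out to write a tuple, rd to read a matching tuple, in to remove one) is given below; the paper's database-backed, distributed tuple space is not reproduced, and the example tuples are invented:

```python
# Minimal in-process sketch of LINDA-style tuple-space primitives (out/rd/in).
# The database-backed, distributed tuple space described in the abstract is
# not reproduced here; the knowledge-agent tuples are illustrative only.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Write a tuple into the space."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _find(self, pattern):
        # None in the pattern acts as a wildcard field.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                return tup
        return None

    def rd(self, pattern):
        """Read a matching tuple without removing it (blocks until one exists)."""
        with self._cond:
            while (tup := self._find(pattern)) is None:
                self._cond.wait()
            return tup

    def in_(self, pattern):
        """Remove and return a matching tuple (blocks until one exists)."""
        with self._cond:
            while (tup := self._find(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

space = TupleSpace()
space.out(("fact", "bus_voltage", 120))          # one agent asserts a fact
print(space.rd(("fact", "bus_voltage", None)))   # another agent reads it
```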

    An evaluation of Turbo Prolog with an emphasis on its application to the development of expert systems

    Turbo Prolog is a recently available, compiled version of the programming language Prolog. It is designed to provide not only a Prolog compiler but also a program development environment for the IBM Personal Computer family. An evaluation of Turbo Prolog was made, comparing its features to other versions of Prolog and to the languages commonly used in artificial intelligence (AI) research and development. Three programs were employed to determine the execution speed of Turbo Prolog on various problems. The results demonstrated that Turbo Prolog can perform much better than many commonly employed AI languages on numerically intensive problems and can equal the speed of development languages such as OPS5+ and CLIPS running on the IBM PC. Applications for which Turbo Prolog is best suited include those which (1) lend themselves naturally to backward-chaining approaches, (2) require extensive use of mathematics, (3) contain few rules, (4) seek to make use of the window/color graphics capabilities of the IBM PC, and (5) require linkage to programs in other languages to form a complete executable image.
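
    The backward-chaining style singled out above as a good fit for Turbo Prolog can be sketched outside Prolog as well; the toy facts and rules below are invented and far simpler than real Prolog resolution (no variables, no alternative clauses), but they show how a goal is reduced to subgoals:

```python
# Tiny backward-chaining sketch of the kind of inference Prolog performs:
# a goal is proved either from a known fact or by proving the premises of a
# rule whose conclusion matches it. The facts and rules here are made up.
FACTS = {"has_fever", "has_cough"}

# Each rule maps a conclusion to a single list of premises.
RULES = {
    "has_flu":   ["has_fever", "has_cough"],
    "stay_home": ["has_flu"],
}

def prove(goal, depth=0):
    """Backward chaining: work from the goal toward known facts."""
    indent = "  " * depth
    if goal in FACTS:
        print(f"{indent}{goal}: known fact")
        return True
    if goal in RULES:
        print(f"{indent}{goal}: trying rule with premises {RULES[goal]}")
        return all(prove(premise, depth + 1) for premise in RULES[goal])
    print(f"{indent}{goal}: cannot be proved")
    return False

print(prove("stay_home"))   # -> True, after reducing the goal to the two facts
```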

    A knowledge-based approach for the extraction of machining features from solid models

    Computer understanding of machining features such as holes and pockets is essential for bridging the communication gap between Computer Aided Design and Computer Aided Manufacture. This thesis describes a prototype machining feature extraction system implemented by integrating the VAX-OPS5 rule-based artificial intelligence environment with the PADL-2 solid modeller. Specification of the original stock and the finished part geometry within the solid modeller is followed by determination of the nominal surface boundary of the corresponding cavity volume model by means of Boolean subtraction and boundary evaluation. The boundary model of the cavity volume is managed using winged-edge and frame-based data structures. Machining features are extracted using two methods: (1) automatic feature recognition, and (2) machine learning of features for subsequent recognition. [Continues.]
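
    The winged-edge structure used above to manage the cavity's boundary model can be sketched with plain records; the field names below follow the textbook winged-edge formulation rather than the thesis's actual VAX-OPS5/PADL-2 implementation:

```python
# Skeleton of a winged-edge record as used in boundary (B-rep) modelling.
# Field names follow the standard winged-edge formulation; they are not
# taken from the thesis's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vertex:
    x: float
    y: float
    z: float

@dataclass
class Face:
    name: str

@dataclass
class WingedEdge:
    start: Vertex                        # the two end vertices of the edge
    end: Vertex
    left_face: Optional[Face] = None     # faces on either side of the edge
    right_face: Optional[Face] = None
    # The four "wings": predecessor/successor edges around each adjacent face,
    # which let traversal code walk a face loop or a vertex fan edge by edge.
    left_prev: Optional["WingedEdge"] = None
    left_next: Optional["WingedEdge"] = None
    right_prev: Optional["WingedEdge"] = None
    right_next: Optional["WingedEdge"] = None

# A feature-recognition rule might, for example, walk right_next pointers
# around a face loop to test whether four edges bound a rectangular pocket.
```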

    Modular digital holographic fringe data processing system

    A software architecture suitable for reducing holographic fringe data into useful engineering data is developed and tested. The results, along with a detailed description of the proposed architecture for a Modular Digital Fringe Analysis System, are presented.