Parallel processing and expert systems
Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
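The match phase of a production system's recognize-act cycle is a natural place for the data parallelism the abstract calls for: each rule's condition can be tested against the fact base independently. A minimal Python sketch follows; the fact base, rule names, and thresholds are invented for illustration, not taken from the survey.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fact base: attribute -> value.
FACTS = {"temp": 412, "pressure": 88, "valve_open": True}

# Hypothetical rules: (name, condition over the facts, action fired on match).
RULES = [
    ("overheat", lambda f: f["temp"] > 400, "open radiator"),
    ("leak",     lambda f: f["pressure"] < 90, "close relief valve"),
    ("stuck",    lambda f: not f["valve_open"], "cycle actuator"),
]

def match(rule):
    # Matching one rule is independent of matching the others, so the
    # match phase of each cycle can be farmed out data-parallel.
    name, cond, action = rule
    return (name, action) if cond(FACTS) else None

with ThreadPoolExecutor() as pool:
    fired = [r for r in pool.map(match, RULES) if r is not None]

print(fired)  # [('overheat', 'open radiator'), ('leak', 'close relief valve')]
```

The sketch also illustrates the abstract's point (1): each rule does very little work, so per-rule parallelism pays off only when rule sets are large.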
Massively-parallel marker-passing in semantic networks
One approach to using the information available in a semantic network is the use of marker-passing algorithms, which propagate information through the network to determine relationships between objects. One of the primary arguments in favor of these algorithms is their ability to be implemented in parallel. Despite this, most implementations have been serial and have only sometimes gone so far as to simulate parallelism. In this paper the marker-passing approach is presented. An actual parallel implementation, which shows that such programs can be written on commercially available massively parallel machines, is also presented.
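The marker-passing idea can be sketched in a few lines. This is a serial simulation on an invented toy network, not the paper's implementation; each frontier step corresponds to work a massively parallel machine would do for all active nodes at once.

```python
from collections import deque

# Hypothetical semantic network: node -> linked nodes.
NET = {
    "canary": ["bird", "yellow"],
    "bird":   ["animal", "wings"],
    "animal": ["living-thing"],
    "yellow": ["color"],
    "wings":  [],
    "color":  [],
    "living-thing": [],
}

def pass_markers(origin, net):
    """Propagate a marker from `origin`; return every node it reaches.
    Each frontier expansion is the step a parallel machine performs
    simultaneously for all currently marked nodes."""
    marked = {origin}
    frontier = deque([origin])
    while frontier:
        node = frontier.popleft()
        for nxt in net[node]:
            if nxt not in marked:
                marked.add(nxt)
                frontier.append(nxt)
    return marked

# Markers from two origins intersect where paths meet,
# revealing a relationship between the two concepts.
hits = pass_markers("canary", NET) & pass_markers("bird", NET)
print(sorted(hits))  # ['animal', 'bird', 'living-thing', 'wings']
```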
DATABASE ACCESS REQUIREMENTS OF KNOWLEDGE-BASED SYSTEMS
Knowledge bases constitute the core of those Artificial Intelligence programs which have come to be known as Expert Systems. An examination of the most dominant knowledge representation schemes used in these systems reveals that a knowledge base can, and possibly should, be described at several levels using different schemes, including those traditionally used in operational databases. This chapter provides evidence that solutions to the organization and access problem for very large knowledge bases require the employment of appropriate database management methods, at least for the lowest level of description -- the facts or data. We identify the database access requirements of knowledge-based or expert systems and then present four general architectural strategies for the design of expert systems that interact with databases, together with specific recommendations for their suitability in particular situations. An implementation of the most advanced and ambitious of these strategies is then discussed in some detail.
Information Systems Working Papers Series
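The lowest level of description the chapter identifies -- facts managed as plain data by a DBMS -- can be sketched with SQLite: a rule premise is resolved by a database query rather than by scanning an in-memory fact list. The table, rule, and helper below are hypothetical, not the chapter's actual architecture.

```python
import sqlite3

# Hypothetical fact table standing in for the lowest level of the
# knowledge base: plain data under DBMS control.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (name TEXT, stock INTEGER)")
db.executemany("INSERT INTO parts VALUES (?, ?)",
               [("valve", 3), ("pump", 0), ("seal", 12)])

def premise_holds(part, minimum):
    """Resolve the rule premise 'stock of <part> >= <minimum>' by
    querying the database instead of an in-memory working memory."""
    row = db.execute("SELECT stock FROM parts WHERE name = ?",
                     (part,)).fetchone()
    return row is not None and row[0] >= minimum

# Rule: IF stock(pump) < 1 THEN reorder pump.
if not premise_holds("pump", 1):
    print("reorder pump")  # prints "reorder pump"
```

The design choice this illustrates is the coupling question the chapter's four strategies address: the inference engine stays in the expert system, while fact retrieval is delegated to the DBMS.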
Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems
Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up. As knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer. In this scheme the memory becomes the processor (a "smart memory").
This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).
GraphStep: A System Architecture for Sparse-Graph Algorithms
Many important applications are organized around long-lived, irregular sparse graphs (e.g., data and knowledge bases, CAD optimization, numerical problems, simulations). The graph structures are large, and the applications need regular access to a large, data-dependent portion of the graph for each operation (e.g., the algorithm may need to walk the graph, visiting all nodes, or propagate changes through many nodes in the graph). On conventional microprocessors, the graph structures exceed on-chip cache capacities, making main-memory bandwidth and latency the key performance limiters. To avoid this “memory wall,” we introduce a concurrent system architecture for sparse-graph algorithms that places graph nodes in small distributed memories paired with specialized graph-processing nodes interconnected by a lightweight network. This gives us a scalable way to map these applications so that they can exploit the high-bandwidth and low-latency capabilities of embedded memories (e.g., FPGA Block RAMs). On typical spreading-activation queries on the ConceptNet Knowledge Base, a sample application, this translates into an order-of-magnitude speedup per FPGA compared to a state-of-the-art Pentium processor.
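A spreading-activation query of the kind the abstract benchmarks can be sketched as a serial loop; the graph fragment, weights, and decay parameter below are invented. GraphStep's contribution is distributing each step's node visits across small embedded memories rather than running this loop sequentially.

```python
# Hypothetical fragment of a ConceptNet-style sparse graph:
# node -> list of (neighbor, edge weight).
GRAPH = {
    "coffee":    [("drink", 0.9), ("caffeine", 0.8)],
    "drink":     [("liquid", 0.7)],
    "caffeine":  [("stimulant", 0.9)],
    "liquid":    [],
    "stimulant": [],
}

def spread(seed, graph, steps=2, decay=0.5):
    """One spreading-activation query: push decayed activation along
    edges for a fixed number of steps. Each step touches every active
    node -- the work a GraphStep machine performs concurrently."""
    act = {seed: 1.0}
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for nbr, w in graph[node]:
                nxt[nbr] = max(nxt.get(nbr, 0.0), a * w * decay)
        act = nxt
    return act

result = spread("coffee", GRAPH)
# Activation reaches two-hop neighbors like "stimulant" and "liquid".
```

Note that each step needs a data-dependent slice of the graph, which is exactly the access pattern that defeats on-chip caches on a conventional processor.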
DAG-based software frameworks for PDEs
Preprint. The task-based approach to software and parallelism is well known and has been proposed as a potential candidate, named the silver model, for exascale software. This approach is not yet widely used in the large-scale multi-core parallel computing of complex systems of partial differential equations. After surveying task-based approaches, we investigate how well the Uintah software and an extension named Wasatch fit the task-based paradigm and how well they perform on large-scale parallel computers. The conclusion is that these approaches show great promise for petascale but that considerable algorithmic challenges remain.
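The task-based paradigm the abstract surveys can be illustrated with a toy task DAG. The task names and the wave-by-wave scheduler below are assumptions for illustration, not Uintah's actual runtime, which schedules tasks asynchronously as dependencies resolve.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task graph for one timestep of a PDE solve:
# task -> list of tasks it depends on.
DAG = {
    "halo_exchange": [],
    "assemble":      ["halo_exchange"],
    "solve":         ["assemble"],
    "io":            ["solve"],
    "viz_sample":    ["halo_exchange"],  # independent of the solve chain
}

def run_dag(dag, work):
    """Run tasks once their dependencies are done; independent branches
    (here 'viz_sample' and the solve chain) can overlap in one wave."""
    done, order = set(), []
    pending = dict(dag)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = [t for t, deps in pending.items()
                     if all(d in done for d in deps)]
            # Launch the whole ready wave concurrently.
            futures = [pool.submit(work, t) for t in ready]
            for f in futures:
                f.result()
            for t in ready:
                done.add(t)
                order.append(t)
                del pending[t]
    return order

order = run_dag(DAG, lambda t: t)
```

Expressing the timestep as a DAG lets the runtime, not the programmer, decide which tasks run concurrently on a given machine.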
Integrate Enterprise Systems to our Hyperconnected World: Anything, Anywhere, Anytime through architectural design
The architectures of currently sold Enterprise Systems were developed at a time when the amount of data to be processed was limited. Since then, the need to capture and process real-time data from multiple sources has surged and must be considered in a world where everything is expected to be exchanged and available anywhere, anytime, and in any format. Yet the absence of novel approaches to Enterprise System architectures creates a gap between these increasing requirements and existing information systems. In this paper, we suggest a new architectural design approach intended to close this gap. To determine a future-proof architecture, the authors conducted a Delphi survey in which technology providers and users were asked about business needs and technical requirements. The result of the Delphi survey was used to create a proposal for a different approach towards ES architectures.