A path-oriented knowledge representation system: Defusing the combinatorial explosion
LIMAP is a programming system oriented toward efficient information manipulation over fixed finite domains, with quantification over paths and predicates. A generalization of Warshall's Algorithm to precompute paths in a sparse matrix representation of semantic nets is employed, allowing questions involving paths between components to be posed and answered easily. LIMAP's ability to cache all paths between two components in a matrix cell proved to be a computational obstacle, however, when the semantic net grew to realistic size. The present paper describes a means of mitigating this combinatorial explosion to an extent that makes the use of the LIMAP representation feasible for problems of significant size. The technique we describe radically reduces the size of the search space in which LIMAP must operate; semantic nets of more than 500 nodes have been attacked successfully. Furthermore, it appears that the procedure described is applicable not only to LIMAP but to a number of other combinatorially explosive search-space problems found in AI as well.
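The generalization of Warshall's Algorithm itself is not reproduced in this abstract; as a point of reference, the classic Boolean transitive-closure form it builds on can be sketched as follows (LIMAP's version stores the paths themselves in each matrix cell rather than a single reachability bit):

```python
def warshall(adj):
    """adj: n x n Boolean adjacency matrix; returns the reachability matrix.

    After the k-loop completes, reach[i][j] is True exactly when some
    path leads from node i to node j through intermediate nodes < k+1.
    """
    n = len(adj)
    reach = [row[:] for row in adj]  # copy so the input is not modified
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach
```

The O(n^3) cost of this precomputation is what makes the subsequent path queries cheap; the combinatorial trouble the paper addresses arises when each cell holds all paths rather than a single bit.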
Dataparallel C : a SIMD programming language for multicomputers
Dataparallel C is a SIMD extension to the standard C programming language. It is derived from the original C* language developed by Thinking Machines Corporation. We have nearly completed a third-generation Dataparallel C compiler, which transforms Dataparallel C programs into SPMD-style C code suitable for compilation and execution on NCUBE multicomputers. In this paper we elaborate on the characteristics and strengths of data-parallel programming languages. We summarize the syntax and semantics of Dataparallel C, present six benchmark programs, and document the performance of these programs executing on the NCUBE 3200 multicomputer. Our work demonstrates that SIMD source programs can achieve reasonable speedup when compiled and executed on MIMD computers. Key words: compiler, data parallel, hypercube, MIMD, multicomputer, programming language, SIMD
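The abstract does not show the compiler's actual output; the following sketch (the function names and the block partitioning scheme are our own assumptions, not the Dataparallel C implementation) only illustrates the general SPMD idea: every process runs the same code, but each one computes only the contiguous block of a data-parallel operation that it owns.

```python
def spmd_add(a, b, rank, nprocs):
    """One process's share of the data-parallel statement 'c = a + b'.

    rank selects which contiguous block of indices this process owns;
    together the nprocs processes cover the whole index range.
    """
    n = len(a)
    lo = rank * n // nprocs        # first index owned by this process
    hi = (rank + 1) * n // nprocs  # one past the last owned index
    return [a[i] + b[i] for i in range(lo, hi)]

def run_all(a, b, nprocs=4):
    """Simulate the nprocs MIMD processes sequentially and gather results."""
    out = []
    for rank in range(nprocs):
        out.extend(spmd_add(a, b, rank, nprocs))
    return out
```

On a real multicomputer each rank would run on its own node with its own slice of the data; the sequential simulation above only shows how the single data-parallel statement decomposes.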
Tree Machines: Architectures and Algorithms A Survey Paper
Recent advances in very large scale integrated (VLSI) circuit technology have led to a surge in research aimed at finding new computer organizations that support a great deal of concurrency. Computer organizations based on tree structures appear well-suited to several kinds of parallel computations. In this paper we discuss the performance of tree machines as well as issues related to their implementation in VLSI. Examples of tree machines are presented, with an emphasis on the way the processing elements communicate in the machine. A taxonomy of tree algorithms, based on a taxonomy of parallel algorithms proposed by Kung in 1979, is introduced. Examples of tree algorithms are also given.
Data-parallel programming on MIMD computers
We are convinced that the combination of data-parallel languages and MIMD hardware can make an important contribution to high-speed computing. The data-parallel paradigm is a natural way to solve a large number of problems arising in science and engineering. Data-parallel programs are easier to design, implement, and debug than programs written in a MIMD language. For their part, the multiple control units of MIMD computers can allow loosely synchronous programs to execute more efficiently than they would on a SIMD architecture. In this paper we provide empirical evidence to support these assertions. We describe the implementation of two compilers for the data-parallel programming language C*. One compiler generates code for the NCUBE 3200 hypercube multicomputer; the other generates code for the Sequent Balance 21000 multiprocessor. We have compiled and executed a suite of C* programs on these two systems, and we present the execution times and speedups achieved by these programs.
Efficient path search in intermodal transportation optimization
As the economies of the world become more interrelated and supply chains globalize, the need arises to create efficient transportation networks. This reality, in conjunction with fuel conservation and environmental friendliness, gives rise to research on efficient intermodal transportation systems. In particular, the underutilization of railroads in the United States motivates us to research the development of optimal procedures for the transportation of containers in a rail network. With this thesis we search for a cost-, time-, and capacity-effective algorithm for solving the transportation problem in a graph of intermodal centers (IMCs). We consider a discrete model of the real-time dynamic situation in which all the arcs of the input graph can be affected by changes in their costs, the transportation means have limited and different container capacities at each IMC, and all the nodes (IMCs) can be visited more than once, either by different transport means or at different times. This is a more general and realistic situation than the ones considered in the literature so far. The resulting optimization problem is computationally intractable (NP-hard), which creates the necessity to develop, implement, and test efficient heuristic optimization techniques. We will use the Shortest Path Problem (SPP) as the basis for the development of three heuristics. Because of the nature of the problem and application, shortest-path procedures provide a very flexible and computationally efficient technique for our model. We will compare the three heuristics with the optimal solution for small problems for which we could find optimality. Furthermore, we will demonstrate that one of the heuristics performs very well when the fixed cost of running transportation modes is the dominant aspect of the cost structure.
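The three heuristics themselves are not given in the abstract; as background, a standard shortest-path computation of the kind they build on can be sketched as follows (the capacity and time constraints the thesis adds are omitted, and the graph encoding is our own assumption):

```python
import heapq

def dijkstra(graph, src):
    """Standard Dijkstra SPP sketch.

    graph: {node: [(neighbor, nonnegative_cost), ...]}
    Returns a dict mapping each reachable node to its least cost from src.
    """
    dist = {src: 0}
    pq = [(0, src)]  # min-heap of (cost-so-far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

In the intermodal setting, arc costs would be updated dynamically and nodes may be revisited, so the heuristics repeat and adapt computations of this kind rather than solving the NP-hard problem exactly.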
Logic and lattices for a statistics advisor
The work partially reported here concerned the development of a prototype Expert System for giving advice about Statistics experiments, called ASA, and an inference engine to support ASA, called ABASE. This involved discovering what knowledge was necessary for performing the task at a satisfactory level of competence, working out how to represent this knowledge in a computer, and how to process the representations efficiently.

Two areas of Statistical knowledge are described in detail: the classification of measurements and statistical variables, and the structure of elementary statistical experiments. A knowledge representation system based on lattices is proposed, and it is shown that such representations are learnable by computer programs and lend themselves to particularly efficient implementation.

ABASE was influenced by MBASE, the inference engine of MECHO [Bundy et al 79a]. Both are theorem provers working on typed function-free Horn clauses, with controlled creation of new entities. Their type systems and proof procedures are radically different, though, and ABASE is "conversational" while MBASE is not.
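The abstract does not detail the lattice encoding; purely as an illustration (the property names and classes below are hypothetical, not taken from the thesis), a powerset lattice over measurement properties shows why such representations can admit efficient implementation: meet, join, and subsumption all reduce to cheap set operations.

```python
def meet(a, b):
    return a & b  # greatest lower bound: shared properties

def join(a, b):
    return a | b  # least upper bound: combined properties

def subsumes(general, specific):
    # The general class subsumes the specific one when every property
    # of the general class also holds of the specific class.
    return general <= specific

# Hypothetical property sets for three measurement classes.
NOMINAL = frozenset({"categorical"})
ORDINAL = frozenset({"categorical", "ordered"})
INTERVAL = frozenset({"categorical", "ordered", "equal-intervals"})
```

Ordering classes by set inclusion in this way makes classification queries simple subset tests rather than searches over an explicit taxonomy.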