Solving the shallow water equations on the Cray X-MP/48 and the Connection Machine 2
The 2-D shallow water equations in Cartesian coordinates are solved on the Connection Machine 2 (CM-2) using both the spectral and finite difference methods. A description of these implementations is presented together with a brief discussion of the CM-2 as it relates to these specific computations. The finite difference code was written both in C* and *LISP and the spectral code was written in *LISP. The performance of the codes is compared with a FORTRAN version that was optimized for the Cray X-MP/48.
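The finite-difference approach the abstract mentions can be illustrated with a toy example. This is a hypothetical 1-D linearized sketch only (the paper solves the full 2-D equations on the CM-2); the grid size, time step, and scheme here are illustrative assumptions:

```python
import numpy as np

g, H = 9.81, 1.0           # gravity, mean fluid depth (assumed values)
N, dx, dt = 64, 1.0, 0.01  # grid points, spacing, time step (assumed)

x = np.arange(N) * dx
h = np.exp(-((x - N * dx / 2) ** 2) / 10.0)  # initial surface bump
u = np.zeros(N)                              # fluid initially at rest

def step(h, u):
    """One forward-in-time, centred-in-space update, periodic boundaries."""
    dhdx = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    return h - dt * H * dudx, u - dt * g * dhdx

h0_sum = h.sum()  # centred differences with periodic boundaries conserve mass
for _ in range(100):
    h, u = step(h, u)
```

Each grid point's update depends only on its neighbours, which is exactly the locality that makes such stencils a natural fit for the CM-2's data-parallel NEWS grid.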
Benchmarking and performance analysis of the CM-2
A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as to gain insight into what performance criteria are needed when evaluating parallel processing machines.
Spectral solution of the incompressible Navier-Stokes equations on the Connection Machine 2
The issue of solving the time-dependent incompressible Navier-Stokes equations on the Connection Machine 2 is addressed, for the problem of transition to turbulence of the steady flow in a channel. The spectral algorithm used serially requires O(N^4) operations when solving the equations on an N x N x N grid; using the massive parallelism of the CM, it becomes an O(N^2) problem. Preliminary timings of the code, written in LISP, are included and compared with a corresponding code optimized for the Cray-2 for a 128 x 128 x 101 grid.
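The core operation of a spectral solver is differentiation in Fourier space, where every mode is updated independently; that independence is what lets massive parallelism collapse the serial operation count. A minimal 1-D analogue (illustrative only, not the paper's 3-D channel-flow code):

```python
import numpy as np

# Spectral differentiation: transform, multiply each mode by i*k, transform
# back. Each wavenumber is independent, so all modes can update in parallel.
N = 128
x = 2 * np.pi * np.arange(N) / N
u = np.sin(3 * x)                         # band-limited test function

ik = 1j * np.fft.fftfreq(N, d=1.0 / N)    # integer wavenumbers times i
du = np.fft.ifft(ik * np.fft.fft(u)).real # derivative, exact for sin(3x)
```

For `sin(3x)` the result matches `3*cos(3x)` to machine precision, since the function is exactly representable on the grid.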
Compiling knowledge-based systems from KEE to Ada
The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights on Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in early work, and inspiration for future system development.
Parallel processing and expert systems
Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
Krylov methods preconditioned with incompletely factored matrices on the CM-2
The performance of the components of the key iterative kernel of a preconditioned Krylov-space iterative linear system solver is measured. In some sense, these numbers can be regarded as best case timings for these kernels. Sweeps over meshes, sparse triangular solves, and inner products were timed on a large 3-D model problem over a cube-shaped domain discretized with a seven-point template. The performance of the CM-2 is highly dependent on the use of very specialized programs. These programs mapped a regular problem domain onto the processor topology in a careful manner and used the optimized local NEWS communications network. A rather dramatic deterioration in performance was documented when these ideal conditions no longer applied. A synthetic workload generator was developed to produce and solve a parameterized family of increasingly irregular problems.
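The kernel structure being timed above (sparse matrix-vector products, a preconditioner solve, and inner products) is visible in a minimal preconditioned conjugate-gradient sketch. The paper uses incomplete factorizations; a simple Jacobi (diagonal) preconditioner stands in here so the sketch stays short, and the 1-D Laplacian is an assumed stand-in for the 3-D seven-point stencil:

```python
import numpy as np

def pcg(A, b, tol=1e-10, maxiter=500):
    """Conjugate gradients with a Jacobi preconditioner (stand-in for the
    incomplete factorizations used in the paper)."""
    M_inv = 1.0 / np.diag(A)          # preconditioner solve: one pointwise op
    x = np.zeros_like(b)
    r = b - A @ x                     # residual (sparse mat-vec kernel)
    z = M_inv * r
    p = z.copy()
    rz = r @ z                        # inner-product kernel
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Laplacian: simplest analogue of the 7-point stencil on a cube.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

The mat-vec and inner products parallelize naturally; the sparse triangular solves of a true incomplete factorization are the sequential bottleneck, which is why they were timed separately.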
A software toolbox for robotics
A system of programs was developed to simulate the concurrent command/response interaction between a parallel jaw end effector and the LISP program controlling it. The overall structure of the simulation system was described in a paper submitted to the IEEE Conference on Automation and Robotics. A user's guide for the system was written. A line numbering program on the VAX (Pascal), a program for aiding in file transfer from the VAX to an LSI11 over the RTNET (FORTRAN), and a file scanning program (a crude SCAN) for the LSI11 (FORTRAN) were also developed.
Interlingual Lexical Organisation for Multilingual Lexical Databases in NADIA
We propose a lexical organisation for multilingual lexical databases (MLDB). This organisation is based on acceptions (word-senses). We detail this lexical organisation and show a mock-up built to experiment with it. We also present our current work in defining and prototyping a specialised system for the management of acception-based MLDB. Keywords: multilingual lexical database, acception, linguistic structure. Comment: 5 pages, Macintosh Postscript, published in COLING-94, pp. 278-28
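The acception-based organisation can be sketched as a data structure: interlingual acceptions (word-senses) sit between the monolingual lexicons and link words across languages. All names and entries below are illustrative assumptions, not data from the NADIA mock-up:

```python
# Interlingual layer: language-independent word-senses (acceptions).
acceptions = {
    "acc_river_bank": {"gloss": "sloping land beside a body of water"},
    "acc_finance_bank": {"gloss": "institution handling money"},
}

# Monolingual layers: each word lists the acceptions it can express.
lexicon = {
    "en": {"bank": ["acc_river_bank", "acc_finance_bank"]},
    "fr": {"rive": ["acc_river_bank"], "banque": ["acc_finance_bank"]},
}

def translations(word, src, dst):
    """All dst-language words sharing an acception with word in src."""
    accs = set(lexicon[src].get(word, []))
    return sorted(w for w, a in lexicon[dst].items() if accs & set(a))

# translations("bank", "en", "fr") → ['banque', 'rive']
```

Routing every cross-lingual link through the acception layer keeps the number of links linear in the number of languages, rather than quadratic as with direct bilingual pairings.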