The finite element machine: An experiment in parallel processing
The finite element machine is a prototype computer designed to support parallel solutions to structural analysis problems. The hardware architecture and support software for the machine, initial solution algorithms and test applications, and preliminary results are described.
Solution of partial differential equations on vector and parallel computers
The present status of numerical methods for partial differential equations on vector and parallel computers is reviewed. The relevant aspects of these computers are discussed and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations, as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods, as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.
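Among the iterative methods the survey covers, Jacobi relaxation is the classic example of one that maps well onto vector and parallel machines: every point of a sweep is updated from the *previous* iterate's values, so all updates are independent. A minimal sketch for the 2-D Laplace equation (illustrative only; the survey itself treats many methods and architectures):

```python
# Jacobi iteration for the 2-D Laplace equation on an n x n grid.
# Each sweep computes every interior point from its four neighbours'
# previous values, so the updates carry no sequential dependence --
# the property that suits vector and parallel hardware.

def jacobi_sweep(u):
    n = len(u)
    new = [row[:] for row in u]          # boundary rows/columns copied as-is
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                u[i][j - 1] + u[i][j + 1])
    return new

def solve(u, sweeps):
    for _ in range(sweeps):
        u = jacobi_sweep(u)
    return u

# Hot left boundary (u = 1), cold elsewhere; the interior relaxes
# toward the harmonic solution of this boundary-value problem.
n = 8
grid = [[1.0 if j == 0 else 0.0 for j in range(n)] for i in range(n)]
result = solve(grid, 200)
```

On a parallel machine the grid would be partitioned among processors, with only the sweep's neighbour reads requiring communication.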
A framework for the data-consistent deployment of urban microsimulations
Microsimulation-based models of urban systems have proven to be powerful tools for prediction and scenario analysis, with a particular yet continuously expanding focus on transportation and land use. They bring along a high level of detail, but they also come at the cost of enormous data needs for their estimation. This article develops a framework for the continuous deployment of a microsimulation-based urban model that integrates existing and emerging data sources.
A model for the distributed storage and processing of large arrays
A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.
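The core idea of the model, an array too large for any one memory, split into blocks owned by different processors, with non-local accesses carrying a communication cost, can be sketched as follows. All names and the round-robin block placement are illustrative assumptions, not the paper's Ada design:

```python
# A toy model of a large 1-D array partitioned into fixed-size blocks,
# each block held in one (simulated) processor's local memory.  Element
# accesses are routed to the owning block; accesses from a different
# node are counted as remote, mimicking the cost structure that
# distributed storage imposes on algorithms.

class DistributedArray:
    def __init__(self, length, block_size, nodes):
        self.block_size = block_size
        nblocks = -(-length // block_size)          # ceiling division
        self.blocks = [[0.0] * block_size for _ in range(nblocks)]
        self.owner = [b % nodes for b in range(nblocks)]  # round-robin placement
        self.remote_accesses = 0

    def _locate(self, i):
        return divmod(i, self.block_size)           # (block index, offset)

    def get(self, i, from_node=0):
        b, off = self._locate(i)
        if self.owner[b] != from_node:              # would be a message in reality
            self.remote_accesses += 1
        return self.blocks[b][off]

    def put(self, i, value, from_node=0):
        b, off = self._locate(i)
        if self.owner[b] != from_node:
            self.remote_accesses += 1
        self.blocks[b][off] = value

a = DistributedArray(length=1000, block_size=100, nodes=4)
a.put(250, 3.14, from_node=2)   # block 2 is owned by node 2: local access
a.put(0, 1.0, from_node=1)      # block 0 is owned by node 0: remote access
```

An algorithm written against such a model can be tuned by comparing its remote-access count under different block placements, which is the kind of performance reality the paper argues the semantic model must expose.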
The development of a multi-layer architecture for image processing
The extraction of useful information from an image involves a series of operations, which can be functionally divided into low-level, intermediate-level and high-level processing. Because different amounts of computing power may be demanded by each level, a system which can simultaneously carry out operations at different levels is desirable. A multi-layer system which embodies both functional and spatial parallelism is envisioned. This thesis describes the development of a three-layer architecture which is designed to tackle vision problems embodying operations in each processing level. A survey of various multi-layer and multi-processor systems is carried out and a set of guidelines for the design of a multi-layer image processing system is established. The linear array is proposed as a possible basis for multi-layer systems, and a significant part of the thesis is concerned with a study of this structure. The CLIP7A system, which is a linear array with 256 processing elements, is examined in depth. The CLIP7A system operates under SIMD control, enhanced by local autonomy. In order to examine the possible benefits of this arrangement, image processing algorithms which exploit the autonomous functions are implemented. Additionally, the structural properties of linear arrays are studied. Information regarding typical computing requirements in each layer and the communication networks between elements in different layers is obtained by applying the CLIP7A system to solve an integrated vision problem. From the results obtained, a three-layer architecture is proposed. The system has 256, 16 and 4 processing elements in the low-, intermediate- and high-level layers respectively. The processing elements will employ a 16-bit microprocessor as the computing unit, which is selected from off-the-shelf components. Communication between elements in consecutive layers is via two different networks, which are designed so that efficient data transfer is achieved. Additionally, the networks enable the system to maintain fault tolerance and permit expansion in the second and third layers.
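The functional division the thesis builds on, per-pixel work at the low level, regional feature extraction at the intermediate level, and symbolic decisions at the high level, can be sketched as a three-stage pipeline. The stages, thresholds and decision rule below are illustrative placeholders, not the CLIP7A's actual operations:

```python
# A toy three-level vision pipeline mirroring the functional division
# described above.  Low level: per-pixel (SIMD-friendly) binarisation.
# Intermediate level: per-quadrant foreground counts.  High level: a
# symbolic decision from the aggregated features.

def low_level(image, threshold=128):
    """Per-pixel operation, the kind mapped onto the 256-element layer."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

def intermediate_level(binary):
    """Regional operation: count foreground pixels in each image quadrant."""
    n = len(binary)
    h = n // 2
    quads = [(0, 0), (0, h), (h, 0), (h, h)]
    return [sum(binary[i][j]
                for i in range(r, r + h)
                for j in range(c, c + h))
            for r, c in quads]

def high_level(features):
    """Symbolic decision from the extracted features."""
    return "object present" if max(features) > 0 else "empty scene"

# An 8x8 test image containing one bright 3x3 patch.
image = [[0] * 8 for _ in range(8)]
for i in range(2, 5):
    for j in range(2, 5):
        image[i][j] = 200
decision = high_level(intermediate_level(low_level(image)))
```

The point of the layered hardware is that the three stages demand very different amounts of computing power per datum, which is why the proposed machine shrinks from 256 to 16 to 4 processing elements going up the pipeline.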
PISCES: An environment for parallel scientific computation
The Parallel Implementation of Scientific Computing Environments (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. Pisces 1 users program in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is on providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture, so the design is portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task-level parallelism. An implementation for the Flexible Computer Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented, together with an example algorithm for the iterative solution of a system of equations. The most notable features of the design are the provision for several granularities of parallelism in programs and a window mechanism for distributed access to large arrays of data.
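One common way such a window mechanism is realised is to give each task a view onto its assigned slice of a shared array, plus read-only access to a halo of neighbouring cells. The sketch below is an interpretation for illustration only; the abstract does not specify PISCES's actual window semantics, and all names here are assumptions:

```python
# A toy "window" onto a large shared array: each task may read its own
# slice plus a one-cell halo of its neighbours' data, but may write
# only inside the region it owns.

class Window:
    def __init__(self, array, lo, hi, halo=1):
        self.array, self.lo, self.hi, self.halo = array, lo, hi, halo

    def read(self, i):
        if self.lo - self.halo <= i < self.hi + self.halo:
            return self.array[i]
        raise IndexError("index outside this task's window")

    def write(self, i, v):
        if self.lo <= i < self.hi:          # writes only inside the owned region
            self.array[i] = v
        else:
            raise IndexError("write outside owned region")

# Two tasks performing one relaxation pass over a shared 1-D array,
# each working through its own window.
data = [0.0, 4.0, 4.0, 4.0, 4.0, 0.0]
w0 = Window(data, 1, 3)   # task 0 owns indices 1..2
w1 = Window(data, 3, 5)   # task 1 owns indices 3..4
for w in (w0, w1):
    for i in range(w.lo, w.hi):
        w.write(i, 0.5 * (w.read(i - 1) + w.read(i + 1)))
```

The attraction of such a mechanism is that the bounds checks make each task's data dependencies explicit, which is what lets the virtual machine place the slices on different processors without changing the program.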
Parallel execution of Horn clause programs
Automated decision making and problem solving. Volume 2: Conference presentations
Related topics in artificial intelligence, operations research, and control theory are explored. Existing techniques are assessed and trends of development are determined.
The AND/OR process model for parallel interpretation of logic programs
Current techniques for interpretation of logic programs involve a sequential search of a global tree of procedure invocations. This dissertation introduces the AND/OR Process Model, a method for interpretation by a system of asynchronous, independent processes that communicate only by messages. The method makes it possible to exploit two distinct forms of parallelism. OR parallelism is obtained by evaluating nondeterministic choices in parallel. AND parallelism arises in the execution of deterministic functions, such as matrix multiplication or divide-and-conquer algorithms, that are inherently parallel. The two forms of parallelism can be exploited at the same time: AND parallelism can be applied to clauses that are composed of several nondeterministic components, and the interpreter can recover from incorrect choices in the solution of these components. In addition to defining parallel computations, the model provides a more completely defined procedural semantics for logic programs; that is, parallel interpreters based on this model are able to generate answers to queries that cause standard interpreters to go into an infinite loop. The interpretation method is intended to form the theoretical framework of a highly parallel non-von Neumann computer architecture; the dissertation concludes with a discussion of issues involved in implementing the abstract interpreter on a multiprocessor.
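The essence of OR parallelism, trying the alternative ways of proving a goal in independent workers rather than by sequential backtracking, can be sketched on a tiny logic database. The facts, goal and thread-based evaluation below are illustrative assumptions, not the dissertation's message-passing interpreter:

```python
# OR parallelism on a toy logic program: each candidate binding for the
# goal grandparent(GP, c) is an independent branch, and the branches
# are evaluated concurrently instead of by sequential backtracking.

from concurrent.futures import ThreadPoolExecutor

# parent(X, Y) facts.
parent = {("a", "b"), ("b", "c"), ("d", "b")}

def try_branch(gp, p, child):
    """One OR branch: does this (gp, p) pair prove grandparent(gp, child)?"""
    return gp if (gp, p) in parent and (p, child) in parent else None

def grandparents(child):
    # Enumerate candidate bindings, then evaluate every branch in
    # parallel and merge the successful answers.
    candidates = [(gp, p)
                  for gp, _ in parent
                  for p, c in parent if c == child]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda b: try_branch(b[0], b[1], child), candidates)
    return sorted({r for r in results if r is not None})

answers = grandparents("c")
```

A real interpreter under the model would also exploit AND parallelism, solving the two `parent` subgoals of each branch concurrently, and would communicate bindings by messages rather than shared memory.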