Plasma simulation and fusion calculation
Particle-in-cell (PIC) models are widely used in fusion studies associated with energy research; they are also used in certain fluid-dynamical studies. Parallel computation is relevant to them because (1) PIC models are not highly amenable to vectorization: only about 50% of the total computation can be vectorized in a typical model; (2) the volume of data processed by PIC models typically necessitates use of secondary storage, with an attendant requirement for high-speed I/O; and (3) PIC models exist today whose implementation requires a computer 10 to 100 times faster than the Cray-1. This paper discusses parallel formulations of PIC models for master/slave architectures and ring architectures. Because interprocessor communication can be a decisive factor in the overall efficiency of a parallel system, we show how to divide these models into large granules that can be executed in parallel with relatively little need for communication. We also report measurements of speedup obtained from experiments on the UNIVAC 1100/84 and the Denelcor HEP.
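The roughly 50% vectorization ceiling mentioned above comes largely from the charge-deposition (scatter) step, where two particles can target the same grid cell, so a naive vector store would lose updates. A minimal sketch, assuming a 1-D grid with nearest-grid-point weighting (all names here are illustrative, not from the paper):

```python
import numpy as np

def deposit_charge(positions, charge, n_cells, dx):
    """Scatter each particle's charge onto the cell that contains it."""
    rho = np.zeros(n_cells)
    cells = (positions / dx).astype(int) % n_cells  # cell index per particle
    # np.add.at performs an unbuffered scatter-add, matching the result of
    # the sequential loop even when several particles hit the same cell --
    # exactly the collision that makes this step hard to vectorize.
    np.add.at(rho, cells, charge)
    return rho

# Two particles land in cell 0, one each in cells 1 and 2.
positions = np.array([0.1, 0.15, 0.9, 1.4])
rho = deposit_charge(positions, 1.0, n_cells=4, dx=0.5)
```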
Two parallel formulations of particle-in-cell models
This paper discusses parallel formulations of PIC models for master/slave architectures and ring architectures. Because interprocessor communication can be a decisive factor in the overall efficiency of a parallel system, we show how to divide these models into large granules that can be executed in parallel with relatively little need for communication. We also report measurements of speedup obtained from experiments on the UNIVAC 1100/84 and the Denelcor HEP.
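The master/slave decomposition described above can be sketched as follows, assuming the master splits the particle list into a few large granules that slaves advance independently, with communication confined to the final gather (function names and granule sizes are illustrative, not from the paper):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def push_granule(granule, dt):
    """Slave work: advance one granule's positions (field solve omitted)."""
    x, v = granule
    return x + dt * v

def master(positions, velocities, dt, n_slaves=4):
    """Master: partition into large granules, farm out, then recombine.
    Communication occurs only when results are gathered, which is the
    point of coarse granularity."""
    x_parts = np.array_split(positions, n_slaves)
    v_parts = np.array_split(velocities, n_slaves)
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        results = pool.map(push_granule, zip(x_parts, v_parts),
                           [dt] * n_slaves)
    return np.concatenate(list(results))
```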
Vectorization of algorithms for solving systems of difference equations
Today's fastest computers achieve their highest level of performance when processing vectors. Consequently, considerable effort has been spent in the past decade developing algorithms that can be expressed as operations on vectors. In this paper two types of vector architecture are defined. A discussion is presented on the variation of performance that can occur on a vector processor as a function of algorithm and implementation, the consequences of this variation, and the performance of some basic operators on the two classes of vector architecture. Also discussed is the performance of higher-level operators, including some that should be used with caution. With both types of operators, the implementation of techniques for solving systems of difference equations is discussed. Included are fast Poisson solvers and point, line, and conjugate-gradient techniques. 1 figure.
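The contrast between scalar and vector forms of a basic operator can be illustrated with the second-difference operator, written once as a loop and once as a single vector operation (an illustrative sketch, not code from the paper):

```python
import numpy as np

def second_diff_loop(u):
    """Scalar form: one element per iteration."""
    d = np.zeros(len(u) - 2)
    for i in range(1, len(u) - 1):
        d[i - 1] = u[i - 1] - 2.0 * u[i] + u[i + 1]
    return d

def second_diff_vector(u):
    """Vector form: the whole interior computed as three shifted
    vector operands combined in one expression."""
    return u[:-2] - 2.0 * u[1:-1] + u[2:]
```

On a vector processor the second form is the one that runs at or near peak rate; the loop form processes one element at a time.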
Draft remarks for the IFIPS Congress '83 panel on how to obtain high performance for high-speed processors
Systems of a few tightly coupled high-performance processors have the potential to provide significant increases in computational capability. Realizing this potential will require the development of highly parallel algorithms. These must be combined with suitable programming languages and architectures so that the overall implementation introduces little additional work relative to a uniprocessor implementation. Experimental equipment will be a pacing factor in research on asynchronous systems.
Vectorization of algorithms for solving systems of elliptic difference equations
Today's fastest computers achieve their highest level of performance when processing vectors. Consequently, considerable effort has been spent in the past decade developing algorithms that can be expressed as operations on vectors. In this paper we define two types of vector architecture. We discuss the variation of performance that can occur on a vector processor as a function of algorithm and implementation, the consequences of this variation, and the performance of some basic operators on the two classes of vector architecture. We also discuss the performance of higher-level operators, including some that should be used with caution. Using both basic and higher-level operators, we discuss vector implementation of techniques for solving systems of elliptic difference equations. Included are fast Poisson solvers and point, line, and conjugate-gradient techniques.
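The conjugate-gradient technique named here can be sketched for the standard 5-point difference Laplacian on a square grid with zero boundary values; the matrix is never formed, and the matrix-vector product is itself a handful of vector operations (a hypothetical sketch under those assumptions, not the paper's code):

```python
import numpy as np

def apply_laplacian(u):
    """5-point difference Laplacian on an n x n grid, zero Dirichlet
    boundary, applied entirely with shifted-slice vector operations."""
    au = 4.0 * u
    au[1:, :] -= u[:-1, :]   # north neighbor
    au[:-1, :] -= u[1:, :]   # south neighbor
    au[:, 1:] -= u[:, :-1]   # west neighbor
    au[:, :-1] -= u[:, 1:]   # east neighbor
    return au

def conjugate_gradient(b, tol=1e-10, max_iter=500):
    """Solve A x = b for the SPD difference Laplacian A above."""
    x = np.zeros_like(b)
    r = b - apply_laplacian(x)
    p = r.copy()
    rs = np.sum(r * r)
    for _ in range(max_iter):
        ap = apply_laplacian(p)
        alpha = rs / np.sum(p * ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = np.sum(r * r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Every step (axpy updates, inner products, the stencil product) is a vector operation, which is why the method suits the architectures the paper discusses.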
Numerical algorithms and software for advanced computers
This paper discusses the utilization of large-scale computers at the Los Alamos Scientific Laboratory and why scientists are constantly seeking bigger and faster computers. The trend toward increased parallelism within the architecture of supercomputers is noted, and how this parallelism affects software and algorithms is addressed. On the basis of this trend and the characteristics of existing simulation models, some of the areas where future research will be needed are indicated. 5 figures.
DOE research in utilization of high-performance computers
Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication in numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.