305 research outputs found

    Task granularity studies on a many-processor CRAY X-MP

    Full text link
A hybrid granularity model is proposed for general concurrent solution. It is applied to the triangular factorization of dense matrices ranging in order from 4 to 1024. Concurrency is achieved at two levels: (1) with small (micro) task granularity and (2) with large (blocked) task granularity. Relevance to a many-processor CRAY X-MP is demonstrated by simulation.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/25642/1/0000194.pd
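    The two granularities can be made concrete with a minimal numpy sketch (my illustration under the usual no-pivoting assumption, not the paper's code): the column-by-column panel elimination supplies the small micro tasks, while the triangular solve and trailing-submatrix update form the large blocked tasks that could be scheduled across processors.

```python
import numpy as np

def blocked_lu(a: np.ndarray, nb: int = 64) -> np.ndarray:
    """In-place LU factorization (no pivoting assumed) with block size nb."""
    n = a.shape[0]
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # Micro granularity: unblocked elimination of the current panel,
        # one short vector operation per column.
        for j in range(k, k + kb):
            a[j + 1:, j] /= a[j, j]
            a[j + 1:, j + 1:k + kb] -= np.outer(a[j + 1:, j], a[j, j + 1:k + kb])
        if k + kb < n:
            # Blocked granularity: a triangular solve and a rank-kb update,
            # coarse tasks that can be handed to separate processors.
            l11 = np.tril(a[k:k + kb, k:k + kb], -1) + np.eye(kb)
            a[k:k + kb, k + kb:] = np.linalg.solve(l11, a[k:k + kb, k + kb:])
            a[k + kb:, k + kb:] -= a[k + kb:, k:k + kb] @ a[k:k + kb, k + kb:]
    return a
```

    For large orders the blocked trailing update dominates and parallelizes well; at order 4 only the micro tasks exist, which is why a study spanning orders 4 to 1024 exercises both regimes.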

    Lanczos eigensolution method for high-performance computers

    Get PDF
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. For the panel problem, the sequential computational time of 181.6 seconds on a CONVEX computer was reduced to 14.1 seconds with the optimized vector algorithm. The best computational time for the transport problem, with 17,000 degrees of freedom, was 23 seconds on the Cray Y-MP using an average of 3.63 processors.
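    For orientation, here is a minimal sketch of the basic Lanczos recurrence (my illustration, not the authors' code), with `matvec` standing in for whatever optimized operator application is used. In the structural applications the operator is typically shift-inverted, which is where the factorization and forward/backward solves named above enter.

```python
import numpy as np

def lanczos(matvec, n, m, seed=0):
    """m steps of the Lanczos recurrence; returns tridiagonal coefficients
    (diagonal alpha, off-diagonal beta) whose eigenvalues approximate the
    extremal eigenvalues of the symmetric operator behind matvec."""
    rng = np.random.default_rng(seed)
    q_prev, b = np.zeros(n), 0.0
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    for j in range(m):
        # The operator application below is the hot spot the abstract
        # attacks with sparse/variable-band storage and loop unrolling.
        w = matvec(q) - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    return alpha, beta

# Usage with a symmetric matrix A:
#   alpha, beta = lanczos(lambda x: A @ x, A.shape[0], 30)
#   theta = np.linalg.eigvalsh(np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1))
```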

    Dynamic Systolization for Developing Multiprocessor Supercomputers

    Get PDF
A dynamic network approach is introduced for developing reconfigurable systolic arrays or wavefront processors. This allows one to design very powerful and flexible processors for use in a general-purpose, reconfigurable, and fault-tolerant multiprocessor computer system. The concepts of macro-dataflow and multitasking can be integrated to handle variable-resolution granularities in computationally intensive algorithms. A multiprocessor architecture, Remps, is proposed based on these design methodologies. The Remps architecture is generalized from the Cedar, HEP, Cray X-MP, Trac, NYU Ultracomputer, S-1, Pumps, CHiP, and SAM projects. Our goal is to provide a multiprocessor research model for developing design methodologies, multiprocessing and multitasking supports, dynamic systolic/wavefront array processors, interconnection networks, reconfiguration techniques, and performance analysis tools. These system design and operational techniques should be useful to those who are developing or evaluating multiprocessor supercomputers.
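    The systolic/wavefront dataflow such arrays exploit can be illustrated with a toy cycle-by-cycle simulation (my sketch, not part of the paper): operands are skewed in time so that each cell performs one multiply-accumulate per global clock tick as the data wavefront sweeps across the array.

```python
import numpy as np

def systolic_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Tick-by-tick simulation of an n x n systolic array computing C = A @ B.
    Inputs are skewed so that a[i, k] and b[k, j] meet in cell (i, j) at
    tick t = i + j + k, where the cell multiplies and accumulates them."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for t in range(3 * n - 2):          # ticks of the global clock
        for i in range(n):
            for j in range(n):
                k = t - i - j           # operand pair arriving at (i, j) now
                if 0 <= k < n:
                    c[i, j] += a[i, k] * b[k, j]
    return c
```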

    Remote access for NAS: Supercomputing in a university environment

    Get PDF
The experiment was designed to assist the Numerical Aerodynamic Simulation (NAS) Project Office in the testing and evaluation of long-haul communications for remote users. The objectives of this work were to: (1) use foreign workstations to remotely access the NAS system; (2) provide NAS with a link to a large university-based computing facility which can serve as a model for a regional node of the Long-Haul Communications Subsystem (LHCS); and (3) provide a tail circuit to the University of Colorado at Boulder, thereby simulating the complete communications path from NAS through a regional node to an end user.

    Probabilistic structural mechanics research for parallel processing computers

    Get PDF
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large-scale PSM problems practical.
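    The "inherently parallel" claim is easy to see in a minimal sketch (my illustration, not from the paper): a Monte Carlo reliability estimate just repeats independent analyses over sampled loads and properties, so the samples can be farmed out to a process pool. Here `analyze` is a hypothetical stand-in for an expensive structural solver, and the distributions are invented for illustration.

```python
import numpy as np
from multiprocessing import Pool

def analyze(sample):
    """Hypothetical stand-in for one expensive structural analysis:
    the component fails when sampled demand exceeds sampled capacity."""
    load, strength = sample
    return load > strength

def failure_probability(n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    loads = rng.normal(10.0, 2.0, n)      # illustrative load distribution
    strengths = rng.normal(15.0, 1.5, n)  # illustrative capacity distribution
    with Pool() as pool:                  # samples are independent: farm out
        failures = pool.map(analyze, zip(loads, strengths))
    return sum(failures) / n

if __name__ == "__main__":
    print("P(failure) ~", failure_probability())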

    Development of a Navier-Stokes algorithm for parallel-processing supercomputers

    Get PDF
An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.
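    The multigrid acceleration idea can be sketched on the 1D Poisson model problem (a minimal sketch of a standard V-cycle, not the flow solver itself): smooth the high-frequency error on the fine grid, correct the low-frequency error on a coarser grid, and recurse.

```python
import numpy as np

def v_cycle(u, f, nu=3):
    """One multigrid V-cycle for -u'' = f on [0, 1], u(0) = u(1) = 0,
    discretized on n+1 points (n a power of two), weighted-Jacobi smoothing."""
    n = len(u) - 1
    h2 = (1.0 / n) ** 2

    def smooth(u, sweeps):
        for _ in range(sweeps):  # damped Jacobi, weight 2/3
            u[1:-1] += (h2 * f[1:-1] + u[:-2] + u[2:] - 2 * u[1:-1]) / 3.0
        return u

    if n == 2:                   # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * h2 * f[1]
        return u
    u = smooth(u, nu)            # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h2   # residual
    ec = v_cycle(np.zeros(n // 2 + 1), r[::2].copy(), nu)     # coarse solve
    u += np.interp(np.linspace(0, 1, n + 1),                  # prolong, correct
                   np.linspace(0, 1, n // 2 + 1), ec)
    return smooth(u, nu)         # post-smooth
```

    Repeating the cycle drives the residual down by a roughly grid-independent factor per pass, which is the acceleration-to-steady-state role multigrid plays on each subgrid of the solver.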

    Status of Vectorized Monte Carlo for Particle Transport Analysis

    Full text link
The conventional particle transport Monte Carlo algorithm is ill suited for modern vector supercomputers because the random nature of the particle transport process in the history-based algorithm inhibits construction of vectors. An alternative, event-based algorithm is suitable for vectorization and has been used recently to achieve impressive gains in performance on vector supercomputers. This review describes the event-based algorithm and several variations of it. Implementations of this algorithm for applications in particle transport are described, and their relative merits are discussed. The implementation of Monte Carlo methods on multiple vector parallel processors is considered, as is the potential of massively parallel processors for Monte Carlo particle transport simulations.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/67177/2/10.1177_109434208700100203.pd
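    The contrast with the history-based approach is easiest to see in code. Below is a minimal sketch of the event-based idea (my toy one-speed, forward-streaming slab model, not the reviewed codes): instead of following one particle history at a time, a whole bank of particles undergoes each event (flight, collision) via vector operations, with dead particles compacted out between events.

```python
import numpy as np

def leakage(n=100_000, slab=5.0, sigma_t=1.0, p_absorb=0.3, seed=0):
    """Event-based transport of a particle bank through a 1D slab."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                         # all live particle positions
    leaked = 0
    while x.size:
        # Event: flight -- sample every distance to collision at once.
        x = x + rng.exponential(1.0 / sigma_t, x.size)
        out = x >= slab
        leaked += int(out.sum())
        x = x[~out]                         # compact out leaked particles
        # Event: collision -- vectorized absorption test for the whole bank.
        x = x[rng.random(x.size) >= p_absorb]
    return leaked / n

if __name__ == "__main__":
    print("leakage fraction:", leakage())
```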

    Computational methods and software systems for dynamics and control of large space structures

    Get PDF
Two areas of crucial importance to the computer-based simulation of large space structures are discussed. The first involves the multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.

    Monte Carlo Photon Transport On Shared Memory and Distributed Memory Parallel Processors

    Full text link
Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/67146/2/10.1177_109434208700100306.pd
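    A minimal sketch of the replication strategy, written with mpi4py rather than the paper's hypercube code, shows why it scales almost perfectly: every node runs the same program on its own share of histories with an independent random stream, and the only communication is one reduction at the end. The tally here is an invented stand-in.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 1_000_000
n_local = n_total // size                  # every node runs the full program
rng = np.random.default_rng(seed=rank)     # on its own random stream

# Stand-in tally: total path length of exponentially distributed flights.
local_tally = rng.exponential(1.0, n_local).sum()

# The only communication is a single reduction, so efficiency stays near
# 100% as the cube grows.
total = comm.reduce(local_tally, op=MPI.SUM, root=0)
if rank == 0:
    print("mean flight length:", total / n_total)
```

    Run with, e.g., `mpiexec -n 64 python replicate.py`. Dispatching, by contrast, hands work units out from a master, which adds communication but balances uneven loads.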

    Time-partitioning simulation models for calculation on parallel computers

    Get PDF
A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used, because all processors operate simultaneously, each updating the solution grid at a different time point. The technique is limited neither by the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two-processor Cray X-MP/24 computer.
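    A toy dependency model (my reconstruction under assumptions, not the report's scheme) shows where the near-linear speedup comes from: with an explicit 3-point stencil, a processor working on time level t can start grid point i as soon as level t-1 has produced point i+1, so different processors advance different time levels simultaneously in a wavefront.

```python
def staggered_speedup(n, levels, p):
    """Ideal speedup of the staggered schedule: processor t % p owns time
    level t; point i of level t may start once level t-1 has produced
    point i+1 (3-point explicit stencil) and the owning processor is free."""
    finish = [[0] * n for _ in range(levels + 1)]  # completion tick per point
    for t in range(1, levels + 1):
        proc_free = finish[t - p][n - 1] if t > p else 0
        for i in range(n):
            ready = finish[t - 1][min(i + 1, n - 1)]   # stencil dependency
            prev = finish[t][i - 1] if i else proc_free
            finish[t][i] = max(ready, prev) + 1        # one tick per point
    return n * levels / finish[levels][n - 1]

print("2 processors:", round(staggered_speedup(64, 64, 2), 2))  # ~2.0 ideal
```

    The model's ideal speedup of about 2.0 on two processors brackets the measured 1.77, the gap presumably reflecting synchronization and memory overheads on the real machine.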