7 research outputs found

    Parallel linear equation solvers for finite element computations

    The overall objective of this research is to develop efficient methods for the solution of linear and nonlinear systems of equations on parallel computers and supercomputers, and to apply these methods to problems in structural analysis. Attention has so far been given only to linear equations. The methods considered for the solution of the stiffness equation Kx = f have been Choleski factorization and the conjugate gradient iteration with SSOR and Incomplete Choleski preconditioning. More detail on these methods is given on subsequent slides. These methods have been used to solve for the static displacements for the mast and panel focus problems in conjunction with the CSM testbed system based on NICE/SPAR.
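The preconditioned conjugate gradient iteration mentioned in the abstract can be sketched briefly. This is a minimal NumPy illustration, not the CSM testbed code: it uses a simple Jacobi (diagonal) preconditioner in place of the SSOR or Incomplete Choleski preconditioners named above, and the matrix and function names are hypothetical.

```python
import numpy as np

def preconditioned_cg(K, f, M_inv, tol=1e-10, max_iter=200):
    """Solve the SPD system K x = f by preconditioned conjugate gradients.
    M_inv applies the inverse of the preconditioner to a residual vector."""
    x = np.zeros_like(f)
    r = f - K @ x                 # initial residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)     # step length along p
        x += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p # update search direction
        rz = rz_new
    return x

# Small SPD "stiffness-like" example with a Jacobi preconditioner
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
f = np.array([1.0, 2.0, 3.0])
d = np.diag(K)
x = preconditioned_cg(K, f, lambda r: r / d)
```

An SSOR or incomplete-factorization preconditioner would replace the `lambda r: r / d` with a forward/backward triangular solve; the iteration itself is unchanged.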

    Evaluation Of Large-Scale Optimization Problems On Vector And Parallel Architectures

    We examine the importance of problem formulation for the solution of large-scale optimization problems on high-performance architectures. We use limited memory variable metric methods to illustrate performance issues. We show that the performance of these algorithms is drastically affected by the application implementation. Model applications are drawn from the MINPACK-2 test problem collection, with numerical results from a super-scalar architecture (IBM RS6000/370), a vector architecture (CRAY-2), and a massively parallel architecture (Intel DELTA).

    Key words: optimization, large-scale, limited memory, variable metric, performance evaluation, vector architecture, parallel architecture. AMS subject classifications: 65Y05, 65Y20, 65K05, 65K10, 90C06, 90C30.

    1. Introduction. Our aim is to explore performance issues associated with the solution of large-scale optimization problems on high-performance architectures. The solution of these problems, where the number of variables ranges betwe..
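The limited memory variable metric methods named in the abstract are typified by the L-BFGS two-loop recursion, which applies an implicit inverse-Hessian approximation built from recent step and gradient-difference pairs. The sketch below is a generic illustration of that recursion on a small quadratic, not the MINPACK-2 implementation; all names are hypothetical.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the limited-memory inverse Hessian
    approximation (built from pairs s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k)
    to the gradient and return a descent direction."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):  # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((rho, a))
    if s_list:                                        # initial scaling H0
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), (rho, a) in zip(zip(s_list, y_list), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# Minimize the quadratic 0.5 x^T A x - b^T x with memory of 5 pairs
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x = np.zeros(3)
g = A @ x - b
s_list, y_list = [], []
for _ in range(50):
    if np.linalg.norm(g) < 1e-10:
        break
    d = lbfgs_direction(g, s_list, y_list)
    t = -(g @ d) / (d @ (A @ d))          # exact line search (quadratic)
    x_new = x + t * d
    g_new = A @ x_new - b
    s_list.append(x_new - x)
    y_list.append(g_new - g)
    s_list, y_list = s_list[-5:], y_list[-5:]  # keep only recent pairs
    x, g = x_new, g_new
```

The method stores only a handful of vector pairs rather than an n-by-n matrix, which is what makes it attractive for the large-scale problems discussed in the abstract; the memory length and line search are the main implementation choices that affect performance.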

    The Minpack-2 Test Problem Collection

    The Army High Performance Computing Research Center at the University of Minnesota and the Mathematics and Computer Science Division at Argonne National Laboratory are collaborating on the development of the software package MINPACK-2. As part of the MINPACK-2 project we are developing a collection of significant optimization problems to serve as test problems for the package. This report describes the problems in the preliminary version of this collection.

    1 Introduction. The Army High Performance Computing Research Center at the University of Minnesota and the Mathematics and Computer Science Division at Argonne National Laboratory have initiated a collaboration for the development of the software package MINPACK-2. As part of the MINPACK-2 project, we are developing a collection of significant optimization problems to serve as test problems for the package. This report describes some of the problems in the preliminary version of this collection. Optimization software has often bee..

    Computing Large Sparse Jacobian Matrices Using Automatic Differentiation

    The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. We consider three approaches to computing such matrices: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (Automatic Differentiation in Fortran) tool. We compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. Our conclusion is that automatic differentiation is the method of choice, leading to results that are as accurate as hand-coded derivatives, while at the same time outperforming difference approximations in both accuracy and speed.

    Computing Large Sparse Jacobian Matrices Using Automatic Differentiation. Brett M. Averick, Jorge J. Moré, Christian H. Bischof, Alan Carle, and Andreas Griewank.

    1 Introduction. The solution of large-scale nonlinear problems often requires the computation of the Jacobian matrix f'(x) of a mapping f : IR ..
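Two of the three approaches compared in the abstract can be illustrated side by side: a forward-difference approximation to the Jacobian checked against a hand-coded one. This is a small NumPy sketch on a hypothetical tridiagonal mapping, not the MINPACK-2 or ADIFOR code; the forward difference's truncation error of order h is exactly the accuracy gap that automatic differentiation avoids.

```python
import numpy as np

def fd_jacobian(f, x, h=1e-7):
    """Forward-difference approximation to the Jacobian f'(x):
    one extra function evaluation per variable (per column)."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (f(x + e) - fx) / h
    return J

# Hypothetical sparse (tridiagonal) mapping: f_i = x_i^2 - x_{i-1}
def f(x):
    y = x**2
    y[1:] -= x[:-1]
    return y

def hand_coded_jacobian(x):
    """Exact Jacobian: 2 x_i on the diagonal, -1 on the subdiagonal."""
    n = x.size
    J = np.diag(2.0 * x)
    J[np.arange(1, n), np.arange(n - 1)] = -1.0
    return J

x = np.linspace(0.5, 1.5, 5)
err = np.max(np.abs(fd_jacobian(f, x) - hand_coded_jacobian(x)))
```

For genuinely large sparse Jacobians, difference approximations are usually combined with column grouping (graph coloring) so that several structurally orthogonal columns are estimated from a single extra function evaluation; automatic differentiation gives the same sparsity savings without the truncation error in `err`.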