Grid-enabling problem solving environments: a case study of SCIRun and NetSolve
Journal Article. Combining the functionality of NetSolve, a grid-based middleware solution, with SCIRun, a graphically based problem solving environment (PSE), yields a platform for creating and executing grid-enabled applications. Using this integrated system, hardware and/or software resources not previously accessible to a user become available completely behind the scenes. Neither the SCIRun system nor the SCIRun user needs to know any details about how these resources are located and utilized. A SCIRun module merely makes an RPC-style call to NetSolve via the NetSolve C language API to invoke a certain routine and to pass its data. Distributed computation and the details of remote communication are completely abstracted away from the SCIRun framework and its end user.
Programming Languages for Scientific Computing
Scientific computation is a discipline that combines numerical analysis,
physical understanding, algorithm development, and structured programming.
Several yottacycles per year on the world's largest computers are spent
simulating problems as diverse as weather prediction, the properties of
material composites, the behavior of biomolecules in solution, and the quantum
nature of chemical compounds. This article is intended to review specific
language features and their use in computational science. We will review the
strengths and weaknesses of different programming styles, with examples taken
from widely used scientific codes.
The Design of Data-Structure-Neutral Libraries for the Iterative Solution of Sparse Linear Systems
Over the past few years several proposals have been made for the standardization of sparse matrix storage formats in order to allow for the development of portable matrix libraries for the iterative solution of linear systems. We believe that this is the wrong approach. Rather than define one standard (or a small number of standards) for matrix storage, the community should define an interface (i.e., the calling sequences) for the functions that act on the data. In addition, we cannot ignore the interface to the vector operations because, in many applications, vectors may not be stored as consecutive elements in memory. With the acceptance of shared-memory, distributed-memory, and cluster-memory parallel machines, the flexibility of the distribution of the elements of vectors is also extremely important. This issue is ignored in most proposed standards. In this article we demonstrate how such libraries may be written using data encapsulation techniques.