ADF95: Tool for automatic differentiation of a FORTRAN code designed for large numbers of independent variables
ADF95 is a tool to automatically calculate numerical first derivatives for
any mathematical expression as a function of user defined independent
variables. Accuracy of derivatives is achieved within machine precision. ADF95
may be applied to any FORTRAN 77/90/95 conforming code and requires minimal
changes by the user. It provides a new derived data type that holds the value
and derivatives and applies forward differencing by overloading all FORTRAN
operators and intrinsic functions. An efficient indexing technique leads to a
reduced memory usage and a substantially increased performance gain over other
available tools with operator overloading. This gain is especially pronounced
for sparse systems with a large number of independent variables. A wide class of
numerical simulations, e.g., those employing implicit solvers, can profit from
ADF95.
Comment: 24 pages, 2 figures, 4 tables, accepted in Computer Physics Communications
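The core idea of the abstract, forward-mode differentiation by operator overloading with a sparse index-to-derivative map, can be sketched as follows. This is an illustrative Python analogue, not ADF95 itself; the class name, methods, and sparse layout are assumptions made for the sketch.

```python
# Sketch of forward-mode AD via operator overloading, in the spirit of
# ADF95's derived data type: each value carries a sparse map from
# independent-variable index to partial derivative, so storage scales
# with the number of nonzero partials rather than all variables.
import math

class Dual:
    def __init__(self, value, derivs=None):
        self.value = value
        self.derivs = derivs or {}   # sparse: index -> d(value)/d(x_index)

    @classmethod
    def variable(cls, value, index):
        # An independent variable: derivative 1 with respect to itself.
        return cls(value, {index: 1.0})

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        d = dict(self.derivs)
        for i, g in other.derivs.items():
            d[i] = d.get(i, 0.0) + g
        return Dual(self.value + other.value, d)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule, merging the two sparse derivative maps.
        d = {i: g * other.value for i, g in self.derivs.items()}
        for i, g in other.derivs.items():
            d[i] = d.get(i, 0.0) + self.value * g
        return Dual(self.value * other.value, d)

    __rmul__ = __mul__

def sin(x):
    # Overloaded intrinsic: chain rule through math.sin.
    return Dual(math.sin(x.value),
                {i: math.cos(x.value) * g for i, g in x.derivs.items()})

# f(x, y) = x*y + sin(x); df/dx = y + cos(x), df/dy = x
x = Dual.variable(2.0, 0)
y = Dual.variable(3.0, 1)
f = x * y + sin(x)
```

Because unused indices never appear in the map, a function touching few of many independent variables pays only for the partials it actually produces, which is the performance point the abstract makes for sparse systems.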
AD in Fortran, Part 1: Design
We propose extensions to Fortran which integrate forward and reverse
Automatic Differentiation (AD) directly into the programming model.
Irrespective of implementation technology, embedding AD constructs directly
into the language extends the reach and convenience of AD while allowing
abstraction of concepts of interest to scientific-computing practice, such as
root finding, optimization, and finding equilibria of continuous games.
Multiple different subprograms for these tasks can share common interfaces,
regardless of whether and how they use AD internally. A programmer can maximize
a function F by calling a library maximizer, XSTAR=ARGMAX(F,X0), which
internally constructs derivatives of F by AD, without having to learn how to
use any particular AD tool. We illustrate the utility of these extensions by
example: programs become much more concise and closer to traditional
mathematical notation. A companion paper describes how these extensions can be
implemented by a program that generates input to existing Fortran-based AD
tools.
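The ARGMAX idiom described above, a library maximizer whose caller never sees whether or how derivatives are obtained, can be sketched in Python. The function name and its gradient-ascent internals are assumptions for illustration; here the derivative comes from central finite differences, but an AD-based implementation could sit behind the same interface.

```python
# Hedged sketch of XSTAR = ARGMAX(F, X0): the interface hides the
# derivative machinery, so implementations are interchangeable.
def argmax(f, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Maximize f by gradient ascent from x0; df/dx via central differences."""
    h = 1e-6
    x = x0
    for _ in range(max_iter):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        if abs(grad) < tol:
            break
        x += step * grad
    return x

# Maximize f(x) = -(x - 3)^2; the caller supplies only f and a start point.
xstar = argmax(lambda x: -(x - 3.0) ** 2, x0=0.0)
```

Swapping the finite-difference line for an AD-computed derivative changes nothing for the caller, which is exactly the abstraction the abstract argues for.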
The Taming of QCD by Fortran 90
We implement lattice QCD using the Fortran 90 language. We have designed
machine independent modules that define fields (gauge, fermions, scalars,
etc.) and have defined overloaded operators for all possible operations
between fields, matrices and numbers. With these modules it is very simple to
write QCD programs. We have also created a useful compression standard for
storing the lattice configurations, a parallel implementation of the random
generators, an assignment that does not require temporaries, and a machine
independent precision definition. We have tested our program on parallel and
single processor supercomputers, obtaining excellent performance.
Comment: Talk presented at LATTICE96 (algorithms). 3 pages, no figures, LaTeX file with ESPCRC2 style. More information available at:
http://hep.bu.edu/~leviar/qcdf90.htm
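The style of programming the abstract describes, fields with overloaded operators so that lattice code reads like the underlying mathematics, can be illustrated with a minimal Python analogue. The class and site layout are invented for illustration; the real qcdf90 modules define gauge, fermion, and scalar field types in Fortran 90.

```python
# Minimal analogue of overloaded field arithmetic: a field holds one
# value per lattice site, and +/* are defined so expressions mirror
# the mathematical notation.
class Field:
    def __init__(self, values):
        self.values = list(values)          # one entry per lattice site

    def __add__(self, other):
        # Site-wise addition of two fields.
        return Field(a + b for a, b in zip(self.values, other.values))

    def __mul__(self, scalar):              # field * number
        return Field(scalar * a for a in self.values)

    __rmul__ = __mul__                      # number * field

phi = Field([1.0, 2.0, 3.0])
chi = Field([0.5, 0.5, 0.5])
psi = 2.0 * phi + chi                       # reads like the maths
```

With field-field and field-matrix products added in the same way, a QCD program becomes a short sequence of such expressions, which is the simplicity the abstract claims.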
Using Java for distributed computing in the Gaia satellite data processing
In recent years Java has matured to a stable easy-to-use language with the
flexibility of an interpreter (for reflection etc.) but the performance and
type checking of a compiled language. When we started using Java for
astronomical applications around 1999, they were the first of their kind in
astronomy. Now a great deal of astronomy software is written in Java as are
many business applications.
We discuss the current environment and trends concerning the language and
present an actual example of scientific use of Java for high-performance
distributed computing: ESA's mission Gaia. The Gaia scanning satellite will
perform a galactic census of about 1000 million objects in our galaxy. The Gaia
community has chosen to write its processing software in Java. We explore the
manifold reasons for choosing Java for this large science collaboration.
Gaia processing is numerically complex but highly distributable, some parts
being embarrassingly parallel. We describe the Gaia processing architecture and
its realisation in Java. We delve into the astrometric solution which is the
most advanced and most complex part of the processing. The Gaia simulator is
also written in Java and is the most mature code in the system. This has been
successfully running since about 2005 on the supercomputer "Marenostrum" in
Barcelona. We relate experiences of using Java on a large shared machine.
Finally we discuss Java, including some of its problems, for scientific
computing.
Comment: Experimental Astronomy, August 201
The impact of fourth generation computers on NASTRAN
The impact of 'fourth generation' computers (STAR 100 or ILLIAC 4) on NASTRAN is considered. The desired characteristics of large programs designed for execution on 4G machines are described.
The space science data service: A study of its efficiencies and costs
Factors affecting the overall advantages and disadvantages of a centralized facility for both the data base and processing capability of NASA's Office of Space Science programs are examined, in an effort to determine the best approach to data management in light of the increasing number of data bits collected annually. Selected issues considered relate to software and storage savings, security precautions, and the phase-in plan. Information on the current mode of processing, and on the potential impact of changes resulting from a conversion to a space science data base service, was obtained from five user groups and is presented as an aid in determining the dollar benefits and advantages of a centralized system.
Parallel Implementation of the PHOENIX Generalized Stellar Atmosphere Program
We describe the parallel implementation of our generalized stellar atmosphere
and NLTE radiative transfer computer program PHOENIX. We discuss the parallel
algorithms we have developed for radiative transfer, spectral line opacity, and
NLTE opacity and rate calculations. Our implementation uses a MIMD design based
on a relatively small number of MPI library calls. We report the results of
test calculations on a number of different parallel computers and discuss the
results of scalability tests.
Comment: To appear in ApJ, 1997, vol 483. LaTeX, 34 pages, 3 figures, uses AASTeX macros and styles natbib.sty, and psfig.st
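The static work decomposition underlying an MPI-based MIMD design like the one described can be sketched as follows. This is a serial stand-in written for illustration: the block-partition helper and the placeholder opacity function are assumptions, and the real code would replace the outer loop with concurrently running MPI ranks plus a handful of scatter/gather calls.

```python
# Sketch of static domain decomposition: each "rank" owns a contiguous
# slice of wavelength points, and the partial results are gathered at
# the end.  In an MPI code, each rank would run its slice concurrently.
def block_range(n_points, n_ranks, rank):
    """Contiguous block of indices owned by `rank` (remainder to low ranks)."""
    base, extra = divmod(n_points, n_ranks)
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return range(start, start + size)

def opacity(i):
    # Placeholder for an expensive per-wavelength computation.
    return 1.0 / (1 + i)

n_points, n_ranks = 10, 3
spectrum = [None] * n_points
for rank in range(n_ranks):         # with MPI, each rank runs in parallel
    for i in block_range(n_points, n_ranks, rank):
        spectrum[i] = opacity(i)
```

Keeping the decomposition logic this small is what lets the parallel layer reduce to a relatively small number of MPI library calls, as the abstract reports.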
Vienna FORTRAN: A FORTRAN language extension for distributed memory multiprocessors
Exploiting the performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna FORTRAN is a language extension of FORTRAN which provides the user with a wide range of facilities for such mapping of data structures. However, programs in Vienna FORTRAN are written using global data references. Thus, the user has the advantage of a shared memory programming paradigm while explicitly controlling the placement of data. The basic features of Vienna FORTRAN are presented, along with a set of examples illustrating the use of these features.
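The translation such a language extension performs behind the scenes, from a global array reference to an owning processor and a local index under a block distribution, can be sketched in a few lines. The helper name and the distribution rule shown are assumptions for illustration, not Vienna FORTRAN's actual implementation.

```python
# Sketch of a block-distribution rule: the programmer writes global
# indices; the compiler/runtime maps each one to (processor, local index).
def owner_and_local(global_idx, n_elems, n_procs):
    """Block-distribute n_elems over n_procs; return (processor, local index)."""
    block = -(-n_elems // n_procs)          # ceil division: block size per proc
    return global_idx // block, global_idx % block

# A global array of 100 elements distributed over 4 processors (blocks of 25):
proc, local = owner_and_local(57, 100, 4)   # element 57 -> proc 2, local 7
```

Hiding this mapping behind global references is what gives the shared-memory look of the programming model while the data placement remains under explicit user control.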