Short description of mathematical support programs for space experiments in the Interkosmos program
A synopsis of the programs of mathematical support designed at the Institute for Cosmic Research of the USSR Academy of Sciences for space experiments being conducted in the Interkosmos Program is presented. A short description of the appropriate algorithms is given.
Loo.py: From Fortran to performance via transformation and substitution rules
A large amount of numerically oriented code has been, and continues to be,
written in legacy languages. Much of this code could, in principle, make good use of
data-parallel, throughput-oriented computer architectures. Loo.py, a
transformation-based programming system targeted at GPUs and general
data-parallel architectures, provides a mechanism for user-controlled
transformation of array programs. This transformation capability is designed to
apply not just to programs written specifically for Loo.py, but also to those
imported from other languages such as Fortran. It eases the trade-off between
achieving high performance, portability, and programmability by allowing the
user to apply a large and growing family of transformations to an input
program. These transformations are expressed in and used from Python and may be
applied in a variety of settings, including in a pragma-like manner from other
languages.

Comment: ARRAY 2015 - 2nd ACM SIGPLAN International Workshop on Libraries,
Languages and Compilers for Array Programming (ARRAY 2015)
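The core idea — a user applies named transformations to an array program without changing its meaning — can be sketched in plain Python. The representation and function names below are invented for this illustration; the real Loo.py API (kernels, inames, `split_iname`, etc.) differs.

```python
# Toy illustration of user-controlled loop transformation: a "split"
# transformation turns one loop into an outer/inner pair, as a GPU-oriented
# system might map iterations to blocks and threads. Names are invented
# for this sketch and are not the Loo.py API.

def run_loop(n, body):
    """Execute body(i) for i in [0, n) -- the untransformed program."""
    for i in range(n):
        body(i)

def split_loop(n, body, factor):
    """Transformation: split the i loop by `factor`, guarding the remainder
    so the transformed program visits exactly the same iterations."""
    def transformed():
        for i_outer in range((n + factor - 1) // factor):
            for i_inner in range(factor):
                i = i_outer * factor + i_inner
                if i < n:          # guard partial final tile
                    body(i)
    return transformed

# The transformed program produces the same result as the original.
out_a = []
run_loop(10, out_a.append)

out_b = []
split_loop(10, out_b.append, factor=4)()

assert out_a == out_b == list(range(10))
```

The point of the design is that correctness-preserving rewrites like this one are chosen by the user, separately from the program itself, so the same input program can be retargeted without being rewritten.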
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.

Comment: 43 pages, 5 figures
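One of the implementation techniques such surveys cover, forward-mode AD via dual numbers, fits in a few lines. This is an illustrative toy, not a production AD system; the class and function names are chosen for this sketch.

```python
# Minimal forward-mode automatic differentiation using dual numbers:
# a value a + a'*eps (with eps**2 == 0) carries the derivative a'
# alongside the value a through ordinary arithmetic.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (a + a'eps)(b + b'eps) = ab + (a'b + ab')eps
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule applied at the primitive level
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    return f(Dual(x, 1.0)).dot

# d/dx [x*x + sin(x)] = 2x + cos(x), exact to machine precision
x0 = 0.5
assert abs(derivative(lambda x: x * x + sin(x), x0)
           - (2 * x0 + math.cos(x0))) < 1e-12
```

Unlike symbolic differentiation, no expression for the derivative is ever built; unlike numerical differencing, there is no truncation error — which is exactly the distinction the survey draws between "autodiff" and its look-alikes.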
Translation into any natural language of the error messages generated by any computer program
Since the introduction of the Fortran programming language some 60 years ago,
there has been little progress in making error messages more user-friendly. A
first step in this direction is to translate them into the students' natural
language. In this paper we propose a simple script for Linux systems which
gives word-by-word translations of error messages. It works for most
programming languages and for all natural languages. Understanding the error
messages generated by compilers is a major hurdle for students who are learning
programming, particularly for non-native English speakers. Not only may they
never become "fluent" in programming, but many give up programming altogether.
Although programming is a tool that can be useful in many human activities,
e.g. history, genealogy, astronomy, entomology, in many countries the skill of
programming remains confined to a narrow fringe of professional programmers. In
all societies, besides professional violinists there are also amateurs. It
should be the same for programming. It is our hope that, once translated and
explained, the error messages will be seen by the students as an aid rather than
as an obstacle, and that in this way more students will enjoy learning and
practising programming. They should come to see it as an enjoyable game.

Comment: 14 pages, 1 figure
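A word-by-word approach of the kind the abstract describes can be sketched with a lookup table. The glossary and function below are invented for illustration; the paper's actual Linux script and its word lists are not reproduced here.

```python
# Sketch of word-by-word error-message translation: replace each word
# found in a glossary, and pass identifiers and unknown words through
# unchanged. The tiny English -> French glossary is invented.

GLOSSARY_FR = {
    "error": "erreur",
    "undefined": "indéfini",
    "variable": "variable",
    "missing": "manquant",
    "semicolon": "point-virgule",
}

def translate_word_by_word(message, glossary):
    out = []
    for word in message.split():
        key = word.strip(":;,.'\"").lower()   # ignore trailing punctuation
        out.append(glossary.get(key, word))
    return " ".join(out)

msg = "error: undefined variable x"
print(translate_word_by_word(msg, GLOSSARY_FR))
# prints: erreur indéfini variable x  (the identifier `x` passes through)
```

Because the scheme never parses the message, it works across compilers and languages, at the cost of sometimes producing a rough gloss rather than a fluent sentence — which is the trade-off the abstract accepts.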
Distributed memory compiler design for sparse problems
A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and outputs a message-passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
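The runtime primitives for irregular distributed-array accesses typically follow an inspector/executor pattern: examine the index pattern once to plan communication, then perform the gathers against local data plus pre-fetched remote copies. The sketch below illustrates that pattern in plain Python; the function names and partition layout are invented, not the paper's library.

```python
# Inspector/executor sketch for irregular distributed-array gathers.
# One process owns global elements [my_lo, my_hi); everything else is
# "off-processor" and must be fetched before the loop runs.

def inspector(indices, my_lo, my_hi):
    """Plan communication: the unique off-processor elements needed."""
    return sorted({i for i in indices if not (my_lo <= i < my_hi)})

def executor(indices, my_lo, my_hi, local, fetched):
    """Run the gather using local data plus the pre-fetched copies."""
    out = []
    for i in indices:
        if my_lo <= i < my_hi:
            out.append(local[i - my_lo])   # owned locally
        else:
            out.append(fetched[i])         # remote copy from the plan
    return out

# Simulated setup: this process owns global elements [4, 8).
global_array = list(range(100, 110))
my_lo, my_hi = 4, 8
local = global_array[my_lo:my_hi]

indices = [5, 1, 7, 9, 1]                  # irregular access pattern
needed = inspector(indices, my_lo, my_hi)
assert needed == [1, 9]                    # only unique remote elements
fetched = {i: global_array[i] for i in needed}   # stands in for messages
assert executor(indices, my_lo, my_hi, local, fetched) == [105, 101, 107, 109, 101]
```

Separating the plan from its execution is what makes the approach efficient: the inspector's cost is paid once, and the (usually repeated) executor does no index analysis of its own.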
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.
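The copy-reuse idea — if remote values fetched by an earlier loop are verified unmodified, a later loop with the same access pattern reuses the stored copies instead of re-fetching — can be sketched with a small cache. The bookkeeping below is invented for illustration and is not the paper's mechanism.

```python
# Sketch of off-processor data-copy reuse: cached remote copies are
# returned as long as they are known to be valid, so repeated loops over
# the same indices incur communication only once per element.

class CopyCache:
    def __init__(self):
        self.copies = {}      # global index -> cached remote value
        self.fetches = 0      # counts simulated communication

    def get(self, index, fetch_remote, valid):
        """Return a cached copy when still valid; otherwise (re)fetch."""
        if valid and index in self.copies:
            return self.copies[index]
        self.fetches += 1
        self.copies[index] = fetch_remote(index)
        return self.copies[index]

remote = {1: 101, 9: 109}     # values owned by some other processor
cache = CopyCache()

# First loop: both accesses must be fetched.
first = [cache.get(i, remote.__getitem__, valid=True) for i in (1, 9)]
# Second loop over the same indices: verified unmodified, so no new fetches.
second = [cache.get(i, remote.__getitem__, valid=True) for i in (1, 9)]

assert first == second == [101, 109]
assert cache.fetches == 2     # communication happened once per element
```

The hard part in a real compiler is the `valid` flag: proving that no intervening code modified the remote locations, which is exactly the verification step the abstract emphasizes.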
Automatic Computation of Feynman Diagrams
Quantum corrections significantly influence the quantities observed in modern
particle physics. The corresponding theoretical computations are usually quite
lengthy, which makes their automation mandatory. This review reports on the
current status of the automatic calculation of Feynman diagrams in particle
physics. The most important theoretical techniques are introduced and their
usefulness is demonstrated with the help of simple examples. A survey of
frequently used programs and packages is provided, discussing their abilities
and fields of application. Subsequently, some powerful packages which have
already been applied to important physical problems are described in more
detail. The review closes with a discussion of a few typical applications of
the automated computation of Feynman diagrams, addressing current physical
questions like properties of the Z and Higgs boson, four-loop corrections to
renormalization group functions and two-loop electroweak corrections.

Comment: LaTeX, 62 pages. Typos corrected, references updated and some
comments added. Vertical offset changed. The complete paper is also available
via anonymous ftp at ftp://ttpux2.physik.uni-karlsruhe.de/ttp98/ttp98-41/ or
via www at http://www-ttp.physik.uni-karlsruhe.de/Preprints