Exploring the use of mixed precision in NEMO
It has long been common practice in scientific computing to use 64-bit arithmetic to represent data without considering which level of precision is actually needed. In many applications 32-bit precision provides enough accuracy, while in others 64-bit is not enough. In climate science, the inherent difficulty of collecting data implies a considerable level of uncertainty, which suggests that the blanket use of 64-bit representations may be a waste of resources, while on the other hand some specific algorithms could benefit from increased precision. These factors suggest that in the future more attention must be paid to the precision used in scientific software, both to use resources wisely and to avoid losing accuracy. In this work we question whether the precision used in the ocean model NEMO is necessary and sufficient, and we assess the potential benefits of adjusting this precision.
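The single- versus double-precision trade-off can be seen in a toy experiment. The sketch below (plain Python, using the standard `struct` module to emulate 32-bit rounding; it is an illustration, not NEMO code) accumulates the harmonic series in both precisions and compares the results:

```python
import struct

def to_f32(x):
    # round a Python float (64-bit) to the nearest 32-bit float
    return struct.unpack('f', struct.pack('f', x))[0]

def harmonic_sum(n, single):
    # naive left-to-right accumulation of 1 + 1/2 + ... + 1/n
    s = 0.0
    for i in range(1, n + 1):
        term = 1.0 / i
        if single:
            # emulate a pure float32 pipeline: round term and sum each step
            term = to_f32(term)
            s = to_f32(s + term)
        else:
            s += term
    return s

s64 = harmonic_sum(100_000, single=False)
s32 = harmonic_sum(100_000, single=True)
# the single-precision result drifts away from the double-precision one
```

Whether a drift of this size matters is exactly the application-dependent question the abstract raises: against the uncertainty of the input data it may be negligible, while in a sensitive algorithm it may not.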
Efficient implementation of symplectic implicit Runge-Kutta schemes with simplified Newton iterations
We are concerned with the efficient implementation of symplectic implicit
Runge-Kutta (IRK) methods applied to systems of (non-necessarily Hamiltonian)
ordinary differential equations by means of Newton-like iterations. We pay
particular attention to symmetric symplectic IRK schemes (such as collocation
methods with Gaussian nodes). For an s-stage IRK scheme used to integrate a
d-dimensional system of ordinary differential equations, the application of
simplified versions of Newton iterations requires solving at each step several
linear systems (one per iteration) with the same sd × sd real
coefficient matrix. We propose rewriting each such sd-dimensional linear system
as an equivalent (s+1)d-dimensional system that can be solved by performing
the LU decompositions of real matrices of size d × d. We
present a C implementation (based on Newton-like iterations) of Runge-Kutta
collocation methods with Gaussian nodes that makes use of this rewriting of
the linear systems and takes special care to reduce the effect of
round-off errors. We report numerical experiments that demonstrate the
reduced round-off error propagation of our implementation.
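To illustrate the simplified Newton idea (a minimal sketch, not the paper's C implementation), the code below applies the s = 1 Gauss collocation method, i.e. the implicit midpoint rule, to a pendulum (d = 2), so the single d × d factorization reduces to one 2×2 solve. The defining feature of simplified Newton is visible in `midpoint_step`: the Jacobian, and hence the factorization, is computed once per step and reused for every iteration.

```python
import math

def f(y):
    # pendulum vector field, Hamiltonian H = p^2/2 - cos(q)
    q, p = y
    return (p, -math.sin(q))

def jac(y):
    # Jacobian of f, evaluated once per step (simplified Newton)
    q, p = y
    return ((0.0, 1.0), (-math.cos(q), 0.0))

def solve2(M, b):
    # Cramer's rule for a 2x2 system (stand-in for an LU solve)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det)

def midpoint_step(y, h, iters=8):
    # freeze the iteration matrix M = I - (h/2) J at y_n for the whole step
    J = jac(y)
    M = ((1.0 - 0.5 * h * J[0][0], -0.5 * h * J[0][1]),
         (-0.5 * h * J[1][0], 1.0 - 0.5 * h * J[1][1]))
    Y = y  # initial guess for the stage value
    for _ in range(iters):
        F = f(Y)
        # residual of the stage equation Y = y_n + (h/2) f(Y)
        r = (Y[0] - y[0] - 0.5 * h * F[0], Y[1] - y[1] - 0.5 * h * F[1])
        d = solve2(M, (-r[0], -r[1]))
        Y = (Y[0] + d[0], Y[1] + d[1])
    F = f(Y)
    return (y[0] + h * F[0], y[1] + h * F[1])

y = (1.0, 0.0)
H0 = 0.5 * y[1] ** 2 - math.cos(y[0])
for _ in range(1000):
    y = midpoint_step(y, 0.01)
H = 0.5 * y[1] ** 2 - math.cos(y[0])
# symplectic method: the energy error stays bounded over many steps
```

For s > 1 stages the stage equations couple, which is where the sd × sd system and the proposed (s+1)d-dimensional rewriting come in; the frozen-Jacobian structure is the same.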
Reproducibility, accuracy and performance of the Feltor code and library on parallel computer architectures
Feltor is a modular and free scientific software package. It allows
developing platform-independent code that runs on a variety of parallel
computer architectures ranging from laptop CPUs to multi-GPU distributed memory
systems. Feltor consists of both a numerical library and a collection of
application codes built on top of the library. Its main targets are two- and
three-dimensional drift- and gyro-fluid simulations, with discontinuous Galerkin
methods as the main numerical discretization technique. We observe that
numerical simulations of a recently developed gyro-fluid model produce
non-deterministic results in parallel computations. First, we show how we
restore accuracy and bitwise reproducibility algorithmically and
programmatically. In particular, we adopt an implementation of the exactly
rounded dot product based on long accumulators, which avoids accuracy losses
especially in parallel applications. However, reproducibility and accuracy
alone fail to indicate correct simulation behaviour. In fact, in the physical
model slightly different initial conditions lead to vastly different end
states. This behaviour translates to its numerical representation. Pointwise
convergence, even in principle, becomes impossible for long simulation times.
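Python's standard `math.fsum` plays a role analogous to the long-accumulator dot product adopted in Feltor: it returns the correctly rounded sum of its inputs regardless of operand order, whereas naive floating-point summation is order-dependent. A sketch of the effect (an illustration, not Feltor's implementation):

```python
import math

# the same three summands in two different orders
a = [1e16, -1e16, 1.0]
b = [1e16, 1.0, -1e16]

# naive left-to-right summation: in order b, adding 1.0 to 1e16
# is absorbed (1.0 is below the rounding threshold), so the result
# depends on the order of operations
sum_a, sum_b = sum(a), sum(b)

# correctly rounded summation: identical result in any order
fsum_a, fsum_b = math.fsum(a), math.fsum(b)
```

In a parallel reduction the operand order is exactly what varies from run to run, which is why an order-independent, exactly rounded sum restores bitwise reproducibility.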
In a second part, we explore important performance tuning considerations. We
identify latency and memory bandwidth as the main performance indicators of our
routines. Based on these, we propose a parallel performance model that predicts
the execution time of algorithms implemented in Feltor and test our model on a
selection of parallel hardware architectures. We are able to predict the
execution time with a relative error of less than 25% for problem sizes between
0.1 and 1000 MB. Finally, we find that the product of latency and bandwidth
gives a minimum array size per compute node to achieve a scaling efficiency
above 50% (both strong and weak).
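The latency-bandwidth criterion can be written down directly. Modelling a bandwidth-bound kernel on an array of S bytes as t(S) = t_lat + S/B, the fraction of time spent streaming data is (S/B) / t(S), which reaches 50% exactly when S = t_lat · B. A sketch with hypothetical hardware numbers (the latency and bandwidth values below are illustrative, not measurements from the paper):

```python
def exec_time(size_bytes, latency_s, bandwidth_bps):
    # simple performance model: fixed per-call latency plus streaming time
    return latency_s + size_bytes / bandwidth_bps

def efficiency(size_bytes, latency_s, bandwidth_bps):
    # fraction of the execution time spent on useful (bandwidth-bound) work
    t = exec_time(size_bytes, latency_s, bandwidth_bps)
    return (size_bytes / bandwidth_bps) / t

# hypothetical node: 10 us kernel-launch latency, 500 GB/s memory bandwidth
lat, bw = 10e-6, 500e9

# the latency-bandwidth product: minimum array size for 50% efficiency,
# here 5 MB per node
s_min = lat * bw
```

Below `s_min` the fixed latency dominates and scaling efficiency drops; well above it, efficiency approaches 1, e.g. `efficiency(10 * s_min, lat, bw)` is about 0.91 in this model.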