48 research outputs found

    Validating optimisations for chaotic simulations

    No full text
    It is non-trivial to optimise computations of chaotic systems: unless results are bit-reproducible, slightly perturbed simulations diverge exponentially over time due to the well-known butterfly effect. Two model setups that represent a chaotic system equally well will therefore show uncorrelated behaviour if integrated for long enough, which makes it challenging to check whether a given optimisation degrades model quality. Most models in computational fluid dynamics show chaotic behaviour. In this paper we focus on models of the atmosphere and ocean, which are vital for predictions of future weather and climate. Since forecast quality is usually limited by the available computational power, optimisation is highly desirable. We describe a new method for accepting or rejecting an optimised implementation of a reconfigurable design that simulates the dynamics of a chaotic system. We apply this method to reduce the numerical precision of stencil computations that can be used in an idealised ocean model to a minimal level, and show the performance improvements gained on an FPGA. The proposed method enables precision reduction such that the FPGA computes up to 9 times faster with 6 times lower energy consumption than an implementation on the same device with double-precision arithmetic, while ensuring that the optimised design retains acceptable numerical behaviour.
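    The acceptance idea can be illustrated with a short emulation on conventional hardware. The sketch below (not the paper's exact procedure) integrates a one-scale Lorenz '96 system, emulates a reduced-precision "optimised" run by rounding the significand after every time step, and accepts the optimisation only if its deviation from a double-precision reference stays within the spread of an ensemble of slightly perturbed double-precision runs; the model, parameters, perturbation size and acceptance criterion are all illustrative assumptions.

    import numpy as np

    F, N, DT, STEPS = 8.0, 40, 0.01, 500

    def tendency(x):
        # one-scale Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def reduce_precision(x, sig_bits):
        # crude emulation of a shorter significand (the exponent range is untouched)
        m, e = np.frexp(x)
        return np.ldexp(np.round(m * 2.0**sig_bits) / 2.0**sig_bits, e)

    def integrate(x, round_fn=lambda v: v):
        # forward Euler, applying the optional rounding operator after every step
        traj = [x]
        for _ in range(STEPS):
            x = round_fn(x + DT * tendency(x))
            traj.append(x)
        return np.array(traj)

    rng = np.random.default_rng(0)
    x0 = F + 0.1 * rng.standard_normal(N)

    reference = integrate(x0)                                     # double precision
    candidate = integrate(x0, lambda v: reduce_precision(v, 16))  # "optimised" run

    # an ensemble of perturbed double-precision runs defines the tolerable spread;
    # the perturbation amplitude stands in for an acceptable initial uncertainty
    ensemble = [integrate(x0 + 1e-4 * rng.standard_normal(N)) for _ in range(10)]
    spread = np.max([np.abs(m - reference).max(axis=1) for m in ensemble], axis=0)
    deviation = np.abs(candidate - reference).max(axis=1)

    accept = bool(np.all(deviation <= np.maximum(spread, 1e-8)))
    print("accept optimised design:", accept)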

    Chisel: Reliability- and Accuracy-Aware Optimization of Approximate Computational Kernels

    Get PDF
    The accuracy of an approximate computation is the distance between the result that the computation produces and the corresponding fully accurate result. The reliability of the computation is the probability that it will produce an acceptably accurate result. Emerging approximate hardware platforms provide approximate operations that, in return for reduced energy consumption and/or increased performance, exhibit reduced reliability and/or accuracy. We present Chisel, a system for reliability- and accuracy-aware optimization of approximate computational kernels that run on approximate hardware platforms. Given a combined reliability and/or accuracy specification, Chisel automatically selects approximate kernel operations to synthesize an approximate computation that minimizes energy consumption while satisfying its reliability and accuracy specification. We evaluate Chisel on five applications from the image processing, scientific computing, and financial analysis domains. The experimental results show that our implemented optimization algorithm enables Chisel to optimize our set of benchmark kernels to obtain energy savings from 8.7% to 19.8% compared to the fully reliable kernel implementations while preserving important reliability guarantees. Funding: National Science Foundation (U.S.) (Grants CCF-1036241, CCF-1138967, IIS-0835652); United States Dept. of Energy (Grant DE-SC0008923); United States Defense Advanced Research Projects Agency (Grants FA8650-11-C-7192, FA8750-12-2-0110, FA-8750-14-2-0004).
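    The core selection problem can be sketched in a few lines. The toy below uses an exhaustive search standing in for the paper's actual optimization algorithm: it picks an exact or approximate variant for each kernel operation so that total energy is minimised while kernel reliability, modelled here as the product of per-operation reliabilities, stays above a specification. The operation table and all numbers are illustrative assumptions.

    from itertools import product

    # per operation: (name, exact energy, approximate energy, reliability of the
    # approximate variant); the exact variant is assumed to be fully reliable
    ops = [
        ("mul1", 4.0, 2.5, 0.9999),
        ("add1", 1.0, 0.6, 0.9990),
        ("mul2", 4.0, 2.5, 0.9999),
        ("add2", 1.0, 0.6, 0.9990),
    ]
    RELIABILITY_SPEC = 0.998

    best = None
    for choice in product([False, True], repeat=len(ops)):   # True = use approximate
        energy = sum(ea if use else ee for (_, ee, ea, _), use in zip(ops, choice))
        reliability = 1.0
        for (_, _, _, r), use in zip(ops, choice):
            reliability *= r if use else 1.0
        if reliability >= RELIABILITY_SPEC and (best is None or energy < best[0]):
            best = (energy, reliability, choice)

    energy, reliability, choice = best
    print(f"energy={energy:.2f}, reliability={reliability:.4f}")
    print("operations mapped to approximate hardware:",
          [name for (name, *_), use in zip(ops, choice) if use])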

    Atmosphere and Ocean Modeling on Grids of Variable Resolution—A 2D Case Study

    Get PDF
    Grids of variable resolution are of great interest in atmosphere and ocean modeling as they offer a route to higher local resolution and improved solutions. On the other hand, changes in grid resolution are considered problematic because of the errors they create between the coarse and fine parts of a grid due to reflection and scattering of waves. On complex multidimensional domains these errors resist theoretical investigation and demand numerical experiments. Using a low-order hybrid continuous/discontinuous finite element model of the inviscid and viscous shallow-water equations, a numerical study is carried out that investigates the influence of grid refinement on critical features such as wave propagation, turbulent cascades and the representation of geostrophic balance. The refinement technique we use is static h-refinement, where additional grid cells are inserted in regions of interest known a priori. The numerical tests include planar and spherical geometry as well as flows with boundaries, and are chosen to address the impact of abrupt changes in resolution and the influence of the shape of the transition zone. For the specific finite element model under investigation, the simulations suggest that grid refinement does not deteriorate geostrophic balance or turbulent cascades, and that the shape of mesh transition zones appears to be less important than expected. However, our results show that static local refinement reduces the local error but not necessarily the global error, and that convergence properties with resolution are changed. Even our relatively simple tests illustrate that grid refinement has to go along with a simultaneous change of the parametrization schemes.
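    To make the refinement strategy concrete, the sketch below applies one level of static h-refinement to a structured grid of square cells: cells whose centres fall inside a region of interest, fixed a priori, are split into four children before the simulation starts. The grid, the circular refinement region and the cell representation are illustrative assumptions; the paper works with an unstructured finite element mesh.

    from dataclasses import dataclass

    @dataclass
    class Cell:
        x: float      # centre x
        y: float      # centre y
        h: float      # edge length

    def in_region_of_interest(c: Cell) -> bool:
        # refine a circular patch around (0.5, 0.5); chosen purely for illustration
        return (c.x - 0.5) ** 2 + (c.y - 0.5) ** 2 < 0.2 ** 2

    def refine(cells):
        out = []
        for c in cells:
            if in_region_of_interest(c):
                q = c.h / 4.0
                out += [Cell(c.x + dx, c.y + dy, c.h / 2.0)
                        for dx in (-q, q) for dy in (-q, q)]
            else:
                out.append(c)
        return out

    # coarse 8x8 base grid on the unit square
    H = 1.0 / 8.0
    base = [Cell((i + 0.5) * H, (j + 0.5) * H, H) for i in range(8) for j in range(8)]
    refined = refine(base)   # one level of refinement; call again for a second level
    print(len(base), "coarse cells ->", len(refined), "cells after refinement")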

    On the use of programmable hardware and reduced numerical precision in earth-system modeling

    No full text
    Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow the representation of floating-point numbers to be adjusted to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal-precision version of the model, and find that this level is very similar whether the model is integrated for very short or for long intervals. It is therefore useful to investigate model errors due to rounding for very short simulations (e.g., 50 time steps) in order to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that reducing precision with increasing forecast time, once model errors have already accumulated, is very promising. A speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced without a strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
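    The idea of lowering precision as forecast errors accumulate can be emulated on conventional hardware. The sketch below integrates a one-scale Lorenz '96 system (a simplification of the two-scale model used in the paper), rounds the significand after every step according to a step-dependent precision schedule, and reports the final difference to a double-precision run; the schedule, model and parameters are illustrative assumptions, not the paper's FPGA configuration.

    import numpy as np

    F, N, DT, STEPS = 8.0, 40, 0.01, 1000

    def tendency(x):
        # one-scale Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def round_significand(x, bits):
        # emulate a reduced significand by rounding to `bits` bits (exponent untouched)
        m, e = np.frexp(x)
        return np.ldexp(np.round(m * 2.0**bits) / 2.0**bits, e)

    def precision_schedule(step):
        # more bits early in the forecast, fewer once model error has grown;
        # the thresholds and bit counts are purely illustrative
        if step < 200:
            return 52      # close to double precision
        if step < 600:
            return 23      # single-precision-like significand
        return 10          # half-precision-like significand

    rng = np.random.default_rng(1)
    x = F + 0.1 * rng.standard_normal(N)
    reference = x.copy()

    for step in range(STEPS):
        x = round_significand(x + DT * tendency(x), precision_schedule(step))
        reference = reference + DT * tendency(reference)   # stays in double precision

    rms = float(np.sqrt(np.mean((x - reference) ** 2)))
    print("RMS difference to the double-precision run:", rms)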

    On the use of scale-dependent precision in Earth System modelling

    No full text
    Increasing the resolution of numerical models has played a large part in improving the accuracy of weather and climate forecasts in recent years. Until now, this has required the use of ever more powerful computers, the energy costs of which are becoming increasingly problematic. It has therefore been proposed that forecasters switch to using more efficient ‘reduced precision’ hardware capable of sacrificing unnecessary numerical precision to save costs. Here, an extended form of the Lorenz ’96 idealized model atmosphere is used to test whether more accurate forecasts could be produced by lowering numerical precision more at smaller spatial scales in order to increase the model resolution. Both a scale-dependent mixture of single- and half-precision – where numbers are represented with fewer bits of information on smaller spatial scales – and ‘stochastic processors’ – where random ‘bit-flips’ are allowed for small-scale variables – are emulated on conventional hardware. It is found that high-resolution parametrized models with scale-selective reduced precision yield better short-term and climatological forecasts than lower resolution parametrized models with conventional precision for a relatively small increase in computational cost. This suggests that a similar approach in real-world models could lead to more accurate and efficient weather and climate forecasts.
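    The scale-dependent mixture of precisions can be emulated with ordinary NumPy dtypes. In the sketch below, the large-scale X variables of a two-scale Lorenz '96 system are stored in single precision and the small-scale Y variables in half precision, with tendencies evaluated in double precision and results cast back after every forward-Euler step; the parameters, the integration scheme and this simple casting scheme are illustrative assumptions rather than the emulation used in the study, which also considers stochastic bit-flips.

    import numpy as np

    K, J = 8, 32                       # large-scale X variables, small-scale Y per X
    F, h, c, b = 10.0, 1.0, 10.0, 10.0
    DT, STEPS = 0.001, 1000

    def tendencies(X, Y):
        # dX_k/dt = X_{k-1}(X_{k+1} - X_{k-2}) - X_k + F - (h c / b) * sum_j Y_{j,k}
        coupling = Y.reshape(K, J).sum(axis=1)
        dX = np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F - (h * c / b) * coupling
        # dY_j/dt = c b Y_{j+1}(Y_{j-1} - Y_{j+2}) - c Y_j + (h c / b) X_k  (Y cyclic over K*J)
        dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
              - c * Y + (h * c / b) * np.repeat(X, J))
        return dX, dY

    rng = np.random.default_rng(2)
    X = (F * (0.5 + 0.1 * rng.standard_normal(K))).astype(np.float32)   # single precision
    Y = (0.1 * rng.standard_normal(K * J)).astype(np.float16)           # half precision

    for _ in range(STEPS):
        dX, dY = tendencies(X.astype(np.float64), Y.astype(np.float64))
        X = (X.astype(np.float64) + DT * dX).astype(np.float32)   # large scale: 32 bits
        Y = (Y.astype(np.float64) + DT * dY).astype(np.float16)   # small scale: 16 bits

    print("X (float32):", np.round(X.astype(np.float64), 3))
    print("Y range (float16):", float(Y.min()), float(Y.max()))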
