
    Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling makes it an attractive target for hardware acceleration using reconfigurable computing. The performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare the statistical behaviour of reduced-precision CPU implementations, use these comparisons to guide reconfigurable designs of a chaotic system, and then analyse the accuracy, performance and power efficiency of the resulting implementations. Our results show that, with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
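
    A minimal sketch of the comparison step described above: the Hellinger distance between the histogrammed output of two runs of the same system. The toy Gaussian samples stand in for trajectories of a chaotic variable; none of the names or data come from the paper.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.00, 100_000)   # stand-in for a double-precision run
red = rng.normal(0.0, 1.05, 100_000)   # stand-in for a reduced-precision run

# Histogram both runs on a common support and normalise to probabilities.
bins = np.linspace(-5.0, 5.0, 51)
p, _ = np.histogram(ref, bins=bins)
q, _ = np.histogram(red, bins=bins)
p = p / p.sum()
q = q / q.sum()

print(f"Hellinger distance: {hellinger(p, q):.4f}")  # 0 = identical, 1 = disjoint
```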

    Number formats, error mitigation, and scope for 16‐bit arithmetics in weather and climate modeling analyzed with a shallow water model

    The need for high-precision calculations with 64-bit or 32-bit floating-point arithmetic for weather and climate models is questioned. Lower-precision numbers can accelerate simulations and are increasingly supported by modern computing hardware. This paper investigates the potential of 16-bit arithmetic when applied within a shallow water model that serves as a medium-complexity weather or climate application. There are several 16-bit number formats that can potentially be used (IEEE half precision, BFloat16, posits, integer, and fixed-point). It is evident that a simple change to 16-bit arithmetic will not be possible for complex weather and climate applications, as it would degrade model results through intolerable rounding errors that cause a stalling of model dynamics or model instabilities. However, if the posit number format is used as an alternative to standard floating-point numbers, the model degradation can be significantly reduced. Furthermore, mitigation methods, such as rescaling, reordering, and mixed precision, are available to make model simulations resilient against a precision reduction. If mitigation methods are applied, 16-bit floating-point arithmetic can be used successfully within the shallow water model. The results show the potential of 16-bit formats for at least parts of complex weather and climate models, where rounding errors would be entirely masked by initial condition, model, or discretization error.
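
    The stalling failure mode and the reordering mitigation mentioned above are straightforward to demonstrate with NumPy's IEEE half precision; the state value, increment and step count below are illustrative.

```python
import numpy as np

state = np.float16(2048.0)   # spacing between float16 numbers here is 2.0
inc = np.float16(0.5)        # tendency below half a unit in the last place

# Naive: apply one increment per time step; every addition rounds back
# to 2048.0, so the integration stalls.
s_naive = state
for _ in range(100):
    s_naive = np.float16(s_naive + inc)

# Reordered: sum the increments first (exact in float16 up to 50.0),
# then apply the total once.
total = np.float16(0.0)
for _ in range(100):
    total = np.float16(total + inc)
s_reordered = np.float16(state + total)

print(s_naive, s_reordered)  # 2048.0 vs the expected 2098.0
```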

    Atmosphere and Ocean Modeling on Grids of Variable Resolution—A 2D Case Study

    Grids of variable resolution are of great interest in atmosphere and ocean modeling as they offer a route to higher local resolution and improved solutions. On the other hand, changes in grid resolution are considered problematic because of the errors they create between coarse and fine parts of a grid due to reflection and scattering of waves. On complex multidimensional domains these errors resist theoretical investigation and demand numerical experiments. With a low-order hybrid continuous/discontinuous finite element model of the inviscid and viscous shallow-water equations, a numerical study is carried out that investigates the influence of grid refinement on critical features such as wave propagation, turbulent cascades and the representation of geostrophic balance. The refinement technique we use is static h-refinement, in which additional grid cells are inserted in regions of interest known a priori. The numerical tests include planar and spherical geometry as well as flows with boundaries, and are chosen to address the impact of abrupt changes in resolution and the influence of the shape of the transition zone. For the specific finite element model under investigation, the simulations suggest that grid refinement does not deteriorate geostrophic balance or turbulent cascades, and that the shape of mesh transition zones appears to be less important than expected. However, our results show that static local refinement is able to reduce the local error, but not necessarily the global error, and that convergence properties with resolution are changed. Our relatively simple tests already illustrate that grid refinement has to go along with a simultaneous change of the parametrization schemes.
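
    The refinement step itself can be sketched in one dimension: every cell that intersects an a-priori region of interest is split at its midpoint, once per refinement level. The function below is an illustrative reconstruction, not the paper's finite element code.

```python
import numpy as np

def refine(edges, region, levels=1):
    """Static h-refinement: split cells overlapping `region` = (a, b)."""
    for _ in range(levels):
        new_edges = [edges[0]]
        for left, right in zip(edges[:-1], edges[1:]):
            if right > region[0] and left < region[1]:  # cell overlaps region
                new_edges.append(0.5 * (left + right))  # insert midpoint
            new_edges.append(right)
        edges = np.array(new_edges)
    return edges

coarse = np.linspace(0.0, 1.0, 11)                 # 10 uniform cells
fine = refine(coarse, region=(0.4, 0.6), levels=2)
print(np.diff(fine))   # cell widths, with abrupt transitions at 0.4 and 0.6
```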

    Posits as an alternative to floats for weather and climate models

    Posit numbers, a recently proposed alternative to floating-point numbers, are claimed to have smaller arithmetic rounding errors in many applications. By studying weather and climate models of low and medium complexity (the Lorenz system and a shallow water model), we present benefits of posits compared to floats at 16 bits. As a standardised posit processor does not yet exist, we emulate posit arithmetic on a conventional CPU. Using the shallow water model, forecasts based on 16-bit posits with 1 or 2 exponent bits are clearly more accurate than half-precision floats. We therefore propose 16 bits with 2 exponent bits as a standard posit format, as its wide dynamic range of 32 orders of magnitude provides great potential for many weather and climate models. Although the focus is on geophysical fluid simulations, the results are also meaningful and promising for reduced-precision posit arithmetic in the wider field of computational fluid dynamics.
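
    The quoted dynamic range follows directly from the standard posit format parameters. A back-of-envelope check, assuming the usual definitions useed = 2^(2^es) and maxpos = useed^(n-2); this computes format properties only and implements no posit arithmetic.

```python
import math

n, es = 16, 2                      # the proposed standard format
useed = 2 ** (2 ** es)             # 16
maxpos = useed ** (n - 2)          # 16^14 = 2^56 ≈ 7.2e16
minpos = 1.0 / maxpos              # ≈ 1.4e-17

print(f"maxpos ≈ {maxpos:.2e}, minpos ≈ {minpos:.2e}")
# on the order of the ~32 orders of magnitude quoted above
print(f"decades between minpos and maxpos ≈ {math.log10(maxpos / minpos):.1f}")
```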

    A study of reduced numerical precision to make superparameterization more competitive using a hardware emulator in the OpenIFS model

    The use of reduced numerical precision to reduce the computing costs of the cloud-resolving model in superparameterised simulations of the atmosphere is investigated. An approach to identify the optimal level of precision for many different model components is presented and a detailed analysis of precision is performed. This is non-trivial for a complex model that shows chaotic behaviour, such as the cloud-resolving model in this paper. The results of the reduced-precision analysis provide valuable information for the quantification of model uncertainty for individual model components. The precision analysis is also used to identify model parts that are of less importance, thus enabling a reduction of model complexity. It is shown that the precision analysis can be used to improve model efficiency for simulations in both double precision and reduced precision. Model simulations are performed with a superparameterised single-column model version of the OpenIFS model that is forced by observational datasets. A software emulator is used to mimic the use of reduced-precision floating-point arithmetic in the simulations.
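
    The emulator idea, running otherwise unchanged double-precision code "as if" at reduced precision, can be mimicked by rounding the significand of float64 numbers. A minimal sketch, not the emulator used in the study: round-to-nearest with crude tie handling, and Inf/NaN left untreated.

```python
import numpy as np

def reduce_precision(x, sbits):
    """Round float64 value(s) to `sbits` explicit significand bits (sbits < 52)."""
    x = np.asarray(x, dtype=np.float64)
    if sbits >= 52:
        return x.copy()                        # already full double precision
    shift = np.uint64(52 - sbits)
    half = np.uint64(1) << (shift - np.uint64(1))
    bits = x.view(np.uint64)
    bits = ((bits + half) >> shift) << shift   # round-to-nearest in the bit pattern
    return bits.view(np.float64)

x = np.float64(1.0) / 3.0
for sbits in (23, 10):                         # single- and half-precision significands
    y = float(reduce_precision(x, sbits))
    print(f"{sbits:2d} bits: {y:.17f} (error {abs(y - x):.1e})")
```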

    Improving weather forecast skill through reduced precision data assimilation

    A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain due to the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred by reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, computational resources can be redistributed towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz '96 toy atmospheric model and the ensemble square root filter. The system is run at double, single and half precision (the latter using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the observation window is addressed. Then, by reinvesting the computational resources saved by reducing precision into the ensemble size, assimilation error can be reduced for (hypothetically) no extra cost. This results in increased forecast skill with respect to double-precision assimilation.
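
    The forecast half of such an experiment is easy to sketch: the same Lorenz '96 integration repeated at three precisions. The assimilation step (the ensemble square root filter) is not reproduced here, and the forward Euler stepping, forcing and run length are illustrative choices.

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def forecast(x0, dtype, steps=500, dt=0.005):
    x = x0.astype(dtype)
    for _ in range(steps):                     # forward Euler, for brevity
        x = (x + dtype(dt) * l96_tendency(x)).astype(dtype)
    return x

rng = np.random.default_rng(1)
x0 = 8.0 + rng.standard_normal(40)             # 40 variables near the attractor

ref = forecast(x0, np.float64)
for dtype in (np.float32, np.float16):
    rmse = np.sqrt(np.mean((forecast(x0, dtype) - ref) ** 2))
    print(f"{np.dtype(dtype).name}: forecast RMSE vs double = {rmse:.3f}")
```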

    Reliable low precision simulations in land surface models

    Weather and climate models must continue to increase in both resolution and complexity for forecasts to become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever-increasing model complexity, in addition to increases in computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time scales. These processes are difficult to represent using low precision because their time increments are systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly at low precision by splitting the model into a small higher-precision part and a low-precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. The same technique is also applied to a full-complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of them can likely be overcome with a straightforward and physically motivated application of reduced precision.
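
    The splitting strategy can be illustrated with a toy slow process. Half precision is used below (rather than the single precision discussed above) to make the effect visible in a few lines, and the constant warming increment is an illustrative stand-in for the soil heat diffusion tendency.

```python
import numpy as np

T_low = np.float16(280.0)      # state kept entirely in low precision
T_split = np.float64(280.0)    # state kept in a small higher-precision variable
inc = np.float16(0.01)         # slow tendency per step, computed in low precision

for _ in range(10_000):
    T_low = np.float16(T_low + inc)       # increment rounds to zero change: stalls
    T_split = T_split + np.float64(inc)   # same low-precision increment accumulates

print(f"all low precision: {float(T_low):.2f} K")   # still 280.00
print(f"split precision  : {T_split:.2f} K")        # ≈ 380 K, as expected
```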
