55 research outputs found

    Lower precision for higher accuracy: precision and resolution exploration for shallow water equations

    Accurate forecasts of future climate with numerical models of the atmosphere and ocean are of vital importance. However, forecast quality is often limited by the available computational power. This paper investigates the acceleration of a C-grid shallow water model through the use of reduced precision targeting FPGA technology. Using a double-gyre scenario, we show that the mantissa length of variables can be reduced to 14 bits without affecting the accuracy beyond the error inherent in the model. Our reduced-precision FPGA implementation runs 5.4 times faster than a double-precision FPGA implementation and 12 times faster than a multi-threaded CPU implementation. Moreover, our reduced-precision FPGA implementation uses 39 times less energy than the CPU implementation and can compute a 100×100 grid for the same energy that the CPU implementation requires for a 29×29 grid.
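    The mantissa-shortening idea above can be emulated in software. The sketch below is a minimal illustration (not the paper's FPGA implementation): it rounds float64 values to a chosen number of mantissa bits using the standard frexp/ldexp decomposition; the function name and the 14-bit setting are taken from the abstract, everything else is an assumption.

    ```python
    import numpy as np

    def round_mantissa(x, bits):
        """Round float64 values to a reduced mantissa length (in bits),
        emulating reduced-precision arithmetic in software."""
        x = np.asarray(x, dtype=np.float64)
        m, e = np.frexp(x)                    # x = m * 2**e, with 0.5 <= |m| < 1
        m = np.round(m * 2**bits) / 2**bits   # keep `bits` bits of mantissa
        return np.ldexp(m, e)                 # reassemble m * 2**e

    print(round_mantissa(np.pi, 14))  # → 3.1416015625
    ```

    With 14 mantissa bits the relative rounding error is bounded by 2⁻¹⁵, which is the regime the abstract reports as indistinguishable from the model's inherent error.
    
    
    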

    Reduced-precision parametrization: lessons from an intermediate-complexity atmospheric model

    Reducing numerical precision can save computational cost that can then be reinvested for more useful purposes. This study considers the effects of reducing precision in the parametrizations of an intermediate-complexity atmospheric model (SPEEDY). We find that the difference between double-precision and reduced-precision parametrization tendencies is proportional to the expected machine rounding error if individual timesteps are considered. However, if reduced precision is used in simulations that are compared to double-precision simulations, a range of precision is found where differences are approximately the same for all simulations. Here, rounding errors are small enough not to directly perturb the model dynamics, but can perturb conditional statements in the parametrizations (such as convection active/inactive), leading to similar error growth for all runs. For lower precision, simulations are perturbed significantly. Precision cannot be constrained without some quantification of the uncertainty. The inherent uncertainty in numerical weather and climate models is often explicitly represented in simulations by stochastic schemes that randomly perturb the parametrizations. A commonly used scheme is stochastic perturbation of parametrization tendencies (SPPT). A strong test of whether a precision is acceptable is whether a low-precision ensemble produces the same probability distribution as a double-precision ensemble in which the only difference between ensemble members is the model uncertainty (i.e., the random seed in SPPT). Tests with SPEEDY suggest a precision as low as 3.5 decimal places (equivalent to half precision) could be acceptable, which is surprisingly close to the lowest precision that produces similar error growth in the experiments without SPPT mentioned above. Minor changes to model code to express variables as anomalies rather than absolute values reduce rounding errors and low-precision biases, allowing even lower precision to be used. These results provide a pathway for implementing reduced-precision parametrizations in more complex weather and climate models.
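    The SPPT scheme referenced above multiplies the net parametrization tendency by a random factor. The following is a deliberately simplified sketch of that idea: real SPPT uses spatially and temporally correlated random patterns, whereas here a single clipped Gaussian scalar stands in for the pattern; the function name and the σ value are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def sppt_perturb(tendency, sigma=0.3, rng=np.random.default_rng(0)):
        """Stochastically Perturbed Parametrization Tendencies (SPPT),
        heavily simplified: scale the net parametrization tendency by
        (1 + r), where r is a random number clipped to [-1, 1] so the
        perturbed tendency never flips sign."""
        r = np.clip(rng.normal(0.0, sigma), -1.0, 1.0)
        return tendency * (1.0 + r)
    ```

    The clipping to [-1, 1] mirrors the bounded perturbations used operationally; it guarantees the perturbed tendency stays between 0 and twice the unperturbed value.
    
    
    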

    Scale-selective precision for weather and climate forecasting

    Attempts to include the vast range of length scales and physical processes at play in Earth's atmosphere push weather and climate forecasters to build and more efficiently utilize some of the most powerful computers in the world. One possible avenue for increased efficiency is in using less precise numerical representations of numbers. If computing resources saved can be reinvested in other ways (e.g., increased resolution or ensemble size), a reduction in precision can lead to an increase in forecast accuracy. Here we examine reduced numerical precision in the context of ECMWF's Open Integrated Forecast System (OpenIFS) model. We posit that less numerical precision is required when solving the dynamical equations for shorter length scales while retaining accuracy of the simulation. Transformations into spectral space, as found in spectral models such as OpenIFS, enact a length-scale decomposition of the prognostic fields. Utilizing this, we introduce a reduced-precision emulator into the spectral-space calculations and optimize the precision necessary to achieve forecasts comparable with double and single precision. On weather forecasting time scales, larger length scales require higher numerical precision than smaller length scales. On decadal time scales, half precision is still sufficient for everything except the global mean quantities.
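    The scale-selective idea can be sketched in a few lines: transform a field to spectral space, keep full precision for the large scales (low wavenumbers), and round the small scales to a short mantissa. This is a toy 1-D analogue using an FFT rather than the spherical-harmonic transforms of OpenIFS; the function name, the cutoff, and the 10-bit setting are illustrative assumptions.

    ```python
    import numpy as np

    def _round_bits(z, bits):
        """Round the real and imaginary parts of z to `bits` mantissa bits."""
        m, e = np.frexp(z.real)
        re = np.ldexp(np.round(m * 2**bits) / 2**bits, e)
        m, e = np.frexp(z.imag)
        im = np.ldexp(np.round(m * 2**bits) / 2**bits, e)
        return re + 1j * im

    def scale_selective_round(field, cutoff, low_bits=10):
        """Keep full float64 precision for wavenumbers below `cutoff`;
        round the smaller scales to `low_bits` mantissa bits."""
        spec = np.fft.rfft(field)
        k = np.arange(spec.size)
        spec[k >= cutoff] = _round_bits(spec[k >= cutoff], low_bits)
        return np.fft.irfft(spec, n=field.size)
    ```

    A wavenumber-1 sine wave passes through essentially unchanged, since only the coefficients above the cutoff lose precision.
    
    
    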

    Accelerating high-resolution weather models with deep-learning hardware

    The next generation of weather and climate models will have an unprecedented level of resolution and model complexity, and running these models efficiently will require taking advantage of future supercomputers and heterogeneous hardware. In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at double, single and half precision. In particular, we investigate the potential use of the NVIDIA Tensor Core, a mixed-precision matrix-matrix multiplier mainly developed for use in deep learning, to accelerate the calculation of the Legendre transforms in the Integrated Forecasting System (IFS), one of the leading global weather forecast models. In the IFS, the Legendre transform is one of the most expensive model components and dominates the computational cost for simulations at very high resolution. We investigate the impact of mixed-precision arithmetic in IFS simulations of operational complexity through software emulation. Through a targeted but minimal use of double-precision arithmetic, we are able to use either half-precision arithmetic or mixed half/single-precision arithmetic for almost all of the calculations in the Legendre transform without affecting forecast skill.
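    The Tensor-Core mode described above takes half-precision inputs and accumulates products at higher precision. A minimal software emulation of that behaviour (an assumption for illustration, not the paper's emulator) rounds both operands to float16 and performs the matrix multiply in float32:

    ```python
    import numpy as np

    def tensorcore_matmul(a, b):
        """Emulate Tensor-Core-style mixed precision: operands rounded
        to float16, products and sums computed in float32."""
        a16 = a.astype(np.float16).astype(np.float32)
        b16 = b.astype(np.float16).astype(np.float32)
        return a16 @ b16  # float32 matmul on the rounded operands
    ```

    For well-scaled inputs the dominant error is the float16 rounding of the operands (relative error about 2⁻¹¹), which is the error regime the abstract reports as acceptable for most of the Legendre transform.
    
    
    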

    Clinical Chemistry: Principles, Procedures, Correlations

    xviii, ill.; 604 pages; 30 cm

    Unexpected Quasi‐Axial Conformer in Thermally Activated Delayed Fluorescence DMAC‐TRZ, Pushing Green OLEDs to Blue

    Hidden photophysics is elucidated in the very well-known thermally activated delayed fluorescence (TADF) emitter DMAC-TRZ, a molecule that, based on its structure, was considered to have only one structural conformation. However, based on experimental and computational studies, two conformers, a quasi-axial (QA) and a quasi-equatorial (QE), are found, and the effect of their co-existence on both optical and electrical excitation is explored. The relatively small population of the QA conformer has a disproportionate effect because of its strong local excited-state character. The energy transfer efficiency from the QA to the QE conformer is high, even at low concentrations, and depends on the host environment. The currently accepted triplet energy of DMAC-TRZ is shown to originate from the QA conformer, completely changing the understanding of DMAC-TRZ. The contribution of the QA conformer in devices helps to explain the good performance of the material in non-doped organic light-emitting diodes (OLEDs). Moreover, hyperfluorescence (HF) devices using the v-DABNA emitter show direct energy transfer from the QA conformer to v-DABNA, explaining the relatively improved Förster resonance energy transfer efficiency compared to similar HF systems. Highly efficient OLEDs in which green light (TADF-only devices) is converted to blue light (HF devices), with the maximum external quantum efficiency remaining close to 30%, are demonstrated.