    Classification of the difficulty in accelerating problems using GPUs

    Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it can be difficult for those new to GPGPU to judge how hard a particular problem will be to accelerate, or to find appropriate optimisation guidance. Drawing on lessons learned from accelerating a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that assist in evaluating how difficult a problem will be to accelerate on GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in evaluating the difficulty of GPU acceleration. The framework also provides directed guidance on suggested optimisations and required knowledge based on a problem's classification, which has been demonstrated for the problems listed above. It is anticipated that this framework, or a derivative of it, will prove a useful resource for new or novice GPGPU developers evaluating potential problems for GPU acceleration.
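    As a rough illustration of how such attributes could serve as difficulty indicators, the sketch below scores each attribute on a three-level scale and buckets the total into a coarse difficulty class. The attribute names follow the abstract, but the scoring scale, thresholds, and class labels are assumptions made for this example rather than the paper's actual framework.

```cuda
// Illustrative sketch only: a three-level score per attribute and a simple
// aggregate rating. The attribute names follow the abstract; the scoring
// scale, thresholds, and example values are assumptions for this example.
#include <cstdio>

enum Level { GOOD = 0, MODERATE = 1, POOR = 2 };  // lower = easier to accelerate

struct ProblemAttributes {
    Level inherentParallelism;               // abundant independent work?
    Level branchDivergence;                  // threads in a warp taking different paths?
    Level problemSize;                       // enough work to saturate the GPU?
    Level requiredComputationalParallelism;  // how much of the work must run in parallel
    Level memoryAccessRegularity;            // coalesced, predictable accesses?
    Level dataTransferOverhead;              // host<->device traffic relative to compute
    Level threadCooperation;                 // shared memory / synchronisation needs
};

// Sum the per-attribute scores and bucket the result into a coarse
// difficulty class (thresholds are purely illustrative).
const char* classifyDifficulty(const ProblemAttributes& a) {
    int score = a.inherentParallelism + a.branchDivergence + a.problemSize +
                a.requiredComputationalParallelism + a.memoryAccessRegularity +
                a.dataTransferOverhead + a.threadCooperation;
    if (score <= 4)  return "straightforward";
    if (score <= 9)  return "moderate";
    return "hard";
}

int main() {
    // Hypothetical scoring of a radix-sort-like problem with heavy thread cooperation.
    ProblemAttributes radixSort{GOOD, MODERATE, GOOD, GOOD, MODERATE, GOOD, POOR};
    std::printf("estimated difficulty: %s\n", classifyDifficulty(radixSort));
    return 0;
}
```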

    Scaling Coupled Climate Models to Exascale: OpenACC-enabled EC-Earth3 Earth System Model

    Climate change due to increasing anthropogenic greenhouse gases and land-surface change is currently one of the most pressing environmental concerns. It threatens ecosystems and human societies. However, its impact on the economy and our living standards depends largely on our ability to anticipate its effects and take appropriate action. Earth System Models (ESMs), such as EC-Earth, can be used to provide society with information on the future climate. EC-Earth3 generates reliable predictions and projections of global climate change, which are a prerequisite for developing national adaptation and mitigation strategies. This project investigates methods to enhance the parallel capabilities of EC-Earth3 by offloading bottleneck routines to GPUs and Intel Xeon Phi coprocessors. Gaining a full understanding of climate change at a regional scale will require EC-Earth3 to be run at a much higher spatial resolution (T3999, roughly 5 km) than is currently feasible. It is envisaged that the work outlined in this project will provide climate scientists with valuable data for simulations planned for future exascale systems.
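    The project itself targets directive-based offloading (OpenACC) of bottleneck routines, which avoids writing explicit GPU kernels. As a rough sketch of the underlying idea, moving a hot per-grid-point loop onto the accelerator, the CUDA analogue below offloads a stand-in update loop; the routine name, array sizes, and placeholder arithmetic are assumptions for illustration only and are not taken from EC-Earth3.

```cuda
// Hand-written CUDA analogue of offloading a "bottleneck" loop to the GPU.
// The actual project uses OpenACC directives; everything named here is assumed.
#include <cuda_runtime.h>

// A stand-in for a per-grid-point update routine from a model component.
__global__ void update_field(const double* in, double* out, double coeff, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = in[i] + coeff * in[i] * in[i];   // placeholder physics
    }
}

int main() {
    const int n = 1 << 20;
    double *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(double));
    cudaMalloc(&d_out, n * sizeof(double));
    cudaMemset(d_in, 0, n * sizeof(double));      // stand-in for copying model state in

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    update_field<<<blocks, threads>>>(d_in, d_out, 0.5, n);
    cudaDeviceSynchronize();

    // ... results would be copied back and handed to the rest of the (CPU) model ...
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```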

    Enable High-resolution, Real-time Ensemble Simulation and Data Assimilation of Flood Inundation using Distributed GPU Parallelization

    Numerical modeling of the intensity and evolution of flood events is affected by multiple sources of uncertainty, such as precipitation and land surface conditions. To quantify and curb these uncertainties, an ensemble-based simulation and data assimilation model for pluvial flood inundation is constructed. The shallow water equations are decoupled in the x and y directions, and the inertial form of the Saint-Venant equation is chosen to enable fast computation. The probability distributions of the input and output factors are described using Monte Carlo samples. A particle filter is then incorporated to assimilate hydrological observations and improve prediction accuracy. To achieve high-resolution, real-time ensemble simulation, heterogeneous computing based on CUDA (compute unified device architecture) and a distributed-storage multi-GPU (graphics processing unit) system is used. Multiple optimization techniques are employed to ensure the parallel efficiency and scalability of the simulation program. Taking an urban area of Fuzhou, China as an example, a model with a 3-m spatial resolution and 4.0 million units is constructed, and 8 Tesla P100 GPUs are used for the parallel calculation of 96 model instances. Under these settings, the ensemble simulation of a 1-hour hydraulic process takes 2.0 minutes, an estimated speedup of 2680 compared with a single-threaded CPU run. The results indicate that the particle filter effectively constrains simulation uncertainty while providing confidence intervals for key hydrological elements such as streamflow, submerged area, and submerged water depth. The presented approaches show promising capabilities in handling the uncertainties in flood modeling as well as enhancing prediction efficiency.
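    The decoupled, inertial form of the Saint-Venant equations mentioned above is typically advanced with an explicit per-face flux update, which maps naturally onto one GPU thread per cell face. The CUDA kernel below sketches an x-direction update of that general kind, using a common local-inertia discretisation with a semi-implicit Manning friction term; the variable names, grid layout, and discretisation details are assumptions for illustration and may differ from the scheme used in the paper.

```cuda
// Sketch of an x-direction local-inertia flux update (one thread per cell face).
// Layout, names, and the discretisation details are illustrative assumptions.
__global__ void update_flux_x(const double* z,   // bed elevation   [ny*nx]
                              const double* h,   // water depth     [ny*nx]
                              double*       qx,  // unit discharge at x-faces
                              double n_mann, double g, double dt, double dx,
                              int nx, int ny) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // face between (i, j) and (i+1, j)
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i >= nx - 1 || j >= ny) return;

    int idx  = j * nx + i;
    int idxE = idx + 1;

    // Water-surface elevations on both sides of the face.
    double etaW = z[idx]  + h[idx];
    double etaE = z[idxE] + h[idxE];

    // Effective flow depth at the face; skip dry faces.
    double hflow = fmax(etaW, etaE) - fmax(z[idx], z[idxE]);
    if (hflow <= 1e-6) { qx[idx] = 0.0; return; }

    double slope = (etaE - etaW) / dx;
    double q     = qx[idx];

    // Explicit inertial update with semi-implicit Manning friction.
    qx[idx] = (q - g * hflow * dt * slope) /
              (1.0 + g * dt * n_mann * n_mann * fabs(q) / pow(hflow, 7.0 / 3.0));
}
```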

    Integrated High-Resolution Modeling for Operational Hydrologic Forecasting

    Current advances in Earth-sensing technologies, physically-based modeling, and computational processing offer the promise of a major revolution in hydrologic forecasting, with profound implications for the management of water resources and protection from related disasters. However, access to the necessary capabilities for managing information from heterogeneous sources, and for deploying it in sufficiently robust modeling engines, remains the province of large governmental agencies. Moreover, even within this type of centralized operation, success is still challenged by the sheer computational complexity of overcoming uncertainty in the estimation of parameters and initial conditions in large-scale or high-resolution models. In this dissertation we seek to facilitate access to hydrometeorological data products from various U.S. agencies and to advanced watershed modeling tools through the implementation of a lightweight GIS-based software package. Accessible data products currently include gauge, radar, and satellite precipitation; stream discharge; distributed soil moisture and snow cover; and multi-resolution weather forecasts. Additionally, we introduce a suite of open-source methods aimed at the efficient parameterization and initialization of complex geophysical models in contexts of high uncertainty, scarce information, and limited computational resources. The developed products in this suite include: 1) model calibration based on state-of-the-art ensemble evolutionary Pareto optimization, 2) automatic parameter estimation boosted through the incorporation of expert criteria, 3) data assimilation that hybridizes particle smoothing and variational strategies, 4) model state compression by means of optimized clustering, 5) high-dimensional stochastic approximation of watershed conditions through a novel lightweight Gaussian graphical model, and 6) simultaneous estimation of model parameters and states for hydrologic forecasting applications. Each of these methods was tested using established distributed, physically-based hydrologic modeling engines (VIC and the DHSVM) applied to U.S. watersheds of different sizes, from a small highly-instrumented catchment in Pennsylvania to the basin of the Blue River in Oklahoma. A series of experiments demonstrated statistically significant improvements in the predictive accuracy of the proposed methods relative to traditional approaches. Taken together, these accessible and efficient tools can therefore be integrated within various model-based workflows for complex operational applications in water resources and beyond.
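    Several of the listed methods build on particle-based data assimilation. As a minimal, generic illustration of one building block of such schemes, the host-side sketch below weights an ensemble of particles against a single observation with a Gaussian likelihood and then applies systematic resampling; the names, the likelihood form, and the toy values are assumptions for this example and do not reproduce the hybrid particle-variational method developed in the dissertation.

```cuda
// Host-side sketch (no GPU code needed here): Gaussian-likelihood weighting
// plus systematic resampling, a common building block of particle-based
// assimilation. All names and values are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Resample particle indices so that high-weight particles are duplicated.
std::vector<int> systematic_resample(const std::vector<double>& w, std::mt19937& rng) {
    int n = static_cast<int>(w.size());
    std::vector<double> cdf(n);
    double total = 0.0;
    for (int i = 0; i < n; ++i) { total += w[i]; cdf[i] = total; }

    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double u = u01(rng) / n;                 // single random offset in [0, 1/n)
    std::vector<int> idx(n);
    int j = 0;
    for (int i = 0; i < n; ++i) {
        double target = u + static_cast<double>(i) / n;
        while (cdf[j] < target * total) ++j; // "total" normalises the raw weights
        idx[i] = j;
    }
    return idx;
}

int main() {
    std::mt19937 rng(42);
    std::vector<double> predicted = {2.1, 3.4, 2.9, 5.0};  // per-particle forecasts
    double observed = 3.0, sigma = 0.5;                    // gauge value, error std

    // Weight each particle by how well its forecast matches the observation.
    std::vector<double> w(predicted.size());
    for (size_t i = 0; i < predicted.size(); ++i) {
        double d = predicted[i] - observed;
        w[i] = std::exp(-0.5 * d * d / (sigma * sigma));
    }
    for (int k : systematic_resample(w, rng)) std::printf("kept particle %d\n", k);
    return 0;
}
```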

    Implementing uncertainty analysis in water resources assessment and planning

    The main objective of the project was to contribute to the incorporation of uncertainty assessments in practical water resource decision-making in South Africa. There are three main components to this objective. The first is the quantification of realistic levels of uncertainty that are as low as possible given the available information (reducing uncertainty). The second is the availability of tools to implement uncertainty analysis across the broad spectrum of data analysis and modelling platforms that form part of practical water resources assessment (including hydrological and water resources yield models). The third relates to the issue of using uncertain information in the process of making decisions about the design, development or operation of water resources systems. The latter includes social, political and economic uncertainties as well as the hydrological uncertainties that are directly addressed in this report. None of these are independent, and all are associated with the fundamental issue that all of the role players should understand the key concepts of uncertainty and that virtually all of the information we use to make decisions is uncertain. One of the major challenges in this project, as well as in the previous WRC-supported project on uncertainty methods, was the lack of understanding of some of the key issues, or a lack of appreciation of the importance of uncertainty in all water resources decision-making.