
    Weather Projections and Dynamical Downscaling for the Republic of Panama: Evaluation of Implementation Methods via GPGPU Acceleration

    Climate change could have a critical impact on the Republic of Panama, where a major segment of the economy depends on the operation of the Panama Canal. New capabilities for targeted research on climate change impacts in Panama are therefore being established, including a new GPU-cluster infrastructure called Iberogun, built around two DGX-1 servers (providing 16 NVIDIA Tesla P100 GPUs in total). This infrastructure will be used to evaluate potential climate models and models of extreme weather events. In this review we therefore present an evaluation of GPGPU (general-purpose graphics processing unit, hereafter abbreviated GPU) implementation methods for the study of weather projections and dynamical downscaling in the Republic of Panama. Different methods are discussed, including domain-specific languages (DSLs), directive-based porting methods, granularity optimization methods, and memory layout transformation methods. One approach that has previously yielded interesting results is discussed further: a directive-based code transformation method called 'Hybrid Fortran' that permits a high-performance GPU port of structured-grid Fortran codes. Finally, we suggest a method akin to previous climate change investigations carried out for the Republic of Panama, but accelerated via GPU capabilities.

    We acknowledge a scientific fund from the Sistema Nacional de Investigación de Panamá (SNI) and projects FID-2016-275 and EIE-2018-16 of the Convocatorias públicas of the Secretaría Nacional de Ciencia y Tecnología e Innovación (SENACYT). We acknowledge funds and support from JSPS Grant-in-Aid for Specially Promoted Research 16H06291. We acknowledge Theme C of the TOUGOU program granted by the Japanese Ministry of Education, Culture, Sports, Science and Technology. The authors thank the Universidad Tecnológica de Panamá for its extensive support and for the use of its CIHH-group HPC cluster Iberogun. We also thank NVIDIA Corporation for the donation of the Titan Xp GPU used for this research.
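
    To make the directive-based porting approach concrete, the sketch below shows the general technique using OpenACC on a toy C stencil; Hybrid Fortran applies the analogous idea to structured-grid Fortran codes while additionally re-mapping loop granularity per target architecture. This is an illustration only, not code from the reviewed work: the stencil, array names, and sizes are made up, and it assumes an OpenACC-capable compiler (e.g. `nvc -acc`).

    ```c
    /* Toy Jacobi-style stencil: one directive offloads the nested loop
     * to the GPU, while the code remains valid serial C when the
     * directive is ignored -- the main appeal of directive-based ports. */
    #include <stdio.h>

    #define NX 512
    #define NY 512

    int main(void) {
        static double a[NY][NX], b[NY][NX];  /* static: off the stack */

        for (int j = 0; j < NY; j++)
            for (int i = 0; i < NX; i++) {
                a[j][i] = i + j;
                b[j][i] = 0.0;
            }

        /* Offload: collapse the two loops into one parallel iteration
         * space; copy `a` to the device and `b` back to the host. */
        #pragma acc parallel loop collapse(2) copyin(a) copyout(b)
        for (int j = 1; j < NY - 1; j++)
            for (int i = 1; i < NX - 1; i++)
                b[j][i] = 0.25 * (a[j][i-1] + a[j][i+1]
                                + a[j-1][i] + a[j+1][i]);

        printf("b[1][1] = %f\n", b[1][1]);
        return 0;
    }
    ```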

    Parallel Implementation of Lossy Data Compression for Temporal Data Sets

    Many scientific data sets contain temporal dimensions: they store information for the same spatial locations at a series of time stamps. Some of the largest temporal data sets are produced by parallel computing applications such as simulations of climate change and fluid dynamics. Temporal data sets can be very large and take a long time to transfer among storage locations. With data compression, files can be transferred faster and storage space is saved. NUMARCK is a lossy data compression algorithm for temporal data sets that learns the emerging distributions of element-wise change ratios along the temporal dimension and encodes them into an index table so they can be represented concisely. This paper presents a parallel implementation of NUMARCK. Evaluated with six data sets obtained from climate and astrophysics simulations, parallel NUMARCK achieved scalable speedups of up to 8788 when running 12800 MPI processes on a parallel computer. We also compare its compression ratios against those of two lossy data compression algorithms, ISABELA and ZFP. The results show that NUMARCK achieved higher compression ratios than ISABELA and ZFP. (Comment: 10 pages, HiPC 201)
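
    To illustrate the core idea behind NUMARCK, quantizing element-wise change ratios between consecutive time steps into a small index table, here is a minimal C sketch. It uses fixed uniform bins for brevity, whereas the actual algorithm learns bin centers from the observed ratio distribution at each time step and stores elements exactly when no bin meets the error bound; all names and parameters below are hypothetical.

    ```c
    /* Minimal sketch: encode each element of time step t as a 1-byte
     * bin index over its relative change from time step t-1. */
    #include <stdio.h>

    #define NBINS     256    /* 8-bit index per element       */
    #define MAX_RATIO 0.1    /* assume ratios fall in +/- 10% */

    /* Map a change ratio to a uniform bin index. */
    static unsigned char encode_ratio(double r) {
        if (r < -MAX_RATIO) r = -MAX_RATIO;
        if (r >  MAX_RATIO) r =  MAX_RATIO;
        double t = (r + MAX_RATIO) / (2.0 * MAX_RATIO);  /* -> [0,1] */
        return (unsigned char)(t * (NBINS - 1) + 0.5);
    }

    /* Recover the representative ratio of a bin. */
    static double decode_ratio(unsigned char idx) {
        return -MAX_RATIO + 2.0 * MAX_RATIO * idx / (double)(NBINS - 1);
    }

    int main(void) {
        double prev[4] = {1.00, 2.00, 4.00, 8.00};  /* time step t-1 */
        double curr[4] = {1.02, 1.98, 4.10, 7.90};  /* time step t   */

        for (int i = 0; i < 4; i++) {
            double ratio = (curr[i] - prev[i]) / prev[i];
            unsigned char idx = encode_ratio(ratio);  /* 1 byte vs 8 */
            double approx = prev[i] * (1.0 + decode_ratio(idx));
            printf("elem %d: exact %.4f  approx %.4f\n", i, curr[i], approx);
        }
        return 0;
    }
    ```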

    The ICON-A model for direct QBO simulations on GPUs (version icon-cscs:baf28a514)

    Classical numerical models for the global atmosphere, as used for numerical weather forecasting or climate research, have been developed for conventional central processing unit (CPU) architectures. This hinders the employment of such models on current top-performing supercomputers, which achieve their computing power with hybrid architectures, mostly using graphics processing units (GPUs). Scientific applications of such models are thus restricted to the lesser computing power of CPUs. Here we present the development of a GPU-enabled version of the ICON atmosphere model (ICON-A), motivated by a research project on the quasi-biennial oscillation (QBO), a global-scale wind oscillation in the equatorial stratosphere that depends on a broad spectrum of atmospheric waves originating from tropical deep convection. Resolving the relevant scales, from a few kilometers to the size of the globe, is a formidable computational problem, which can only be realized now on top-performing supercomputers. This motivated porting ICON-A, in the specific configuration needed for the research project, first to the GPU architecture of the Piz Daint computer at the Swiss National Supercomputing Centre and then to the JUWELS Booster computer at the Forschungszentrum Jülich. On Piz Daint, the ported code achieves a single-node GPU vs. CPU speedup factor of 6.4 and allows for global experiments at a horizontal resolution of 5 km on 1024 computing nodes with 1 GPU per node, with a turnover of 48 simulated days per day. On JUWELS Booster, the more modern hardware in combination with an upgraded code base allows for simulations at the same resolution on 128 computing nodes with 4 GPUs per node and a turnover of 133 simulated days per day. Additionally, the code remains functional on CPUs, as demonstrated by additional experiments on the Levante compute system at the German Climate Computing Center. While the application shows good weak scaling over the tested 16-fold increase in grid size and node count, which also makes higher-resolved global simulations possible, the strong scaling on GPUs is relatively poor, which limits the options to increase turnover with more nodes. Initial experiments demonstrate that the ICON-A model can simulate downward-propagating QBO jets, which are driven by wave–mean flow interaction.
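
    As a back-of-envelope reading of the reported figures (SDPD = simulated days per day), the per-GPU turnover implied by the two configurations is

    ```latex
    \frac{48\ \mathrm{SDPD}}{1024 \times 1\ \mathrm{GPUs}} \approx 0.047\ \mathrm{SDPD/GPU}
    \quad\text{(Piz Daint)}, \qquad
    \frac{133\ \mathrm{SDPD}}{128 \times 4\ \mathrm{GPUs}} \approx 0.26\ \mathrm{SDPD/GPU}
    \quad\text{(JUWELS Booster)},
    ```

    i.e., roughly a 5.5-fold per-GPU gain from the combination of the more modern hardware and the upgraded code base.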

    THOR 2.0: Major Improvements to the Open-Source General Circulation Model

    THOR is the first open-source general circulation model (GCM) developed from scratch to study the atmospheres and climates of exoplanets, free from Earth- or Solar-System-centric tunings. It solves the general non-hydrostatic Euler equations (instead of the primitive equations) on a sphere using an icosahedral grid. In the current study, we report major upgrades to THOR, building upon the work of Mendonça et al. (2016). First, while the Horizontally Explicit Vertically Implicit (HEVI) integration scheme is the same as that described in Mendonça et al. (2016), we provide a clearer description of the scheme and improve its implementation in the code. The differences in implementation between the hydrostatic shallow (HSS), quasi-hydrostatic deep (QHD), and non-hydrostatic deep (NHD) treatments are fully detailed. Second, standard physics modules are added: two-stream, double-gray radiative transfer and dry convective adjustment. Third, THOR is tested on additional benchmarks: tidally locked Earth, deep hot Jupiter, acoustic wave, and gravity wave. Fourth, we report that differences between the hydrostatic and non-hydrostatic simulations are negligible in the Earth case but pronounced in the hot Jupiter case. Finally, the effects of the so-called "sponge layer", a form of drag implemented in most GCMs to provide numerical stability, are examined. Overall, these upgrades have improved the flexibility, user-friendliness, and stability of THOR. (Comment: 57 pages, 31 figures, revised, accepted for publication in ApJ)
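
    For orientation, the "general non-hydrostatic Euler equations" can be written in a standard conservative, adiabatic, rotating-frame form as

    ```latex
    \begin{aligned}
    &\partial_t \rho + \nabla\cdot(\rho\,\mathbf{v}) = 0, \\
    &\partial_t(\rho\,\mathbf{v}) + \nabla\cdot(\rho\,\mathbf{v}\otimes\mathbf{v})
      = -\nabla p - \rho\,\nabla\Phi - 2\rho\,\boldsymbol{\Omega}\times\mathbf{v}, \\
    &\partial_t(\rho\,\theta) + \nabla\cdot(\rho\,\theta\,\mathbf{v}) = 0,
    \end{aligned}
    ```

    where θ is the potential temperature, Φ the gravitational potential, and Ω the rotation vector, with pressure diagnosed from ρθ via the ideal-gas equation of state. This is a generic textbook form shown only for context; THOR's exact equation set, the HSS/QHD/NHD treatments, and the HEVI splitting are detailed in Mendonça et al. (2016). Because no hydrostatic balance is imposed on the vertical momentum equation, the hydrostatic and non-hydrostatic treatments are free to diverge, consistent with the pronounced differences reported here for the hot Jupiter case.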