
    Enable High-resolution, Real-time Ensemble Simulation and Data Assimilation of Flood Inundation using Distributed GPU Parallelization

    Numerical modeling of the intensity and evolution of flood events is affected by multiple sources of uncertainty, such as precipitation and land surface conditions. To quantify and curb these uncertainties, an ensemble-based simulation and data assimilation model for pluvial flood inundation is constructed. The shallow water equation is decoupled in the x and y directions, and the inertial form of the Saint-Venant equation is chosen to realize fast computation. The probability distribution of the input and output factors is described using Monte Carlo samples. Subsequently, a particle filter is incorporated to enable the assimilation of hydrological observations and improve prediction accuracy. To achieve high-resolution, real-time ensemble simulation, heterogeneous computing technologies based on CUDA (compute unified device architecture) and a distributed-storage multi-GPU (graphics processing unit) system are used. Multiple optimization techniques are employed to ensure the parallel efficiency and scalability of the simulation program. Taking an urban area of Fuzhou, China as an example, a model with a 3-m spatial resolution and 4.0 million units is constructed, and 8 Tesla P100 GPUs are used for the parallel calculation of 96 model instances. Under these settings, the ensemble simulation of a 1-hour hydraulic process takes 2.0 minutes, an estimated 2680× speedup compared with a single-threaded run on a CPU. The calculation results indicate that the particle filter method effectively constrains simulation uncertainty while providing the confidence intervals of key hydrological elements such as streamflow, submerged area, and submerged water depth. The presented approaches show promising capabilities in handling the uncertainties in flood modeling as well as enhancing prediction efficiency.
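    The assimilation step described above can be illustrated with a minimal particle filter sketch: weight each ensemble member by the likelihood of an observation, then resample. This is a generic systematic-resampling sketch, not the paper's implementation; the ensemble values, observation, and error standard deviation are hypothetical.

    ```python
    import numpy as np

    def systematic_resample(weights, rng):
        """Systematic resampling: map N normalized weights to N particle indices."""
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        return np.searchsorted(np.cumsum(weights), positions)

    def particle_filter_step(states, observation, obs_std, rng):
        """One assimilation step: weight particles by the Gaussian likelihood of
        the observation (e.g. a gauged water depth), then resample."""
        weights = np.exp(-0.5 * ((states - observation) / obs_std) ** 2)
        weights /= weights.sum()
        idx = systematic_resample(weights, rng)
        return states[idx]

    rng = np.random.default_rng(0)
    # 96 ensemble members predicting water depth at one gauge (hypothetical values)
    ensemble = rng.normal(loc=1.0, scale=0.5, size=96)
    analysis = particle_filter_step(ensemble, observation=1.4, obs_std=0.1, rng=rng)
    ```

    After resampling, the analysis ensemble concentrates around particles consistent with the observation, which is how the filter constrains the uncertainty of the simulated depths.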

    Massively parallel landscape-evolution modelling using general purpose graphical processing units

    As our expectations of what computer systems can do and our ability to capture data improve, the desire to perform ever more computationally intensive tasks increases. Often these tasks, comprising vast numbers of repeated computations, are highly interdependent on each other – a closely coupled problem. Landscape-Evolution Modelling is an example of such a problem. In order to produce realistic models it is necessary to process landscapes containing millions of data points over time periods extending up to millions of years. This leads to intractable execution times, often on the order of years. Researchers therefore seek multiple orders of magnitude reduction in the execution time of these models. The massively parallel programming environment offered by General Purpose Graphical Processing Units offers the potential for multiple orders of magnitude speedup in code execution times. In this paper we demonstrate how the time-dominant parts of a Landscape-Evolution Model can be recoded for a massively parallel architecture, providing a two orders of magnitude reduction in execution time.

    Advancement of Computing on Large Datasets via Parallel Computing and Cyberinfrastructure

    Large datasets require efficient processing, storage and management to extract useful information for innovation and decision-making. This dissertation demonstrates novel approaches and algorithms using a virtual-memory approach, parallel computing and cyberinfrastructure. First, we introduce a tailored user-level virtual memory system for parallel algorithms that can process large raster data files in a desktop computer environment with limited memory. The application area for this portion of the study is the development of parallel terrain analysis algorithms that use multi-threading to take advantage of common multi-core processors for greater efficiency. Second, we present two novel parallel WaveCluster algorithms that perform cluster analysis by taking advantage of the discrete wavelet transform to reduce large data to coarser representations, so the data is smaller and more easily managed than the original in size and complexity. Finally, this dissertation demonstrates an HPC gateway service that abstracts away many details and complexities involved in the use of HPC systems, including authentication, authorization, and data and job management.
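    The wavelet-based coarsening idea behind WaveCluster can be sketched in a few lines: the approximation band of one 2D Haar transform level is a 2x2 block average, and clusters are then sought on that smaller grid. This is a toy illustration of the principle only (the full algorithm also labels connected components); the grid and threshold are hypothetical.

    ```python
    import numpy as np

    def haar_coarsen(grid):
        """Approximation band of one 2D Haar transform level:
        average non-overlapping 2x2 blocks, halving each dimension."""
        return (grid[0::2, 0::2] + grid[1::2, 0::2] +
                grid[0::2, 1::2] + grid[1::2, 1::2]) / 4.0

    def dense_cells(density, threshold):
        """Toy WaveCluster step: coarsen a point-density grid, then mark
        coarse cells whose density exceeds a threshold as cluster cells."""
        return haar_coarsen(density) > threshold

    # Hypothetical 8x8 point-density grid with one dense 4x4 block
    density = np.zeros((8, 8))
    density[0:4, 0:4] = 10.0
    labels = dense_cells(density, threshold=5.0)
    ```

    The coarse grid has a quarter of the cells of the original, which is what makes the subsequent clustering pass cheap on large rasters.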

    Development of a GPGPU accelerated tool to simulate advection-reaction-diffusion phenomena in 2D

    Computational models are powerful tools for the study of environmental systems, playing a fundamental role in several fields of research (hydrological sciences, biomathematics, atmospheric sciences, geosciences, among others). Most of these models require high computational capacity, especially when one considers high spatial resolution and application to large areas. In this context, the exponential increase in computational power brought by General Purpose Graphics Processing Units (GPGPU) has drawn the attention of scientists and engineers to the development of low-cost and high-performance parallel implementations of environmental models. In this research, we apply GPGPU computing to the development of a model that describes the physical processes of advection, reaction and diffusion. This work is presented in the form of three self-contained articles. In the first one, we present a GPGPU implementation for the solution of the 2D groundwater flow equation in unconfined aquifers for heterogeneous and anisotropic media. We implement a finite difference solution scheme based on the Crank-Nicolson method and show that the GPGPU-accelerated solution implemented using CUDA C/C++ (Compute Unified Device Architecture) greatly outperforms the corresponding serial solution implemented in C/C++. The results show that the accelerated GPGPU implementation is capable of delivering up to 56 times acceleration in the solution process using an ordinary office computer. In the second article, we study the application of a diffusive-logistic growth (DLG) model to the problem of forest growth and regeneration. The study focuses on vegetation belonging to preservation areas, such as riparian buffer zones.
    The study was developed in two stages: (i) a methodology based on Artificial Neural Network Ensembles (ANNE) was applied to evaluate the width of riparian buffer required to filter 90% of the residual nitrogen; (ii) the DLG model was calibrated and validated to generate a prognosis of forest regeneration in riparian protection bands considering the minimum widths indicated by the ANNE. The solution was implemented in GPGPU and applied to simulate the forest regeneration process over forty years on the riparian protection bands along the Ligeiro river, in Brazil. The results from calibration and validation showed that the DLG model provides fairly accurate results for the modelling of forest regeneration. In the third manuscript, we present a GPGPU implementation of the solution of the advection-reaction-diffusion equation in 2D. The implementation is designed to be general and flexible, allowing the modeling of a wide range of processes, including those with heterogeneity and anisotropy of the medium. We show that simulations performed in GPGPU allow the use of mesh grids containing more than 20 million points, corresponding to an area of 18,000 km² at the standard 30-m resolution of Landsat imagery.
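    The Crank-Nicolson scheme mentioned in the first article averages the implicit and explicit discretizations of the diffusion operator, yielding an unconditionally stable step that requires a linear solve. A minimal 1D sketch of one such step follows; it uses a dense solve for clarity (a real 2D groundwater solver would use a sparse or iterative method), and the pulse initial condition is hypothetical.

    ```python
    import numpy as np

    def crank_nicolson_step(u, alpha):
        """One Crank-Nicolson step for 1D diffusion u_t = D*u_xx with zero
        Dirichlet boundaries; alpha = D*dt/dx^2. Solves
        (I - alpha/2 * A) u_new = (I + alpha/2 * A) u, where A is the
        standard second-difference matrix."""
        n = len(u)
        A = (np.diag(-2.0 * np.ones(n)) +
             np.diag(np.ones(n - 1), 1) +
             np.diag(np.ones(n - 1), -1))
        lhs = np.eye(n) - 0.5 * alpha * A
        rhs = (np.eye(n) + 0.5 * alpha * A) @ u
        return np.linalg.solve(lhs, rhs)

    u = np.zeros(11)
    u[5] = 1.0  # initial head pulse in the middle of the domain
    for _ in range(20):
        u = crank_nicolson_step(u, alpha=0.5)
    ```

    The pulse spreads symmetrically and its peak decays, as expected for pure diffusion; on a GPU the same update is applied to millions of cells per step, which is where the reported 56x acceleration comes from.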

    Automated Translation and Accelerated Solving of Differential Equations on Multiple GPU Platforms

    We demonstrate a high-performance vendor-agnostic method for massively parallel solving of ensembles of ordinary differential equations (ODEs) and stochastic differential equations (SDEs) on GPUs. The method is integrated with a widely used differential equation solver library in a high-level language (Julia's DifferentialEquations.jl) and enables GPU acceleration without requiring code changes by the user. Our approach achieves state-of-the-art performance compared to hand-optimized CUDA-C++ kernels, while performing 20-100× faster than the vectorized-map (vmap) approach implemented in JAX and PyTorch. Performance evaluation on NVIDIA, AMD, Intel, and Apple GPUs demonstrates performance portability and vendor-agnosticism. We show composability with MPI to enable distributed multi-GPU workflows. The implemented solvers are fully featured, supporting event handling, automatic differentiation, and incorporation of datasets via the GPU's texture memory, allowing scientists to take advantage of GPU acceleration on all major current architectures without changing their model code and without loss of performance.
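    The ensemble pattern the abstract describes, advancing many parameterizations of the same ODE in lockstep, can be sketched with a batched RK4 integrator: the state and parameter arrays carry a leading ensemble dimension, so every arithmetic operation applies to all members at once. This is a generic illustration of batched solving, not the paper's solver; the decay equation and ensemble size are hypothetical.

    ```python
    import numpy as np

    def rk4_batched(f, y0, t0, t1, steps, params):
        """Integrate dy/dt = f(y, p) with classic RK4 for a whole ensemble
        at once: y0 and params share a leading batch dimension, so one
        call advances every ensemble member in lockstep (the same data
        layout a batched GPU solver exploits)."""
        h = (t1 - t0) / steps
        y = y0.copy()
        for _ in range(steps):
            k1 = f(y, params)
            k2 = f(y + 0.5 * h * k1, params)
            k3 = f(y + 0.5 * h * k2, params)
            k4 = f(y + h * k3, params)
            y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return y

    decay = lambda y, p: -p * y           # dy/dt = -p*y, exact solution y0*exp(-p*t)
    params = np.linspace(0.5, 2.0, 64)    # 64 ensemble members, one rate each
    y_end = rk4_batched(decay, np.ones(64), 0.0, 1.0, 100, params)
    ```

    Because no member ever branches differently from the others here, the loop body maps directly onto SIMD lanes or GPU threads; handling per-member events and adaptive step sizes without breaking that lockstep is the harder problem the paper's solvers address.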

    GPU accelerated procedural terrain generation : a thesis presented in partial fulfilment of the requirements for the degree Master of Science in Computer Science at Massey University, Albany, New Zealand

    Virtual terrain is often used as the large-scale background of computer graphics scenes. While virtual terrain is essential for representing landscapes, manual reproduction of such large-scale objects from scratch is time-consuming and costly for human artists. Many algorithmic generation methods have been proposed as an alternative to manual reproduction. However, those methods are still limited when they need to be employed in a wide range of applications. Alternatively, simulation of the stream power equation can effectively model landscape evolution at large temporal and spatial scales by simulating the land-forming process. This equation was successfully employed by a previous study in terrain generation. However, the unoptimised pipeline implementation of the method suffers from long computation times as the simulation size increases. Graphics processing units (GPUs) provide significantly higher computational throughput for massively parallel problems than conventional multi-core CPUs. The previous study proposed a general parallel algorithm to compute the simulation pipeline, but it is designed for any multi-core hardware and does not fully utilise the computing power of GPUs. This study seeks to develop an optimised pipeline of the original stream power equation method for GPUs. Results showed that the new parallel GPU algorithm consistently outperformed a recent octa-core CPU (Intel i7-9700K at 4.9 GHz), by about 300% on a GTX 780 and 900% on an RTX 2070 Super. It also consistently showed a 300% improvement in performance over the previous parallel algorithm on GPUs. The new algorithm significantly outperformed the fastest parallel algorithm available, while still being able to produce the same terrain result as the original stream power equation method. 
    This advancement in computational performance allows the method to generate precise geological details of terrain while providing reasonable computation times, enabling its use in a broader range of applications.
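    The stream power equation the thesis simulates balances tectonic uplift against fluvial erosion, dz/dt = U - K * A^m * S^n, where A is drainage area and S is local slope. A minimal explicit update on a 1D river profile is sketched below; every parameter value, and the prescribed drainage-area profile, is hypothetical, and a real landscape-evolution model would route flow over a 2D grid.

    ```python
    import numpy as np

    def stream_power_step(z, area, dx, dt, K=1e-5, m=0.5, n=1.0, uplift=1e-3):
        """Explicit stream power update on a 1D river profile draining to
        the left: dz/dt = U - K * A^m * S^n, with slope S taken toward
        the downstream (left) neighbour and a fixed base level at the outlet."""
        z_new = z.copy()
        slope = np.maximum((z[1:] - z[:-1]) / dx, 0.0)
        erosion = K * area[1:] ** m * slope ** n
        z_new[1:] = z[1:] + dt * (uplift - erosion)
        z_new[0] = z[0]  # outlet elevation held fixed
        return z_new

    z = np.linspace(0.0, 100.0, 50)      # initial ramp profile (m)
    area = np.linspace(5e6, 1e4, 50)     # drainage area shrinking upstream (m^2)
    for _ in range(100):
        z = stream_power_step(z, area, dx=100.0, dt=10.0)
    ```

    Each node's update depends only on its downstream neighbour, which is why the per-step arithmetic parallelizes well; the serial bottleneck the thesis attacks is the flow-routing and ordering stage that determines `area` and the downstream links.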

    Doctor of Philosophy

    The goal of this dissertation is to improve flood risk management by enhancing the computational capability of two-dimensional models and incorporating data and parameter uncertainty to more accurately represent flood risk. Improvement of computational performance is accomplished by using the Graphics Processing Unit (GPU) approach, programmed in NVIDIA's Compute Unified Device Architecture (CUDA), to create a new two-dimensional hydrodynamic model, Flood2D-GPU. The model, based on the shallow water equations, is designed to execute simulations faster than the same code programmed using a serial approach (i.e., using a Central Processing Unit (CPU)). Testing the code against an identical CPU-based version demonstrated the improved computational efficiency of the GPU-based version (approximate speedup of more than 80 times). Given the substantial computational efficiency of Flood2D-GPU, a new Monte Carlo based flood risk modeling framework was created. The framework operates by performing many Flood2D-GPU simulations using randomly sampled model parameters and input variables. The Monte Carlo flood risk modeling framework is demonstrated in this dissertation by simulating the flood risk associated with a 1% annual probability flood event occurring on the Swannanoa River in Buncombe County near Asheville, North Carolina. The Monte Carlo approach is able to represent a wide range of possible scenarios, thus leading to the identification of areas outside a single simulation's inundation extent that are susceptible to flood hazards. Further, the single simulation results underestimated the degree of flood hazard for the case study region when compared to the flood hazard map produced by the Monte Carlo approach. The Monte Carlo flood risk modeling framework is also used to determine the relative benefits of flood management alternatives for flood risk reduction. 
    The objective of the analysis is to investigate the possibility of identifying specific annual exceedance probability flood events that will have greater benefits in terms of annualized flood risk reduction compared to an arbitrarily selected discrete annual probability event. To test the hypothesis, a study was conducted on the Swannanoa River to determine the distribution of annualized risk as a function of annual probability. Simulations of flow rates sampled from a continuous flow distribution provided the range of annual probability events necessary. The results showed a variation in annualized risk as a function of annual probability, and, as hypothesized, a maximum annualized risk reduction could be identified for a specific annual probability. For the Swannanoa case study, the continuous flow distribution suggested targeting flood proofing to control the 12% exceedance probability event to maximize the reduction of annualized risk. This suggests that the arbitrary use of a specified risk of 1% exceedance may not in some cases be the most efficient allocation of resources to reduce annualized risk.
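    The Monte Carlo framework described above can be sketched as repeatedly running the hydraulic model under randomly sampled inputs and accumulating a per-cell inundation probability. The stand-in `depth_model` below is purely illustrative (a 3x3 grid instead of Flood2D-GPU), and the flow and roughness distributions are hypothetical; only the sampling-and-aggregation pattern reflects the framework.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def depth_model(flow, roughness):
        """Stand-in for one Flood2D-GPU run: returns a tiny grid of water
        depths as a simple function of flow and roughness (illustrative
        only, not a hydraulic model)."""
        base = np.array([[2.0, 1.0, 0.2],
                         [1.5, 0.6, 0.1],
                         [0.8, 0.3, 0.0]])
        return base * (flow / 100.0) * roughness

    # Monte Carlo over uncertain inputs: flow rate and a roughness multiplier
    n_runs = 1000
    flows = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n_runs)
    roughs = rng.uniform(0.8, 1.2, size=n_runs)
    wet = np.zeros((3, 3))
    for q, r in zip(flows, roughs):
        wet += depth_model(q, r) > 0.5   # count runs where each cell floods
    flood_prob = wet / n_runs            # per-cell probability of inundation
    ```

    The resulting probability map naturally flags cells that flood in some scenarios but lie outside the inundation extent of any single deterministic run, which is the hazard-underestimation issue the dissertation documents.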

    Automatic drainage pattern recognition in river networks

    In both geographic information systems and terrain analysis, drainage systems are important components. Owing to local topography and subsurface geology, a drainage system develops a particular drainage pattern based on the form and texture of its network of stream channels and tributaries. Although research has been done on the description of drainage patterns in geography and hydrology, automatic drainage pattern recognition in river networks is not well developed. This article introduces a new method for the automatic classification of drainage systems into different patterns. The method applies to river networks directly, so no terrain model is required in the process. A series of geometric indicators describing each pattern is introduced. Network classification is based on fuzzy set theory: for each pattern, the network's level of membership is given by the different indicator values. The method was implemented, and the experimental results are presented and discussed.
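    The fuzzy classification idea, scoring a network's membership in each pattern from geometric indicators, can be sketched with triangular membership functions. The two indicators, their value ranges, and the pattern definitions below are hypothetical illustrations, not the indicators of the cited article.

    ```python
    import numpy as np

    def triangular_membership(x, lo, peak, hi):
        """Triangular fuzzy membership function: 0 outside [lo, hi], 1 at peak."""
        rising = (x - lo) / (peak - lo)
        falling = (hi - x) / (hi - peak)
        return float(np.clip(min(rising, falling), 0.0, 1.0))

    def classify_network(junction_angle, elongation):
        """Toy fuzzy classifier: combine two geometric indicators into
        membership grades for two drainage patterns and pick the highest.
        Indicator ranges are illustrative assumptions."""
        grades = {
            # dendritic: acute junction angles, moderately elongated basins
            "dendritic": min(triangular_membership(junction_angle, 20, 60, 90),
                             triangular_membership(elongation, 0.4, 0.7, 1.0)),
            # trellis: near-perpendicular junctions, strongly elongated basins
            "trellis": min(triangular_membership(junction_angle, 70, 90, 110),
                           triangular_membership(elongation, 0.0, 0.3, 0.6)),
        }
        return max(grades, key=grades.get)

    label = classify_network(junction_angle=55.0, elongation=0.65)
    ```

    Taking the minimum across indicators implements a fuzzy AND: a network only scores highly for a pattern when all of that pattern's indicators agree.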