284 research outputs found

    Parallelization of a two-dimensional flood inundation model based on domain decomposition

    Flood modelling often involves prediction of the inundated extent over large spatial and temporal scales. As the dimensionality of the system and the complexity of the problems increase, the need to obtain quick solutions becomes a priority. However, for large-scale problems or situations where fine-resolution data are required, it is often not possible or practical to run the model on a single computer in a reasonable timeframe. This paper presents the development and testing of a parallelized 2D diffusion-based flood inundation model (FloodMap-Parallel) which enables large-scale simulations to be run on distributed multi-processors. The model has been applied to three locations in the UK with different flow and topographical boundary conditions. The accuracy of the parallelized model and its computational efficiency have been tested. The predictions obtained from the parallelized model match those obtained from the serialized simulations. The computational performance of the model has been investigated in relation to the granularity of the domain decomposition, the total number of cells and the domain decomposition configuration pattern. Results show that the parallelized model is more effective with simulations of low granularity and a large number of cells. The large communication overhead associated with the potential load imbalance between sub-domains is a major bottleneck in utilizing this approach with higher domain granularity.
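    The listing above contains no code; as a minimal sketch of the domain-decomposition idea the abstract describes (strip-wise partitioning with halo exchange between sub-domains), the snippet below splits a water-depth grid into horizontal strips with mpi4py and swaps one-row halos each step. The grid size, the diffusion-style update and all variable names are illustrative assumptions, not the FloodMap-Parallel implementation.

        from mpi4py import MPI            # any MPI binding would do; mpi4py is an assumption
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        ny, nx = 400, 400                 # global grid size (illustrative)
        rows = ny // size                 # rows per sub-domain (assumed to divide evenly)
        h = np.zeros((rows + 2, nx))      # local block plus one halo row above and below

        up = rank - 1 if rank > 0 else MPI.PROC_NULL
        down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for step in range(100):
            # exchange halo rows with the neighbouring sub-domains
            comm.Sendrecv(sendbuf=h[1, :], dest=up, recvbuf=h[-1, :], source=down)
            comm.Sendrecv(sendbuf=h[-2, :], dest=down, recvbuf=h[0, :], source=up)
            # placeholder diffusion-style update on interior cells only
            h[1:-1, 1:-1] += 0.1 * (h[2:, 1:-1] + h[:-2, 1:-1] +
                                    h[1:-1, 2:] + h[1:-1, :-2] - 4.0 * h[1:-1, 1:-1])

    The two Sendrecv calls are where the communication overhead mentioned in the abstract arises: the finer the decomposition, the larger the ratio of halo traffic to interior computation.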

    Enable High-resolution, Real-time Ensemble Simulation and Data Assimilation of Flood Inundation using Distributed GPU Parallelization

    Numerical modeling of the intensity and evolution of flood events is affected by multiple sources of uncertainty such as precipitation and land surface conditions. To quantify and curb these uncertainties, an ensemble-based simulation and data assimilation model for pluvial flood inundation is constructed. The shallow water equation is decoupled in the x and y directions, and the inertial form of the Saint-Venant equation is chosen to allow fast computation. The probability distribution of the input and output factors is described using Monte Carlo samples. Subsequently, a particle filter is incorporated to enable the assimilation of hydrological observations and improve prediction accuracy. To achieve high-resolution, real-time ensemble simulation, heterogeneous computing technologies based on CUDA (compute unified device architecture) and a distributed-storage multi-GPU (graphics processing unit) system are used. Multiple optimization techniques are employed to ensure the parallel efficiency and scalability of the simulation program. Taking an urban area of Fuzhou, China as an example, a model with a 3-m spatial resolution and 4.0 million units is constructed, and 8 Tesla P100 GPUs are used for the parallel calculation of 96 model instances. Under these settings, the ensemble simulation of a 1-hour hydraulic process takes 2.0 minutes, an estimated speedup of 2680 compared with a single-threaded run on a CPU. The calculation results indicate that the particle filter method effectively constrains simulation uncertainty while providing the confidence intervals of key hydrological elements such as streamflow, submerged area, and submerged water depth. The presented approaches show promising capabilities in handling the uncertainties in flood modeling as well as enhancing prediction efficiency.
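    As a schematic illustration of the particle-filter assimilation step described above (weighting ensemble members against an observation, then resampling), the sketch below uses a Gaussian likelihood and systematic resampling in plain NumPy. It is not the authors' CUDA implementation; the observed depth, its error and the ensemble statistics are assumed values.

        import numpy as np

        def particle_filter_step(sim_depths, obs_depth, obs_sigma, rng):
            """One analysis step of a bootstrap particle filter (illustrative sketch).
            sim_depths holds the water depth simulated by each ensemble member
            at an observed location."""
            # Gaussian observation likelihood -> normalised particle weights
            w = np.exp(-0.5 * ((sim_depths - obs_depth) / obs_sigma) ** 2)
            w /= w.sum()

            # systematic resampling: members with high weight are duplicated
            n = sim_depths.size
            positions = (rng.random() + np.arange(n)) / n
            idx = np.searchsorted(np.cumsum(w), positions)
            return np.minimum(idx, n - 1)   # indices of members kept for the next forecast

        # usage with 96 ensemble members (matching the instance count quoted above);
        # the 0.8 m observation and 0.05 m error are assumptions
        rng = np.random.default_rng(1)
        ensemble = rng.normal(0.7, 0.1, 96)
        keep = particle_filter_step(ensemble, obs_depth=0.8, obs_sigma=0.05, rng=rng)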

    River network routing on the NHDPlus dataset

    The mapped rivers and streams of the contiguous United States are available in a geographic information system (GIS) dataset called National Hydrography Dataset Plus (NHDPlus). This hydrographic dataset has about 3 million river and water body reaches along with information on how they are connected into networks. The U.S. Geological Survey (USGS) National Water Information System (NWIS) provides streamflow observations at about 20 thousand gauges located on the NHDPlus river network. A river network model called Routing Application for Parallel Computation of Discharge (RAPID) is developed for the NHDPlus river network whose lateral inflow to the river network is calculated by a land surface model. A matrix-based version of the Muskingum method is developed herein, which RAPID uses to calculate flow and volume of water in all reaches of a river network with many thousands of reaches, including at ungauged locations. Gauges situated across river basins (not only at basin outlets) are used to automatically optimize the Muskingum parameters and to assess river flow computations, hence allowing the diagnosis of runoff computations provided by land surface models. RAPID is applied to the Guadalupe and San Antonio River basins in Texas, where flow wave celerities are estimated at multiple locations using 15-min data and can be reproduced reasonably with RAPID. This river model can be adapted for parallel computing and, although the matrix method initially adds a large overhead, river flow results can be obtained faster than with the traditional Muskingum method when using a few processing cores, as demonstrated in a synthetic study using the upper Mississippi River basin.
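    A minimal sketch of the matrix form of the Muskingum method that RAPID builds on: writing N for the network connectivity matrix (N[i, j] = 1 if reach j flows into reach i), the classic per-reach update Q(t+dt) = C1*I(t+dt) + C2*I(t) + C3*Q(t) becomes a linear system coupling all reaches, because the upstream inflow I depends on the unknown discharges. The K, X, dt values and the three-reach toy network below are assumptions, and the treatment of lateral inflow is a simplification relative to RAPID.

        import numpy as np

        # classic Muskingum coefficients for one reach (K, X, dt assumed for illustration)
        K, X, dt = 1800.0, 0.3, 1800.0              # s, -, s
        D = 2.0 * K * (1.0 - X) + dt
        C1 = (dt - 2.0 * K * X) / D                 # weight on upstream inflow at t+dt
        C2 = (dt + 2.0 * K * X) / D                 # weight on upstream inflow at t
        C3 = (2.0 * K * (1.0 - X) - dt) / D         # weight on outflow at t (C1+C2+C3 = 1)

        # connectivity matrix: toy network where reaches 0 and 1 both drain into reach 2
        N = np.array([[0, 0, 0],
                      [0, 0, 0],
                      [1, 1, 0]], dtype=float)

        Qe = np.array([5.0, 3.0, 1.0])              # lateral inflow to each reach (m^3/s)
        Q = np.zeros(3)                             # discharge at the outlet of each reach
        I_mat = np.eye(3)

        # one routing step: upstream inflow of a reach is N @ Q plus lateral inflow,
        # and the implicit C1 term couples all reaches into a single linear solve
        for _ in range(4):
            rhs = C1 * Qe + C2 * (N @ Q + Qe) + C3 * Q
            Q = np.linalg.solve(I_mat - C1 * N, rhs)
        print(Q)

    For the NHDPlus network the matrix is sparse with millions of rows, which is where the parallel sparse solve mentioned in the abstract pays off.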

    Doctor of Philosophy

    The goal of this dissertation is to improve flood risk management by enhancing the computational capability of two-dimensional models and incorporating data and parameter uncertainty to more accurately represent flood risk. Improvement of computational performance is accomplished by using the Graphics Processing Unit (GPU) approach, programmed in NVIDIA's Compute Unified Device Architecture (CUDA), to create a new two-dimensional hydrodynamic model, Flood2D-GPU. The model, based on the shallow water equations, is designed to execute simulations faster than the same code programmed using a serial approach (i.e., using a Central Processing Unit (CPU)). Testing the code against an identical CPU-based version demonstrated the improved computational efficiency of the GPU-based version (approximate speedup of more than 80 times). Given the substantial computational efficiency of Flood2D-GPU, a new Monte Carlo based flood risk modeling framework was created. The framework operates by performing many Flood2D-GPU simulations using randomly sampled model parameters and input variables. The Monte Carlo flood risk modeling framework is demonstrated in this dissertation by simulating the flood risk associated with a 1% annual probability flood event occurring in the Swannanoa River in Buncombe County near Asheville, North Carolina. The Monte Carlo approach is able to represent a wide range of possible scenarios, thus leading to the identification of areas outside a single-simulation inundation extent that are susceptible to flood hazards. Further, the single-simulation results underestimated the degree of flood hazard for the case study region when compared to the flood hazard map produced by the Monte Carlo approach. The Monte Carlo flood risk modeling framework is also used to determine the relative benefits of flood management alternatives for flood risk reduction. The objective of the analysis is to investigate the possibility of identifying specific annual exceedance probability flood events that will have greater benefits in terms of annualized flood risk reduction compared to an arbitrarily selected discrete annual probability event. To test the hypothesis, a study was conducted on the Swannanoa River to determine the distribution of annualized risk as a function of average annual probability. Simulations of flow rates sampled from a continuous flow distribution provided the range of annual probability events necessary. The results showed a variation in annualized risk as a function of annual probability and, as hypothesized, a maximum annualized risk reduction could be identified for a specified annual probability. For the Swannanoa case study, the continuous flow distribution suggested targeting flood proofing to control the 12% exceedance probability event to maximize the reduction of annualized risk. This suggests that the arbitrary use of a specified risk of 1% exceedance may not, in some cases, be the most efficient allocation of resources to reduce annualized risk.
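    A highly simplified sketch of the Monte Carlo flood-risk framework described above: sample the uncertain inputs, run the hydrodynamic model once per sample, and aggregate the binary inundation grids into an inundation probability map. The hydrodynamic model is replaced here by a placeholder function, and all distributions and thresholds are assumed values, not those of the Swannanoa case study.

        import numpy as np

        rng = np.random.default_rng(42)
        n_runs = 500                                   # number of Monte Carlo samples

        def flood_model(peak_flow, manning_n):
            """Placeholder standing in for a full 2D hydrodynamic run (e.g. Flood2D-GPU).
            Returns a boolean inundation grid from a toy threshold rule."""
            depth = rng.random((50, 50)) * peak_flow * manning_n
            return depth > 1.0

        # sample the uncertain inputs: peak flow (lognormal) and Manning roughness (uniform)
        flows = rng.lognormal(mean=np.log(300.0), sigma=0.3, size=n_runs)   # m^3/s
        roughness = rng.uniform(0.02, 0.06, size=n_runs)

        # aggregate: fraction of runs in which each cell is inundated
        prob_map = np.mean([flood_model(q, n) for q, n in zip(flows, roughness)], axis=0)
        print("cells flooded in more than half of the runs:", int((prob_map > 0.5).sum()))

    Cells with a non-zero probability that fall outside any single deterministic extent are exactly the areas the abstract says a single 1% simulation would miss.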

    Friction decoupling and loss of rotational invariance in flooding models

    Friction decoupling, i.e. the computation of the friction vector components making separate use of the corresponding velocity components, is common in staggered-grid models of the SWE simplifications (Zero-Inertia and Local Inertia Approximation), owing to its programming simplicity and the consequent speed-up of the calculations. In the present paper, the effect of friction decoupling has been studied from the theoretical and numerical points of view. First, it has been found that decoupling the friction vector reduces the computed friction force and rotates the friction force vector. Second, it has been demonstrated that decoupled-friction models lack rotational invariance, i.e. model results depend on the alignment of the reference frame. These theoretical results have been confirmed by means of numerical experiments. On this basis, it is evident that the decoupling of the friction vector causes a major loss of credibility of the corresponding mathematical and numerical models. Despite the modest speed-up of decoupled-friction computations, classic coupled-friction models should be preferred in every case.
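    For reference, with a common depth-averaged Manning closure (hydraulic radius approximated by the flow depth; the notation is ours, not necessarily the paper's), the coupled and decoupled x-components of the friction slope read

        S_{f,x}^{\mathrm{coupled}} = \frac{n^2\, u \sqrt{u^2+v^2}}{h^{4/3}},
        \qquad
        S_{f,x}^{\mathrm{decoupled}} = \frac{n^2\, u\, |u|}{h^{4/3}}

    so that

        \lVert \mathbf{S}_f^{\mathrm{decoupled}} \rVert
          = \frac{n^2 \sqrt{u^4+v^4}}{h^{4/3}}
          \;\le\;
          \frac{n^2 \,(u^2+v^2)}{h^{4/3}}
          = \lVert \mathbf{S}_f^{\mathrm{coupled}} \rVert

    with equality only when u = 0 or v = 0. Moreover, the decoupled vector points along (u|u|, v|v|) rather than along (u, v), so its direction, and hence the model output, changes when the grid axes are rotated; this is the reduced friction force and loss of rotational invariance discussed above.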

    An accelerated tool for flood modelling based on Iber

    This article is included in the special issue "Selected Papers from the 1st International Electronic Conference on the Hydrological Cycle (ChyCle-2017)". [Abstract:] This paper presents Iber+, a new parallel code based on the numerical model Iber for two-dimensional (2D) flood inundation modelling. The new implementation, which is coded in C++ and takes advantage of parallelization functionalities both on CPUs (central processing units) and GPUs (graphics processing units), was validated using different benchmark cases and compared, in terms of numerical output and computational efficiency, with other well-known hydraulic software packages. Depending on the complexity of the specific test case, the new parallel implementation can achieve speedups of up to two orders of magnitude when compared with the standard version. The speedup is especially remarkable for the GPU parallelization, which uses Nvidia CUDA (compute unified device architecture). The efficiency is as good as that provided by some of the most popular hydraulic models. We also present the application of Iber+ to model an extreme flash flood that took place in the Spanish Pyrenees in October 2012. The new implementation was used to simulate 24 h of real time in roughly eight minutes of computing time, while the standard version needed more than 15 h. This huge improvement in computational efficiency opens up the possibility of using the code for real-time forecasting of flood events in early-warning systems, in order to help decision making under hazardous events that require fast intervention to deploy countermeasures.
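    For reference, the timings quoted above imply (assuming the ">15 h" figure refers to the same 24 h event)

        \text{speedup over the standard version} \;\gtrsim\; \frac{15 \times 60\ \text{min}}{8\ \text{min}} \approx 112,
        \qquad
        \text{faster-than-real-time factor} \;\approx\; \frac{24 \times 60\ \text{min}}{8\ \text{min}} = 180,

    which is what makes the real-time forecasting use case mentioned above plausible.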

    Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, how theory relates to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, the expansion of the Working Group, and the study of the role of future parallel processors for the space station, EOS, and the Great Observatories era.

    Development of a GPGPU accelerated tool to simulate advection-reaction-diffusion phenomena in 2D

    Computational models are powerful tools for the study of environmental systems, playing a fundamental role in several fields of research (hydrological sciences, biomathematics, atmospheric sciences, geosciences, among others). Most of these models require high computational capacity, especially when one considers high spatial resolution and application to large areas. In this context, the exponential increase in computational power brought by General Purpose Graphics Processing Units (GPGPU) has drawn the attention of scientists and engineers to the development of low-cost and high-performance parallel implementations of environmental models. In this research, we apply GPGPU computing to the development of a model that describes the physical processes of advection, reaction and diffusion. The dissertation is presented in the form of three self-contained articles. In the first one, we present a GPGPU implementation for the solution of the 2D groundwater flow equation in unconfined aquifers for heterogeneous and anisotropic media. We implement a finite difference solution scheme based on the Crank-Nicolson method and show that the GPGPU-accelerated solution implemented using CUDA C/C++ (Compute Unified Device Architecture) greatly outperforms the corresponding serial solution implemented in C/C++. The results show that the accelerated GPGPU implementation is capable of delivering up to 56 times acceleration in the solution process using an ordinary office computer. In the second article, we study the application of a diffusive-logistic growth (DLG) model to the problem of forest growth and regeneration. The study focuses on vegetation belonging to preservation areas, such as riparian buffer zones, and was developed in two stages: (i) a methodology based on Artificial Neural Network Ensembles (ANNE) was applied to evaluate the width of the riparian buffer required to filter 90% of the residual nitrogen; (ii) the DLG model was calibrated and validated to generate a prognosis of forest regeneration in riparian protection bands considering the minimum widths indicated by the ANNE. The solution was implemented in GPGPU and applied to simulate the forest regeneration process over forty years on the riparian protection bands along the Ligeiro river, in Brazil. The results from calibration and validation showed that the DLG model provides fairly accurate results for the modelling of forest regeneration. In the third manuscript, we present a GPGPU implementation of the solution of the advection-reaction-diffusion equation in 2D. The implementation is designed to be general and flexible, allowing the modelling of a wide range of processes, including those with heterogeneity and anisotropy. We show that simulations performed on the GPGPU allow the use of mesh grids containing more than 20 million points, corresponding to an area of 18,000 km² at the standard 30 m Landsat image resolution.
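    As a minimal illustration of the Crank-Nicolson scheme mentioned for the first article (restricted here to a homogeneous, isotropic, linearised diffusion problem on a small grid; grid size, diffusivity, time step and boundary treatment are assumptions, and no GPU code is shown), one implicit step can be written as a sparse linear solve:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        nx = ny = 64                      # grid size (illustrative)
        dx = 10.0                         # cell size (m)
        D = 1.0                           # constant diffusivity (m^2/s), assumed
        dt = 100.0                        # time step (s)

        def lap1d(n, dx):
            """1D second-difference operator; boundary cells implicitly see a fixed
            zero value outside the domain (kept trivial for brevity)."""
            main = -2.0 * np.ones(n)
            off = np.ones(n - 1)
            return sp.diags([off, main, off], [-1, 0, 1]) / dx**2

        # 2D Laplacian assembled as a Kronecker sum of 1D operators
        L = sp.kron(sp.identity(ny), lap1d(nx, dx)) + sp.kron(lap1d(ny, dx), sp.identity(nx))

        I = sp.identity(nx * ny)
        A = (I - 0.5 * dt * D * L).tocsc()      # implicit half of Crank-Nicolson
        B = (I + 0.5 * dt * D * L).tocsr()      # explicit half

        h = np.ones(nx * ny)                    # initial head (flattened 2D field)
        h[nx * ny // 2] += 5.0                  # a local head perturbation

        for _ in range(10):                     # advance ten Crank-Nicolson steps
            h = spsolve(A, B @ h)

    In the GPGPU setting described above the same linear-algebra structure is kept, but the assembly and solve are moved to the device; the unconfined (nonlinear) and heterogeneous, anisotropic cases add coefficient fields that this sketch omits.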

    Characterization and uncertainty analysis of siliciclastic aquifer-fault system

    The complex siliciclastic aquifer system underneath the Baton Rouge area, Louisiana, USA, is fluvial in origin. The east-west trending Baton Rouge fault and Denham Springs-Scotlandville fault cut across East Baton Rouge Parish and play an important role in groundwater flow and aquifer salinization. To better understand the salinization underneath Baton Rouge, it is imperative to study the hydrofacies architecture and the groundwater flow field of the Baton Rouge aquifer-fault system. This is done by developing multiple detailed hydrofacies architecture models and multiple groundwater flow models of the aquifer-fault system, representing various uncertain model propositions. The hydrofacies architecture models focus on the Miocene-Pliocene depth interval that consists of the “1,200-foot” sand, “1,500-foot” sand, “1,700-foot” sand and the “2,000-foot” sand, as these aquifer units are classified and named by their approximate depth below ground level. The groundwater flow models focus only on the “2,000-foot” sand. The study reveals the complexity of the Baton Rouge aquifer-fault system, where the sand deposition is non-uniform, different sand units are interconnected, the sand unit displacement on the faults is significant, and the spatial distribution of flow pathways through the faults is sporadic. The identified locations of flow pathways through the Baton Rouge fault provide useful information on possible windows for saltwater intrusion from the south. From the results we learn that the “1,200-foot” sand, “1,500-foot” sand and the “1,700-foot” sand should not be modeled separately, since they are very well connected near the Baton Rouge fault, while the “2,000-foot” sand between the two faults is a separate unit. Results suggest that at the “2,000-foot” sand the Denham Springs-Scotlandville fault has much lower permeability than the Baton Rouge fault, and that the Baton Rouge fault plays an important role in the aquifer salinization.