    Scalability of a multi-physics system for forest fire spread prediction in multi-core platforms

    Advances in high-performance computing have improved the modeling of multi-physics systems thanks to the capacity to solve complex numerical systems in a reasonable time. WRF-SFIRE is a multi-physics system that couples the atmospheric model WRF with the forest fire spread model SFIRE in order to account for atmosphere-fire interactions. In systems like WRF-SFIRE, the trade-off between result accuracy and the time required to deliver that result is crucial. In this work, we therefore analyze the influence of WRF-SFIRE settings (grid resolutions) on forecast accuracy and on execution times on multi-core platforms using the OpenMP and MPI parallel programming paradigms.
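
    The core of the study above is a sweep over grid resolutions that records both forecast accuracy and execution time. As a rough illustration of that kind of sweep (not the authors' setup, which runs WRF-SFIRE itself and additionally varies OpenMP threads and MPI processes), the following Python sketch times a toy stand-in solver at several made-up resolutions and compares each result against the finest grid:

    import time
    import numpy as np

    def toy_solver(grid, steps):
        """Stand-in for a coupled atmosphere-fire integration (NOT WRF-SFIRE)."""
        for _ in range(steps):
            grid = 0.25 * (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                           np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
        return grid

    def run_case(resolution, steps=50):
        grid = np.zeros((resolution, resolution))
        grid[resolution // 2, resolution // 2] = 1.0      # single "ignition" point
        start = time.perf_counter()
        out = toy_solver(grid, steps)
        return out, time.perf_counter() - start

    # The finest grid is taken as the reference "truth" for the accuracy comparison.
    reference, _ = run_case(resolution=512)

    for res in (64, 128, 256, 512):
        out, runtime = run_case(res)
        stride = 512 // res                               # subsample reference to the coarse grid
        error = float(np.abs(out - reference[::stride, ::stride]).mean())
        print(f"resolution={res:4d}  runtime={runtime:7.4f}s  mean_error={error:.2e}")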

    Applying probability theory for the quality assessment of a wildfire spread prediction framework based on genetic algorithms

    This work presents a framework for assessing how the constraints in place when attending an ongoing forest fire affect simulation results, both in terms of the quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process that enhances the quality of traditional predictions by taking advantage of parallel computing. This strategy is based on an adjustment stage carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the principles of probability theory. A thorough statistical study is therefore presented, oriented towards the characterization of this adjustment technique, in order to help operations managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on one of the regions of Spain most prone to forest fires: El Cap de Creus.
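
    The two-stage scheme named above (a Genetic Algorithm adjustment stage followed by a prediction stage) can be sketched with a deliberately simplified fire-growth model in place of a real simulator. Everything below, including the toy model, its parameters, and the observed burned area, is invented for illustration and is not the authors' implementation:

    import random

    random.seed(1)

    def simulate_area(spread_rate, wind_factor, hours):
        """Toy fire model: burned area grows with a wind-adjusted spread rate (hypothetical)."""
        effective = spread_rate * (1.0 + wind_factor)
        return 3.1416 * (effective * hours) ** 2

    def fitness(individual, observed_area, hours):
        """Smaller difference to the observed burned area means a better calibration."""
        return abs(simulate_area(*individual, hours) - observed_area)

    def genetic_calibration(observed_area, hours, pop_size=30, generations=40):
        pop = [(random.uniform(0.1, 5.0), random.uniform(0.0, 1.0)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: fitness(ind, observed_area, hours))
            survivors = pop[:pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)                  # crossover
                child = (child[0] + random.gauss(0, 0.1),                       # mutation
                         min(max(child[1] + random.gauss(0, 0.05), 0.0), 1.0))
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda ind: fitness(ind, observed_area, hours))

    # Stage 1: adjust the unknown inputs against the observed behaviour of the ongoing fire.
    observed_area_t1 = 900.0            # burned area after 2 h (made-up observation)
    best = genetic_calibration(observed_area_t1, hours=2)

    # Stage 2: use the calibrated inputs to predict the next time horizon.
    print("calibrated (spread_rate, wind_factor):", best)
    print("predicted burned area at t+4h:", simulate_area(*best, hours=4))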

    How to use mixed precision in ocean models: Exploring a potential reduction of numerical precision in NEMO 4.0 and ROMS 3.6

    This work received funding from the EU ESiWACE H2020 Framework Programme under grant agreement no. 823988, from the Severo Ochoa (SEV-2011-00067) programme of the Spanish Government, and from the Ministerio de Economía y Competitividad under contract TIN2017-84553-C2-1-R. Mixed-precision approaches can provide substantial speed-ups for both computing- and memory-bound codes with little effort. Most scientific codes have overengineered their numerical precision, leading to a situation in which models use more resources than required, without it being known where those resources are needed and where they are not. Consequently, it is possible to improve computational performance by establishing a more appropriate choice of precision. The only input needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method that enables modern and legacy codes to benefit from a reduction in the precision of certain variables without sacrificing accuracy. It consists of a simple idea: we reduce the precision of a group of variables and measure how this affects the outputs, which lets us evaluate the level of precision they truly need. Modifying and recompiling the code for each case to be evaluated would require a prohibitive amount of effort. Instead, the method presented in this paper relies on a tool called a reduced-precision emulator (RPE) that can significantly streamline the process. Using the RPE and a list of parameters containing the precision to be used for each real variable in the code, it is possible within a single binary to emulate the effect of a specific choice of precision on the outputs. Once we are able to emulate the effects of reduced precision, we can proceed to design the tests that reveal the sensitivity of the model variables to their numerical precision. The number of possible combinations is prohibitively large and therefore impossible to explore exhaustively. The alternative of screening the variables individually can provide some insight into the required precision of each variable, but complex interactions involving several variables may remain hidden. Instead, we use a divide-and-conquer algorithm that identifies the parts that require high precision and establishes a set of variables that can handle reduced precision. This method has been tested with two state-of-the-art ocean models, the Nucleus for European Modelling of the Ocean (NEMO) and the Regional Ocean Modeling System (ROMS), with very promising results. This information is crucial for building, in the next phase, an actual mixed-precision version of the code that will bring the promised performance benefits.
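
    The divide-and-conquer search described above can be sketched as follows. The real study drives the RPE emulator inside the NEMO and ROMS Fortran codes and handles interactions between variable groups more carefully; here, reduced precision is merely emulated by casting selected variables of a toy computation to float16, and the variable names and tolerance are invented:

    import numpy as np

    VARIABLES = ["temperature", "salinity", "u_velocity", "v_velocity", "ssh", "density"]

    def toy_model(reduced):
        """Stand-in 'ocean model': a few coupled array updates (hypothetical, not NEMO/ROMS)."""
        rng = np.random.default_rng(0)
        state = {name: rng.standard_normal(1000) for name in VARIABLES}

        def p(name):
            # Emulate reduced precision for selected variables by a float16 round trip.
            x = state[name]
            return x.astype(np.float16).astype(np.float64) if name in reduced else x

        density = 1025.0 + 0.2 * p("temperature") - 0.7 * p("salinity") + 1e-4 * p("density")
        ke = 0.5 * (p("u_velocity") ** 2 + p("v_velocity") ** 2)
        return float(np.mean(density * ke) + np.mean(p("ssh")))

    REFERENCE = toy_model(reduced=set())
    TOLERANCE = 1e-3                       # made-up acceptance threshold on the output

    def acceptable(reduced):
        return abs(toy_model(reduced) - REFERENCE) <= TOLERANCE

    def search(candidates):
        """Return the subset of `candidates` that tolerates reduced precision."""
        if not candidates:
            return set()
        if acceptable(set(candidates)):    # the whole group works at low precision
            return set(candidates)
        if len(candidates) == 1:           # a single variable that needs high precision
            return set()
        half = len(candidates) // 2        # otherwise split the group and recurse
        return search(candidates[:half]) | search(candidates[half:])

    low_precision_ok = search(VARIABLES)
    print("can use reduced precision:", sorted(low_precision_ok))
    print("need high precision:      ", sorted(set(VARIABLES) - low_precision_ok))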

    Modeling air quality at urban scale in the city of Barcelona: A matter of green resolution

    Other grants: UAB transformative agreements. María de Maeztu Unit of Excellence CEX2019-000940-M. Improving air quality in highly polluted cities is a challenge for today's society. Many of the strategies that have been proposed for this purpose promote the creation of green infrastructures. The Weather Research and Forecasting model coupled with chemistry (WRF-Chem) is used to analyze the behavior of the most common pollutants and how they are dispersed under different meteorological conditions. To also consider the impact of including green infrastructures on the urban morphology, the BEP/BEM (Building Effect Parameterization/Building Energy Model) multi-layer urban scheme is included in the system. Using the city of Barcelona as a case study, this paper confirms that the modeling methodology used up to now needs to be reviewed for the design of green cities. Certain limitations of the WRF-Chem+BEP/BEM coupled model when applied at urban resolution are discussed, as well as the reasons for those limitations; the main contribution of this paper is to show that an alternative paradigm, such as Machine Learning techniques, should be considered to address this challenge.

    Cloud-based urgent computing for forest fire spread prediction

    Other grants: UAB transformative agreements. Every year, forest fires cause damage to biodiversity, the atmosphere, and economic activities. Forest fire simulation has improved significantly, but the input data describing fire scenarios are subject to high levels of uncertainty. In this work, a two-stage prediction scheme is used to adjust the unknown parameters. This scheme relies on an input data calibration phase, which is carried out following a genetic algorithm strategy. The calibrated inputs are then pipelined into the actual prediction phase. The two-stage prediction scheme is leveraged by the cloud computing paradigm, which provides a high level of parallelism on demand, elasticity, scalability, and low cost. This paper describes all the models designed to allocate cloud resources to the two-stage scheme in a performance-efficient and cost-effective way. The resulting Cloud-based Urgent Computing (CuCo) architecture has been tested using, as a case study, an extreme wildland fire that took place in California in 2018 (the Camp Fire).
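
    The resource-allocation step mentioned above amounts to choosing a cloud configuration that meets an urgent-computing deadline at minimal cost. The sketch below illustrates that trade-off with an invented instance catalogue, made-up prices, and an Amdahl's-law scaling assumption; it is not one of the CuCo models from the paper:

    SERIAL_FRACTION = 0.05        # share of the two-stage workload that does not parallelize (assumed)
    BASE_RUNTIME_H = 8.0          # hypothetical single-core runtime of the GA calibration stage
    DEADLINE_H = 1.0              # urgent-computing deadline for delivering the prediction

    # (name, virtual CPUs, price per hour in $) -- made-up catalogue
    INSTANCE_TYPES = [("small", 4, 0.20), ("medium", 16, 0.80), ("large", 64, 3.20)]

    def runtime_hours(vcpus):
        """Amdahl's-law estimate of the calibration runtime on `vcpus` cores."""
        return BASE_RUNTIME_H * (SERIAL_FRACTION + (1 - SERIAL_FRACTION) / vcpus)

    best = None
    for name, vcpus, price in INSTANCE_TYPES:
        for count in range(1, 9):                     # consider up to 8 instances of each type
            t = runtime_hours(vcpus * count)
            cost = price * count * t
            if t <= DEADLINE_H and (best is None or cost < best[-1]):
                best = (name, count, t, cost)

    if best:
        name, count, t, cost = best
        print(f"choose {count} x {name}: runtime {t:.2f} h, cost ${cost:.2f}")
    else:
        print("no configuration meets the deadline")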
