
    Assimilation of Perimeter Data and Coupling with Fuel Moisture in a Wildland Fire - Atmosphere DDDAS

    We present a methodology to change the state of the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE, based on Rothermel's formula and the level set method, and with a fuel moisture model. The fire perimeter in the model changes in response to data while the model is running. However, the atmosphere state takes time to develop in response to the forcing by the heat flux from the fire. Therefore, an artificial fire history is created from an earlier fire perimeter to the new perimeter and replayed with the proper heat fluxes to allow the atmosphere state to adjust. The method is an extension of an earlier method to start the coupled fire model from a developed fire perimeter rather than an ignition point. The level set method is also used to identify parameters of the simulation, such as the spread rate and the fuel moisture. The coupled model is available from openwfm.org, and it extends the WRF-Fire code in the WRF release. (Comment: ICCS 2012, 10 pages; corrected some DOI typesetting in the references.)
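
    A rough sketch of the perimeter-replay idea is given below. The arrays, time constants, and heat-release law are illustrative assumptions, not the actual WRF-SFIRE implementation: cells between the earlier and the new perimeter receive an interpolated fire-arrival time, and the corresponding heat flux is then replayed so the atmosphere can spin up against it.

        # Minimal sketch (hypothetical arrays and parameter values) of an "artificial
        # fire history": cells between the old and new perimeters receive an
        # interpolated fire-arrival time, and the heat flux is then replayed over the
        # spin-up window so the atmosphere state can adjust to the fire forcing.
        import numpy as np

        def artificial_arrival_time(d_old, d_new, t_old, t_new):
            """d_old, d_new: signed distances to the old/new perimeters (negative = inside)."""
            t = np.full(d_old.shape, np.inf)          # unburned cells keep t = inf
            t[d_old <= 0.0] = t_old                   # already burned at the earlier perimeter time
            band = (d_old > 0.0) & (d_new <= 0.0)     # newly burned band between the perimeters
            w = d_old[band] / (d_old[band] - d_new[band] + 1e-12)  # 0 at old front, 1 at new front
            t[band] = t_old + w * (t_new - t_old)
            return t

        def replay_heat_flux(t_arrival, t_now, fuel_load, burn_time=600.0, heat_content=1.7e7):
            """Heat flux [W/m^2] at time t_now from cells ignited at t_arrival (exponential burn-off)."""
            since = t_now - t_arrival
            active = (since >= 0.0) & np.isfinite(t_arrival)
            flux = np.zeros_like(fuel_load)
            flux[active] = (fuel_load[active] * heat_content / burn_time
                            * np.exp(-since[active] / burn_time))
            return flux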

    Coupled atmosphere-wildland fire modeling with WRF-Fire

    We describe the physical model, numerical algorithms, and software structure of WRF-Fire. WRF-Fire consists of a fire-spread model, implemented by the level-set method, coupled with the Weather Research and Forecasting model. In every time step, the fire model inputs the surface wind, which drives the fire, and outputs the heat flux from the fire into the atmosphere, which in turn influences the atmosphere. The level-set method allows submesh representation of the burning region and flexible implementation of various ignition modes. WRF-Fire is distributed as a part of WRF, and it uses the WRF parallel infrastructure for parallel computing. (Comment: Version 3.3, 41 pages, 2 tables, 12 figures. As published in Discussions, under review for Geoscientific Model Development.)
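
    The per-time-step exchange described here (wind in, heat flux out, front advanced by the level-set equation) can be sketched as follows; this is a toy stand-in with an assumed spread-rate law and constants, not the WRF-Fire code itself.

        # Minimal sketch of one fire-model step: take the surface wind, advance the
        # level-set function phi (phi <= 0 marks the burning region) with a Godunov
        # upwind discretization of phi_t + R |grad phi| = 0, and diagnose a heat flux
        # from the newly burned cells. Spread-rate law and heat_per_area are assumed.
        import numpy as np

        def spread_rate(wind_speed, r0=0.03, wind_factor=0.8):
            """Hypothetical Rothermel-like rate of spread [m/s], increasing with wind."""
            return r0 * (1.0 + wind_factor * wind_speed)

        def fire_step(phi, wind_speed, dx, dt, heat_per_area=2.0e8):
            R = spread_rate(wind_speed)
            dxm = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference, x
            dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference, x
            dym = (phi - np.roll(phi, 1, axis=1)) / dx    # backward difference, y
            dyp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward difference, y
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                           + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            phi_new = phi - dt * R * grad                 # outward front propagation (R >= 0)
            newly_burned = (phi_new <= 0) & (phi > 0)     # cells the front crossed this step
            heat_flux = np.where(newly_burned, heat_per_area / dt, 0.0)  # [W/m^2] to the atmosphere
            return phi_new, heat_flux

    In the coupled system this flux forces the atmospheric model, whose updated surface wind is passed back to the fire model on the next step, as the abstract describes.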

    Dynamic Data Driven Application System for Wildfire Spread Simulation

    Wildfires have a significant impact on both ecosystems and human society. To effectively manage wildfires, simulation models are used to study and predict wildfire spread. The accuracy of wildfire spread simulations depends on many factors, including GIS data, fuel data, weather data, and high-fidelity wildfire behavior models. Unfortunately, due to the dynamic and complex nature of wildfire, it is impractical to obtain all these data without error. Therefore, predictions from the simulation model will differ from the behavior of the real wildfire. Without assimilating data from the real wildfire and dynamically adjusting the simulation, the difference between the simulation and the real wildfire is likely to grow continuously. With the development of sensor technologies and advances in computer infrastructure, dynamic data driven application systems (DDDAS) have become an active research area in recent years. In a DDDAS, data obtained from wireless sensors are fed into the simulation model to make predictions of the real system. This dynamic input is treated as a measurement used to evaluate the model output and adjust the model states, thereby improving simulation results. To improve the accuracy of wildfire spread simulations, we apply the concept of DDDAS to wildfire spread simulation by dynamically assimilating sensor data from real wildfires into the simulation model. The assimilation system relates the system model to the observation data of the true state and uses analysis approaches to obtain state estimations. We employ Sequential Monte Carlo (SMC) methods (also called particle filters) to carry out data assimilation in this work. Based on the structure of DDDAS, this dissertation presents the data assimilation system and data assimilation results in wildfire spread simulations. We carry out sensitivity analysis for different densities, frequencies, and qualities of sensor data, and quantify the effectiveness of the SMC methods based on different measurement metrics. Furthermore, to improve simulation results, the image-morphing technique is introduced into the DDDAS for wildfire spread simulation.
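
    As a rough illustration of the DDDAS cycle described above (not the dissertation's implementation), the loop below advances an ensemble of simulation states and, whenever sensor data arrive, reweights and resamples the ensemble with a particle filter; the model, likelihood, and sensor interfaces are placeholders.

        # Minimal sketch of a DDDAS loop: a running wildfire simulation periodically
        # ingests sensor observations, a sequential Monte Carlo (particle filter) step
        # adjusts the simulation state, and prediction continues from the adjusted state.
        import numpy as np

        def dddas_loop(init_particles, simulate, observe_likelihood, get_sensor_data, n_cycles):
            """init_particles: array of model states; simulate(state, dt) -> new state;
            observe_likelihood(state, obs) -> p(obs | state); get_sensor_data(k) -> obs or None.
            The 60 s cycle length is an arbitrary choice for the sketch."""
            particles = np.array(init_particles)
            n = len(particles)
            for k in range(n_cycles):
                # 1. Advance every particle with the simulation model (prediction).
                particles = np.array([simulate(p, dt=60.0) for p in particles])
                obs = get_sensor_data(k)
                if obs is None:
                    continue                                # no data this cycle, keep predicting
                # 2. Weight particles by how well they explain the sensor data.
                w = np.array([observe_likelihood(p, obs) for p in particles])
                w = w / w.sum()
                # 3. Resample so the ensemble concentrates on well-fitting states.
                idx = np.random.choice(n, size=n, p=w)
                particles = particles[idx]
            return particles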

    Data Assimilation Based on Sequential Monte Carlo Methods for Dynamic Data Driven Simulation

    Simulation models are widely used for studying and predicting the dynamic behaviors of complex systems. Inaccurate simulation results are often inevitable due to imperfect models and inaccurate inputs. With the advances of sensor technology, it is possible to collect large amounts of real-time observation data from real systems during simulations. This gives rise to a new paradigm of Dynamic Data Driven Simulation (DDDS), where a simulation system dynamically assimilates real-time observation data into a running model to improve simulation results. Data assimilation for DDDS is a challenging task because sophisticated simulation models often have: 1) nonlinear, non-Gaussian behavior; 2) non-analytical expressions of the involved probability density functions; 3) high-dimensional state spaces; and 4) high computation cost. Due to these properties, most existing data assimilation methods fail to effectively support data assimilation for DDDS in one way or another. This work develops algorithms and software to perform data assimilation for dynamic data driven simulation through non-parametric statistical inference based on sequential Monte Carlo (SMC) methods (also called particle filters). A bootstrap particle filter-based data assimilation framework is first developed, where the proposal distribution is constructed from the simulation models and the statistical cores of the noises. The bootstrap particle filter-based framework is relatively easy to implement. However, it is ineffective when the uncertainty of the simulation model is much larger than that of the observation model (i.e., a peaked likelihood) or when rare events happen. To improve the effectiveness of data assimilation, a new data assimilation framework, named the SenSim framework, is then proposed, which has a more advanced proposal distribution that uses knowledge from both the simulation models and the sensor readings. Both the bootstrap particle filter-based framework and the SenSim framework are applied and evaluated in two case studies: wildfire spread simulation and lane-based traffic simulation. Experimental results demonstrate the effectiveness of the proposed data assimilation methods. A software package is also created to encapsulate the different components of SMC methods for supporting data assimilation of general simulation models.
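
    The difference between the two proposal distributions can be illustrated on a scalar toy state (say, fire-front position along a transect); the sensor-informed proposal below is a generic stand-in for the SenSim idea, not its actual construction, and all noise parameters are assumptions.

        # Bootstrap: sample from the model transition and weight by the likelihood alone.
        # Sensor-informed: sample near the observation and correct the weights by
        # transition / proposal so the filter remains statistically consistent.
        import numpy as np
        from scipy.stats import norm

        def bootstrap_step(x_prev, y, spread=1.0, q_model=0.5, r_obs=0.2):
            x = x_prev + spread + q_model * np.random.randn(len(x_prev))  # proposal = model transition
            w = norm.pdf(y, loc=x, scale=r_obs)                           # weight = likelihood p(y | x)
            return x, w / w.sum()

        def sensor_informed_step(x_prev, y, spread=1.0, q_model=0.5, r_obs=0.2):
            x = y + r_obs * np.random.randn(len(x_prev))                  # proposal centred on the observation
            transition = norm.pdf(x, loc=x_prev + spread, scale=q_model)  # p(x | x_prev)
            likelihood = norm.pdf(y, loc=x, scale=r_obs)                  # p(y | x)
            proposal = norm.pdf(x, loc=y, scale=r_obs)                    # q(x | y)
            w = likelihood * transition / proposal                        # importance-weight correction
            return x, w / w.sum()

    When the likelihood is sharply peaked (accurate sensors) or the model drifts, the bootstrap proposal places most particles where the likelihood is negligible, which is the failure mode the abstract attributes to it; the corrected weights keep the sensor-informed variant unbiased.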

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results when simulating large-scale spatial temporal systems such as wildfires, road traffic, and floods. Particle filters are important methods to support data assimilation. While particle filters can work effectively with sophisticated simulation models, they have a high computation cost due to the large number of particles needed in order to converge to the true system state. This is especially true for large-scale spatial temporal simulation systems, which have high-dimensional state spaces and high computation costs by themselves. To address the performance issue of particle filter-based data assimilation, this dissertation develops distributed particle filters and applies them to large-scale spatial temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system states and model parameters based on an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy based on the application of data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing gains more and more popularity, we developed a parallel and distributed particle filter based on Hadoop & MapReduce to support large-scale data assimilation.
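
    A greedy way to compute such a routing plan after centralized resampling is sketched below; it is meant only to illustrate the load-balancing problem and is not necessarily identical to the minimal transfer or maximal balance policies developed in the dissertation.

        # Minimal sketch of particle routing after centralized resampling: each
        # processing unit (PU) should end with the same number of particles, and
        # surplus PUs ship only their excess copies to deficit PUs.
        def plan_routing(copies_per_pu):
            """copies_per_pu: number of resampled particle copies owned by each PU.
            Returns a list of (src_pu, dst_pu, n_particles) transfers balancing the load."""
            n_pu = len(copies_per_pu)
            target = sum(copies_per_pu) // n_pu              # assume the total divides evenly
            surplus = [(i, c - target) for i, c in enumerate(copies_per_pu) if c > target]
            deficit = [(i, target - c) for i, c in enumerate(copies_per_pu) if c < target]
            transfers = []
            while surplus and deficit:
                (s, extra), (d, need) = surplus[-1], deficit[-1]
                moved = min(extra, need)
                transfers.append((s, d, moved))
                surplus[-1] = (s, extra - moved)
                deficit[-1] = (d, need - moved)
                if surplus[-1][1] == 0:
                    surplus.pop()
                if deficit[-1][1] == 0:
                    deficit.pop()
            return transfers

        # Example: 4 PUs hold 900, 150, 100, 50 copies after resampling 1200 particles.
        print(plan_routing([900, 150, 100, 50]))   # -> [(0, 3, 250), (0, 2, 200), (0, 1, 150)]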

    Cellular automata simulations of field scale flaming and smouldering wildfires in peatlands

    In peatland wildfires, flaming vegetation can initiate a smouldering fire by igniting the peat underneath, thus creating a positive feedback to climate change by releasing carbon that cannot be reabsorbed by the ecosystem. Currently, there are very few models of peatland wildfires at the field scale, hindering the development of effective mitigation strategies. This lack of models is mainly caused by the complexity of the phenomena, which involve 3-D spread and km-scale domains, and by the very large computational resources required. This thesis aims to understand field-scale peatland wildfires, considering flaming and smouldering, via cellular automata: discrete models that use simple rules. Five multidimensional models were developed: two laboratory-scale models for smouldering, BARA and BARAPPY, and three field-scale models for flaming and smouldering, KAPAS, KAPAS II, and SUBALI. The models were validated against laboratory experiments and field data. BARA accurately simulates smouldering of peat with realistic moisture distributions and predicts the formation of unburned patches. BARAPPY brings physics into BARA and predicts the depth-of-burn profile, but needs 240 times more computational resources. KAPAS showed that the smouldering burnt area decreases exponentially with higher peat moisture content. KAPAS II integrates the daily temporal variation of moisture content and revealed that omitting this temporal variation significantly underestimates the smouldering burnt area in the long term. SUBALI, the final model of the thesis, integrates KAPAS II with BARA and considers the groundwater table to predict the carbon emissions of peatland wildfires. Applied to Indonesia, SUBALI predicts that in the El Niño years of 2015 and 2019, 0.40 Gt-C (literature estimates: 0.23 to 0.51 Gt-C) and 0.16 Gt-C were released, respectively, with 75% of the emissions coming from smouldering. This thesis provides knowledge and models for understanding the spread of flaming and smouldering wildfires in peatlands, which can contribute, through faster-than-real-time simulations, to finding optimum firefighting strategies, assessing the vulnerability of peatlands to wildfires, and minimising the negative impacts of peatland wildfires on people and the environment.
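
    The cellular-automaton approach can be illustrated with a toy smouldering rule in the spirit of, but not reproducing, the BARA/KAPAS rules: ignition probability falls with local moisture content and burning cells burn out after a fixed residence time. Every parameter below is an illustrative assumption.

        # Minimal sketch of a probabilistic cellular automaton for smouldering spread.
        import numpy as np

        UNBURNED, BURNING, BURNED = 0, 1, 2

        def ca_step(state, moisture, burn_clock, p0=0.6, mc_crit=1.2, burnout_steps=5):
            new_state = state.copy()
            burning = (state == BURNING)
            # A cell "sees" fire if any 4-neighbour is burning (periodic boundaries for simplicity).
            neighbour_fire = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0)
                              | np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
            # Ignition probability drops linearly to zero at a critical moisture content.
            p_ign = p0 * np.clip(1.0 - moisture / mc_crit, 0.0, 1.0)
            ignite = (state == UNBURNED) & neighbour_fire & (np.random.rand(*state.shape) < p_ign)
            new_state[ignite] = BURNING
            # Burning cells burn out after a fixed residence time.
            burn_clock[burning] += 1
            new_state[burning & (burn_clock >= burnout_steps)] = BURNED
            return new_state, burn_clock

        # Example: dry patch ignited at the centre of a 200 x 200 peat plot.
        state = np.zeros((200, 200), dtype=int)
        state[100, 100] = BURNING
        moisture = np.random.uniform(0.4, 1.5, size=state.shape)   # gravimetric moisture content
        clock = np.zeros_like(state)
        for _ in range(200):
            state, clock = ca_step(state, moisture, clock)
        burned_fraction = np.mean(state == BURNED)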

    Data Assimilation for Wildland Fires: Ensemble Kalman filters in coupled atmosphere-surface models

    Two wildland fire models are described: one based on reaction-diffusion-convection partial differential equations, and one based on semi-empirical fire spread by the level set method. The level set method model is coupled with the Weather Research and Forecasting (WRF) atmospheric model. The regularized and the morphing ensemble Kalman filter are used for data assimilation. (Comment: Minor revision, except the description of the model expanded. 29 pages, 9 figures, 53 references.)
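
    For reference, a plain stochastic (perturbed-observation) EnKF analysis step is sketched below; the regularization and morphing machinery the paper adds on top of this building block is not shown, and all dimensions are toy values.

        # Minimal EnKF analysis step. X: state ensemble (n x N), H: linear observation
        # operator (m x n), y: observation (m,), R: observation error covariance (m x m).
        import numpy as np

        def enkf_analysis(X, H, y, R, rng=np.random.default_rng(0)):
            n, N = X.shape
            A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
            P = A @ A.T / (N - 1)                            # sample forecast covariance
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.solve(S, np.eye(len(y))) # Kalman gain K = P H^T (H P H^T + R)^-1
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T  # perturbed obs
            return X + K @ (Y - H @ X)                       # analysis ensemble

        # Toy usage: 10-dimensional state, 20 members, observing the first 3 components.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(10, 20))
        H = np.eye(3, 10)
        y = np.array([1.0, 0.5, -0.2])
        R = 0.1 * np.eye(3)
        Xa = enkf_analysis(X, H, y, R, rng=rng)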