
    Assimilation of Perimeter Data and Coupling with Fuel Moisture in a Wildland Fire - Atmosphere DDDAS

    We present a methodology to change the state of the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE, based on Rothermel's formula and the level set method, and with a fuel moisture model. The fire perimeter in the model changes in response to data while the model is running. However, the atmosphere state takes time to develop in response to the forcing by the heat flux from the fire. Therefore, an artificial fire history is created from an earlier fire perimeter to the new perimeter and replayed with the proper heat fluxes to allow the atmosphere state to adjust. The method extends an earlier method for starting the coupled fire model from a developed fire perimeter rather than an ignition point. The level set method is also used to identify parameters of the simulation, such as the spread rate and the fuel moisture. The coupled model is available from openwfm.org, and it extends the WRF-Fire code in the WRF release. (ICCS 2012, 10 pages.)
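As a rough illustration of the level-set front propagation the abstract refers to, the sketch below advances a fire perimeter as the zero contour of a level-set function on a small grid. It assumes a constant spread rate in place of Rothermel's fuel- and wind-dependent formula, and all grid sizes and parameter values are illustrative; it is not the WRF-SFIRE implementation.

```python
import numpy as np

# Minimal sketch of level-set fire-front propagation, not the WRF-SFIRE code.
# The fire perimeter is the zero contour of phi; it expands outward at a spread
# rate R, here a constant placeholder instead of Rothermel's fuel/wind formula.

def propagate_front(phi, spread_rate, dx, dt, steps):
    """Advance phi by solving d(phi)/dt = -R * |grad phi| with a Godunov upwind scheme."""
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
        dym = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in y
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in y
        grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                       + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
        phi = phi - dt * spread_rate * grad
    return phi

n, dx = 100, 10.0                                  # 100x100 cells, 10 m resolution (illustrative)
y, x = np.mgrid[0:n, 0:n] * dx
phi0 = np.hypot(x - 500.0, y - 500.0) - 30.0       # signed distance to a 30 m ignition circle
phi = propagate_front(phi0, spread_rate=0.5, dx=dx, dt=2.0, steps=200)
print("cells inside the fire perimeter:", int((phi <= 0.0).sum()))
```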

    Application of particle filters to regional-scale wildfire spread

    European Conference on Thermophysical Properties (ECTP), Porto, Portugal, Sep 05, 2014. This paper demonstrates the capability of particle filters to sequentially improve the simulation and forecast of wildfire propagation as new fire front observations become available. Particle filters, also called Sequential Monte Carlo (SMC) methods, belong to the class of inverse modeling procedures in which measurements are incorporated (assimilated) into a computational model so as to formulate feedback information on the uncertain model state variables and/or parameters, through representations of their probability density functions (PDF). Based on a simple sampling importance distribution and resampling techniques, particle filters combine Monte Carlo sampling with sequential Bayesian filtering. This study compares the performance of the Sampling Importance Resampling (SIR) and the Auxiliary Sampling Importance Resampling (ASIR) filters for the sequential estimation of a progress variable and of vegetation parameters of the rate of fire spread (ROS) model, which are all treated as state variables. They are applied, for validation, to a real-world case corresponding to a reduced-scale controlled grassland fire experiment; results indicate that both the SIR and the ASIR filters are able to accurately track the observed fire fronts at a moderate computational cost. Particle filters therefore show a good ability to predict the propagation of controlled fires and to significantly increase fire simulation accuracy. While still at an early stage of development, this data-driven strategy is promising for regional-scale wildfire spread forecasting.
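The following minimal bootstrap SIR sketch illustrates the sequential-estimation idea described above on a toy one-dimensional front: the front position (progress variable) and an uncertain spread-rate parameter are tracked jointly from noisy front observations. The dynamics, noise levels, and particle counts are illustrative assumptions, not the ROS model or the experimental setup of the paper.

```python
import numpy as np

# Minimal bootstrap SIR sketch: jointly estimate a toy fire-front position (the
# "progress variable") and an uncertain spread-rate parameter from noisy front
# observations. All values here are illustrative.

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 500, 30, 10.0
obs_std = 5.0                                   # observation error on front position (m)
true_ros, true_front = 0.4, 0.0                 # synthetic truth: spread rate (m/s), position (m)

# Each particle carries [front_position, spread_rate]; the parameter is part of the state.
particles = np.column_stack([rng.normal(0.0, 10.0, n_particles),
                             rng.uniform(0.1, 1.0, n_particles)])

for _ in range(n_steps):
    true_front += true_ros * dt
    y = true_front + rng.normal(0.0, obs_std)   # synthetic front observation

    # Forecast: propagate each particle with a little model noise.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0.0, 1.0, n_particles)
    particles[:, 1] += rng.normal(0.0, 0.01, n_particles)

    # Update: weight by the observation likelihood, then resample (SIR).
    w = np.exp(-0.5 * ((y - particles[:, 0]) / obs_std) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print("estimated front (m):", round(particles[:, 0].mean(), 1), "truth:", round(true_front, 1))
print("estimated ROS (m/s):", round(particles[:, 1].mean(), 2), "truth:", true_ros)
```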

    Data Assimilation for Spatial Temporal Simulations Using Localized Particle Filtering

    As sensor data becomes more and more available, there is increasing interest in assimilating real-time sensor data into spatial temporal simulations to achieve more accurate simulation or prediction results. Particle Filters (PFs), also known as Sequential Monte Carlo methods, hold great promise in this area as they use Bayesian inference and stochastic sampling techniques to recursively estimate the states of dynamic systems from given observations. However, PFs face major challenges in working effectively for complex spatial temporal simulations due to the high-dimensional state space of the simulation models, which typically cover large areas and have a large number of spatially dependent state variables. As the state space dimension increases, the number of particles must increase exponentially in order to converge to the true system state. The purpose of this dissertation work is to develop localized particle filtering to support PF-based data assimilation for large-scale spatial temporal simulations. We develop a spatially dependent particle filtering framework that breaks the system state and observation data into sub-regions and then carries out localized particle filtering based on these spatial regions. The framework exploits the spatial locality of system state and observation data, and employs the divide-and-conquer principle to reduce state dimension and data complexity. Within this framework, we propose a two-level automated spatial partitioning method that provides optimized and balanced spatial partitions with fewer boundary sensors. We also consider different types of data to effectively support data assimilation for spatial temporal simulations, including both hard data, which are measurements from physical devices, and soft data, which are information derived from messages, reports, and social networks. The developed framework and methods are applied to large-scale wildfire spread simulations and achieve improved results. Furthermore, we compare the proposed framework to existing particle filter-based data assimilation frameworks and evaluate the performance of each.
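A minimal sketch of the localized idea, under simplifying assumptions (a 2x2 partition, one sensor per sub-region, Gaussian observation noise): each sub-region is weighted and resampled using only the sensor inside it, so the effective dimension handled by each local filter stays small. This is not the dissertation's two-level partitioning method.

```python
import numpy as np

# Sketch of localized particle filtering on a toy random field: weight and
# resample each sub-region using only the sensor inside it. The partition,
# sensor placement, and noise levels are illustrative assumptions.

rng = np.random.default_rng(1)
n_particles, grid, obs_std = 200, (20, 20), 0.2
regions = [(slice(0, 10), slice(0, 10)), (slice(0, 10), slice(10, 20)),
           (slice(10, 20), slice(0, 10)), (slice(10, 20), slice(10, 20))]
sensors = [(5, 5), (5, 15), (15, 5), (15, 15)]          # one observed cell per region

truth = rng.normal(0.0, 1.0, grid)                      # "true" field, e.g. fire intensity
ensemble = rng.normal(0.0, 1.0, (n_particles, *grid))   # prior particles

for (rows, cols), (i, j) in zip(regions, sensors):
    y = truth[i, j] + rng.normal(0.0, obs_std)
    # Local weights use only this region's observation.
    w = np.exp(-0.5 * ((y - ensemble[:, i, j]) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)
    # Resample only this sub-region; the rest of the state is left untouched.
    ensemble[:, rows, cols] = ensemble[idx][:, rows, cols]

print("posterior mean at the sensors:", np.round([ensemble[:, i, j].mean() for i, j in sensors], 2))
print("truth at the sensors:        ", np.round([truth[i, j] for i, j in sensors], 2))
```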

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial temporal systems such as wildfires, road traffic, and floods. Particle filters are important methods for supporting data assimilation. While particle filters can work effectively with sophisticated simulation models, they have a high computation cost due to the large number of particles needed to converge to the true system state. This is especially true for large-scale spatial temporal simulation systems, which have high-dimensional state spaces and are computationally expensive in their own right. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters based on an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy based on the application of data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing gains popularity, we developed a parallel and distributed particle filter based on Hadoop and MapReduce to support large-scale data assimilation.
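As a hedged illustration of the particle-routing step, the sketch below builds a transfer plan in the spirit of a minimal-transfer policy: after centralized resampling, PUs holding more than their equal share ship only the excess particle copies to PUs holding fewer. The greedy pairing and the example counts are illustrative, not the exact policies developed in the dissertation.

```python
# Sketch of a minimal-transfer style routing plan after centralized resampling.
# counts[p] = particles held by PU p after resampling; target = desired count per PU.

def routing_plan(counts, target):
    """Return a list of (src_pu, dst_pu, n_particles) transfers that rebalances
    the PUs by moving only surplus copies."""
    surplus = [[p, c - target] for p, c in enumerate(counts) if c > target]
    deficit = [[p, target - c] for p, c in enumerate(counts) if c < target]
    plan = []
    while surplus and deficit:
        n = min(surplus[-1][1], deficit[-1][1])
        plan.append((surplus[-1][0], deficit[-1][0], n))
        surplus[-1][1] -= n
        deficit[-1][1] -= n
        if surplus[-1][1] == 0:
            surplus.pop()
        if deficit[-1][1] == 0:
            deficit.pop()
    return plan

# 32 particles on 4 PUs after resampling; each PU should hold 8 again.
print(routing_plan([13, 2, 9, 8], target=8))    # -> [(2, 1, 1), (0, 1, 5)]
```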

    Dynamic Data Driven Application System for Wildfire Spread Simulation

    Wildfires have a significant impact on both ecosystems and human society. To effectively manage wildfires, simulation models are used to study and predict wildfire spread. The accuracy of wildfire spread simulations depends on many factors, including GIS data, fuel data, weather data, and high-fidelity wildfire behavior models. Unfortunately, due to the dynamic and complex nature of wildfire, it is impractical to obtain all of these data without error. Therefore, predictions from the simulation model will differ from the behavior of a real wildfire. Without assimilating data from the real wildfire and dynamically adjusting the simulation, the difference between the simulation and the real wildfire is very likely to grow continuously. With the development of sensor technologies and the advance of computing infrastructure, dynamic data driven application systems (DDDAS) have become an active research area in recent years. In a DDDAS, data obtained from wireless sensors is fed into the simulation model to make predictions of the real system. This dynamic input is treated as a measurement used to evaluate the output and adjust the states of the model, thus improving simulation results. To improve the accuracy of wildfire spread simulations, we apply the concept of DDDAS to wildfire spread simulation by dynamically assimilating sensor data from real wildfires into the simulation model. The assimilation system relates the system model to the observation data of the true state and uses analysis approaches to obtain state estimations. We employ Sequential Monte Carlo (SMC) methods (also called particle filters) to carry out data assimilation in this work. Based on the structure of DDDAS, this dissertation presents the data assimilation system and data assimilation results for wildfire spread simulations. We carry out sensitivity analysis for different densities, frequencies, and qualities of sensor data, and quantify the effectiveness of SMC methods based on different measurement metrics. Furthermore, to improve simulation results, the image-morphing technique is introduced into the DDDAS for wildfire spread simulation.
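A schematic of the DDDAS cycle described above, with placeholder callbacks standing in for the wildfire simulation model, the sensor feed, and the SMC-based adjustment; the names and the toy usage are illustrative only, not the dissertation's system.

```python
# Skeleton of a DDDAS cycle: advance the simulation, and whenever sensor data
# arrives, adjust the simulation state before continuing. The callbacks below
# are trivial placeholders, not the wildfire model or the SMC update.

def dddas_loop(state, advance_model, read_sensors, assimilate, n_cycles):
    for cycle in range(n_cycles):
        state = advance_model(state)             # free-running simulation step
        obs = read_sensors(cycle)                # real-time sensor data, or None
        if obs is not None:
            state = assimilate(state, obs)       # e.g. an SMC/particle-filter update
    return state

# Toy usage: the model overestimates growth (1.15 vs. a "true" 1.1 per cycle); an
# observation of the true value arrives every 5th cycle and pulls the state back.
final = dddas_loop(
    state=10.0,
    advance_model=lambda s: s * 1.15,
    read_sensors=lambda k: 10.0 * 1.1 ** (k + 1) if k % 5 == 0 else None,
    assimilate=lambda s, y: 0.5 * s + 0.5 * y,   # crude nudge toward the observation
    n_cycles=20,
)
print("final state:", round(final, 2), "true value:", round(10.0 * 1.1 ** 20, 2))
```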

    Procedure for improving wildfire simulations using observations

    This report suggests a variational update method for improving wildfire simulations, using observations as feedback to update the simulation state. We first assume a one-dimensional fire model for simplicity and present numerical simulations obtained in this case. As possible alternative approaches, we also discuss two other update methods: a particle filter method and an optimal control method.
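For the one-dimensional case, a variational update can be sketched as minimizing a quadratic cost that penalizes departure from both the model background and the observation; the snippet below shows this for a scalar front position with illustrative error variances, and is not the report's formulation.

```python
# Minimal sketch of a variational update for a scalar (one-dimensional) front
# position: minimize a quadratic cost penalizing departure from both the model
# background and the observation. All values are illustrative.

def variational_update(x_background, y_obs, var_b, var_o):
    """Minimizer of J(x) = (x - x_background)**2 / var_b + (x - y_obs)**2 / var_o,
    which is available in closed form for this quadratic cost."""
    return (x_background / var_b + y_obs / var_o) / (1.0 / var_b + 1.0 / var_o)

x_b = 120.0   # front position predicted by the 1-D fire model (m)
y = 135.0     # observed front position (m)
x_a = variational_update(x_b, y, var_b=100.0, var_o=25.0)
print("updated front position:", x_a)   # 132.0, pulled toward the more accurate observation
```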

    Massively parallel implicit equal-weights particle filter for ocean drift trajectory forecasting

    Forecasting of ocean drift trajectories is important for many applications, including search and rescue operations, oil spill cleanup, and iceberg risk mitigation. In an operational setting, forecasts of drift trajectories are produced from computationally demanding forecasts of three-dimensional ocean currents. Herein, we investigate a complementary approach for shorter time scales by applying the recently proposed two-stage implicit equal-weights particle filter to a simplified ocean model. To achieve this, we present a new algorithmic design for a data-assimilation system in which all components, including the model, model errors, and particle filter, take advantage of massively parallel compute architectures such as graphics processing units. Faster computations can enable in-situ and ad-hoc model runs for emergency management, and larger ensembles for better uncertainty quantification. Using a challenging test case with near-realistic chaotic instabilities, we run data-assimilation experiments based on synthetic observations from drifting and moored buoys, and analyze the trajectory forecasts for the drifters. Our results show that even sparse drifter observations are sufficient to significantly improve short-term drift forecasts up to twelve hours. With equidistant moored buoys observing only 0.1% of the state space, the ensemble gives an accurate description of the true state after data assimilation, followed by a high-quality probabilistic forecast.
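The ensemble-forecast step can be illustrated, very roughly, by advecting a drifter through each ensemble member's current field and reporting the spread of the predicted positions. The constant-plus-noise currents and the 12-hour horizon below are illustrative assumptions, not the paper's simplified ocean model or its implicit equal-weights particle filter.

```python
import numpy as np

# Rough illustration of ensemble drift-trajectory forecasting: advect a drifter
# through each ensemble member's (here constant) current field and report the
# spread of the predicted positions. All values are illustrative.

rng = np.random.default_rng(2)
n_members, n_steps, dt = 50, 72, 600.0                # 72 x 10 min = 12 h forecast

# Each member carries slightly different currents, mimicking post-assimilation spread.
currents = np.array([0.3, 0.1]) + rng.normal(0.0, 0.05, (n_members, 2))   # m/s (east, north)

positions = np.zeros((n_members, 2))                  # drifter released at the origin
for _ in range(n_steps):
    positions += currents * dt                        # forward-Euler advection

print("12 h mean drift (km):   ", np.round(positions.mean(axis=0) / 1000.0, 2))
print("12 h ensemble std (km): ", np.round(positions.std(axis=0) / 1000.0, 2))
```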