
    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial temporal systems such as wildfire, road traffic, and flood. Particle filters are important methods for supporting data assimilation. While particle filters can work effectively with sophisticated simulation models, they have a high computation cost due to the large number of particles needed to converge to the true system state. This is especially true for large-scale spatial temporal simulation systems, which have high-dimensional state spaces and are computationally expensive in their own right. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters in an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy, based on data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing gains more and more popularity, we developed a parallel and distributed particle filter based on Hadoop and MapReduce to support large-scale data assimilation.
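As a rough illustration of the resample-then-route step described above, the sketch below pairs systematic resampling with a greedy surplus-to-deficit balancing heuristic. This is a toy assumption for illustration only; the `minimal_transfer_routing` helper is not the dissertation's actual minimal transfer or maximal balance policy.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform offset, N evenly spaced pointers."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumsum, positions)

def minimal_transfer_routing(counts, target):
    """Toy balancing heuristic: PUs holding more particles than their
    target ship the surplus to PUs in deficit; returns (src, dst, n) moves."""
    surplus = {i: c - t for i, (c, t) in enumerate(zip(counts, target)) if c > t}
    deficit = {i: t - c for i, (c, t) in enumerate(zip(counts, target)) if c < t}
    transfers = []
    for dst, need in deficit.items():
        for src in list(surplus):
            if need == 0:
                break
            move = min(need, surplus[src])
            transfers.append((src, dst, move))
            surplus[src] -= move
            need -= move
            if surplus[src] == 0:
                del surplus[src]
    return transfers
```

For example, PU counts of [10, 2] against a balanced target of [6, 6] yield a single transfer of 4 particles from PU 0 to PU 1.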

    Application of particle filters to regional-scale wildfire spread

    European Conference on Thermophysical Properties (ECTP), Porto, Portugal, SEP 05-05, 2014. International audience. This paper demonstrates the capability of particle filters for sequentially improving the simulation and forecast of wildfire propagation as new fire front observations become available. Particle filters, also called Sequential Monte Carlo (SMC) methods, fit into the domain of inverse modeling procedures, where measurements are incorporated (assimilated) into a computational model so as to formulate feedback information on the uncertain model state variables and/or parameters, through representations of their probability density functions (PDFs). Based on a simple sampling importance distribution and resampling techniques, particle filters combine Monte Carlo sampling with sequential Bayesian filtering problems. This study compares the performance of the Sampling Importance Resampling (SIR) and the Auxiliary Sampling Importance Resampling (ASIR) filters for the sequential estimation of a progress variable and of vegetation parameters of the Rate Of fire Spread (ROS) model, which are all treated as state variables. They are applied to a real-world case corresponding to a reduced-scale controlled grassland fire experiment for validation; results indicate that both the SIR and the ASIR filters are able to accurately track the observed fire fronts at a moderate computational cost. Particle filters therefore show a good ability to predict the propagation of controlled fires and to significantly increase fire simulation accuracy. While still at an early stage of development, this data-driven strategy is quite promising for regional-scale wildfire spread forecasting.
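The predict-weight-resample cycle of an SIR filter can be sketched on a toy one-dimensional random-walk state with Gaussian observations. This is a generic illustration under assumed toy dynamics, not the ROS model or the ASIR variant compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(obs, n_particles=500, proc_std=0.5, obs_std=1.0):
    """Minimal SIR (bootstrap) filter for a 1-D random-walk state observed
    with Gaussian noise; returns the posterior-mean estimate at each step."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        # predict: propagate particles through the (toy) motion model
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # weight: Gaussian likelihood of the new observation
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # resample: draw particles in proportion to their weights
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)
```

Fed a constant observation sequence, the estimate converges from the prior toward the observed value within a few assimilation cycles.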

    On the merits of sparse surrogates for global sensitivity analysis of multi-scale nonlinear problems: application to turbulence and fire-spotting model in wildland fire simulators

    Many nonlinear phenomena, whose numerical simulation is not straightforward, depend on a set of parameters in a way that is not easy to predict beforehand. Wildland fires in the presence of strong winds fall into this category, in part due to the occurrence of fire-spotting. We present a global sensitivity analysis of a new sub-model for turbulence and fire-spotting included in a wildfire spread model based on a stochastic representation of the fireline. To limit the number of model evaluations, fast surrogate models based on generalized Polynomial Chaos (gPC) and Gaussian Processes are used to identify the key parameters affecting the topology and size of the burnt area. This study investigates the application of these surrogates to compute Sobol' sensitivity indices in an idealized test case. The performances of the surrogates for varying size and type of training set, as well as for varying parameterization and choice of algorithms, are compared. In particular, different types of truncation and projection strategies are tested for the gPC surrogates. The best performance was achieved using a gPC strategy based on sparse least-angle regression (LAR) and a low-discrepancy Halton sequence. Still, the LAR-based gPC surrogate tends to filter out the information coming from parameters with large length-scales, which is not the case for the cleaning-based gPC surrogate. Wind is known to drive fire propagation; the results show that it is, more generally, the leading factor governing the generation of secondary fires. Using a sparse surrogate is thus a promising strategy for analyzing new models and their dependency on input parameters in wildfire applications.
    This research is supported by the Basque Government through the BERC 2014–2017 and BERC 2018–2021 programs, by the Spanish Ministry of Economy and Competitiveness MINECO through BCAM Severo Ochoa accreditations SEV-2013-0323 and SEV-2017-0718 and through project MTM2016-76016-R “MIP”, and by the PhD grant “La Caixa2014”. The authors acknowledge EDF R&D for their support on the OpenTURNS library. They also acknowledge Pamphile Roy and Matthias De Lozzo at CERFACS for helpful discussions on the batman and scikit-learn tools.
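First-order Sobol' indices can also be estimated without any surrogate, via a plain Monte Carlo pick-freeze (Saltelli-type) estimator. The sketch below uses a simple additive test function with known indices; it is a baseline illustration only, the point of the gPC and Gaussian-process surrogates above being precisely to avoid this many model evaluations.

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_first_order(f, dim, n=20_000):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for f with independent U(0,1) inputs."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    S = np.empty(dim)
    for i in range(dim):
        AB = A.copy()
        AB[:, i] = B[:, i]  # freeze every input except x_i
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S

# additive test function: exact indices are S1 = 0.8, S2 = 0.2
f = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1]
```

For this additive function the estimator recovers the analytical indices to within Monte Carlo error, at the cost of n*(dim + 2) model evaluations.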

    Data Assimilation Based on Sequential Monte Carlo Methods for Dynamic Data Driven Simulation

    Simulation models are widely used for studying and predicting the dynamic behaviors of complex systems. Inaccurate simulation results are often inevitable due to imperfect models and inaccurate inputs. With the advances of sensor technology, it is possible to collect large amounts of real-time observation data from real systems during simulations. This gives rise to a new paradigm of Dynamic Data Driven Simulation (DDDS), where a simulation system dynamically assimilates real-time observation data into a running model to improve simulation results. Data assimilation for DDDS is a challenging task because sophisticated simulation models often have: 1) nonlinear, non-Gaussian behavior; 2) non-analytical expressions of the involved probability density functions; 3) a high-dimensional state space; and 4) high computation cost. Due to these properties, most existing data assimilation methods fail, in one way or another, to effectively support data assimilation for DDDS. This work develops algorithms and software to perform data assimilation for dynamic data driven simulation through non-parametric statistical inference based on sequential Monte Carlo (SMC) methods (also called particle filters). A bootstrap particle filter-based data assimilation framework is developed first, where the proposal distribution is constructed from the simulation models and the statistical properties of the noises. The bootstrap particle filter-based framework is relatively easy to implement. However, it is ineffective when the uncertainty of the simulation model is much larger than that of the observation model (i.e., a peaked likelihood) or when rare events happen. To improve the effectiveness of data assimilation, a new data assimilation framework, named the SenSim framework, is then proposed, which has a more advanced proposal distribution that uses knowledge from both the simulation models and the sensor readings.
    Both the bootstrap particle filter-based framework and the SenSim framework are applied and evaluated in two case studies: wildfire spread simulation and lane-based traffic simulation. Experimental results demonstrate the effectiveness of the proposed data assimilation methods. A software package is also created to encapsulate the different components of SMC methods for supporting data assimilation of general simulation models.
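The weight degeneracy that motivates the SenSim proposal distribution can be diagnosed with the effective sample size (ESS) of the normalized particle weights: a peaked likelihood concentrates nearly all weight on a handful of particles. The prior particles and the single Gaussian likelihood below are toy assumptions for illustration.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; values far below the
    particle count signal degeneracy of a bootstrap (prior) proposal."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

rng = np.random.default_rng(2)
particles = rng.normal(0.0, 1.0, 1000)  # particles drawn from the prior
y = 0.0                                 # a single observation

def weights_for(obs_std):
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    return w / w.sum()

broad = effective_sample_size(weights_for(5.0))    # flat likelihood
peaked = effective_sample_size(weights_for(0.01))  # peaked likelihood
```

With the flat likelihood the ESS stays close to the full 1000 particles, while the peaked likelihood collapses it to a few dozen at most.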

    Data Assimilation for Spatial Temporal Simulations Using Localized Particle Filtering

    As sensor data become more and more available, there is an increasing interest in assimilating real-time sensor data into spatial temporal simulations to achieve more accurate simulation or prediction results. Particle Filters (PFs), also known as Sequential Monte Carlo methods, hold great promise in this area, as they use Bayesian inference and stochastic sampling techniques to recursively estimate the states of dynamic systems from given observations. However, PFs face major challenges in working effectively for complex spatial temporal simulations due to the high-dimensional state space of the simulation models, which typically cover large areas and have a large number of spatially dependent state variables. As the state space dimension increases, the number of particles must increase exponentially in order to converge to the true system state. The purpose of this dissertation work is to develop localized particle filtering to support PF-based data assimilation for large-scale spatial temporal simulations. We develop a spatially dependent particle-filtering framework that breaks the system state and observation data into sub-regions and then carries out localized particle filtering based on these spatial regions. The developed framework exploits the spatial locality of system state and observation data, and employs the divide-and-conquer principle to reduce state dimension and data complexity. Within this framework, we propose a two-level automated spatial partitioning method to provide optimized and balanced spatial partitions with fewer boundary sensors. We also consider different types of data to effectively support data assimilation for spatial temporal simulations. These data include both hard data, which are measurements from physical devices, and soft data, which are information extracted from messages, reports, and social networks.
    The developed framework and methods are applied to large-scale wildfire spread simulations and achieve improved results. Furthermore, we compare the proposed framework to existing particle filtering-based data assimilation frameworks and evaluate the performance of each.
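The divide-and-conquer idea can be sketched as a localized update in which each spatial block weights and resamples particles using only its own observations. The `local_update` function, the explicit block index lists, and the Gaussian local likelihood are hypothetical simplifications, not the framework's actual partitioning or boundary-sensor handling.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_update(particles, obs, obs_std, blocks):
    """One localized PF step: particles is (n, dim); each block (a list of
    state indices) is weighted and resampled independently, so a particle's
    fitness in one region cannot drag down its values elsewhere."""
    n, _ = particles.shape
    out = particles.copy()
    for cells in blocks:
        # local Gaussian likelihood from this block's observations only
        d = particles[:, cells] - obs[cells]
        w = np.exp(-0.5 * np.sum((d / obs_std) ** 2, axis=1))
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)  # block-local resampling
        out[:, cells] = particles[idx][:, cells]
    return out
```

Because each block is resampled separately, the effective state dimension seen by any one weighting step is the block size rather than the full domain.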

    A wildland fire model with data assimilation

    A wildfire model is formulated based on balance equations for energy and fuel, where the fuel loss due to combustion corresponds to the fuel reaction rate. The resulting coupled partial differential equations have coefficients that can be approximated from prior measurements of wildfires. An ensemble Kalman filter technique with regularization is then used to assimilate temperatures measured at selected points into running wildfire simulations. The assimilation technique is able to modify the simulations to track the measurements correctly even if the simulations were started with an erroneous ignition location that is quite far away from the correct one.
    Comment: 35 pages, 12 figures; minor revision January 2008. Original version available from http://www-math.cudenver.edu/ccm/report
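The assimilation step can be illustrated with a generic stochastic (perturbed-observations) ensemble Kalman filter analysis. This toy version assumes a linear observation operator and omits the regularization that the paper adds for the wildfire problem.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(X, y, H, obs_std):
    """Stochastic EnKF analysis: X is the (n_ens, dim) forecast ensemble,
    y the observation vector, H the linear observation operator."""
    n, _ = X.shape
    Xc = X - X.mean(axis=0)
    P = Xc.T @ Xc / (n - 1)                       # ensemble covariance
    R = (obs_std ** 2) * np.eye(len(y))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # perturbed observations preserve the correct analysis spread
    Y = y + rng.normal(0.0, obs_std, size=(n, len(y)))
    return X + (Y - X @ H.T) @ K.T
```

Observing only the first component of a two-dimensional state pulls the ensemble mean of that component toward the measurement, while the unobserved component is adjusted only through the ensemble covariance.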

    Assimilation of Perimeter Data and Coupling with Fuel Moisture in a Wildland Fire - Atmosphere DDDAS

    We present a methodology to change the state of the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE, based on Rothermel's formula and the level set method, and with a fuel moisture model. The fire perimeter in the model changes in response to data while the model is running. However, the atmosphere state takes time to develop in response to the forcing by the heat flux from the fire. Therefore, an artificial fire history is created from an earlier fire perimeter to the new perimeter and replayed with the proper heat fluxes to allow the atmosphere state to adjust. The method is an extension of an earlier method to start the coupled fire model from a developed fire perimeter rather than an ignition point. The level set method is also used to identify parameters of the simulation, such as the spread rate and the fuel moisture. The coupled model is available from openwfm.org, and it extends the WRF-Fire code in the WRF release.
    Comment: ICCS 2012, 10 pages; corrected some DOI typesetting in the references
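The level-set representation of a spreading fire perimeter can be sketched with a first-order Godunov upwind step of phi_t + R * |grad phi| = 0 on a uniform grid, the front being the zero level set of phi. The constant spread rate and point ignition below are toy assumptions, not Rothermel's formula or the SFIRE code.

```python
import numpy as np

def level_set_step(phi, rate, dx, dt):
    """Advance the level-set function one step; cells with phi < 0 are burnt."""
    # one-sided differences in each direction
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx
    dym = (phi - np.roll(phi, 1, axis=1)) / dx
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx
    # Godunov upwind gradient magnitude for an expanding front (rate > 0)
    grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                   + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
    return phi - dt * rate * grad

# ignition: signed distance to a small circular fire at the origin
n = 64
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.sqrt(X ** 2 + Y ** 2) - 0.1
for _ in range(50):
    phi = level_set_step(phi, rate=0.5, dx=2.0 / (n - 1), dt=0.02)
```

After 50 steps the burnt region (phi < 0) has grown from the small ignition disk to a front roughly 0.5 length units farther out, as expected for a constant spread rate of 0.5 over one time unit.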