
    On the merits of sparse surrogates for global sensitivity analysis of multi-scale nonlinear problems: application to turbulence and fire-spotting model in wildland fire simulators

    Many nonlinear phenomena whose numerical simulation is not straightforward depend on a set of parameters in a way that is not easy to predict beforehand. Wildland fires in the presence of strong winds fall into this category, in part due to the occurrence of fire-spotting. We present a global sensitivity analysis of a new sub-model for turbulence and fire-spotting included in a wildfire spread model based on a stochastic representation of the fireline. To limit the number of model evaluations, fast surrogate models based on generalized Polynomial Chaos (gPC) and Gaussian Processes are used to identify the key parameters affecting the topology and size of the burnt area. This study investigates the application of these surrogates to compute Sobol' sensitivity indices in an idealized test case. The performance of the surrogates for varying size and type of training set, as well as for varying parameterization and choice of algorithm, has been compared. In particular, different types of truncation and projection strategies are tested for the gPC surrogates. The best performance was achieved using a gPC strategy based on sparse least-angle regression (LAR) and a low-discrepancy Halton sequence. Still, the LAR-based gPC surrogate tends to filter out the information coming from parameters with a large length-scale, which is not the case for the cleaning-based gPC surrogate. The wind is known to drive fire propagation; the results show that it is, more generally, the leading factor governing the generation of secondary fires. Using a sparse surrogate is thus a promising strategy for analyzing new models and their dependency on input parameters in wildfire applications.

    This research is supported by the Basque Government through the BERC 2014–2017 and BERC 2018–2021 programs, by the Spanish Ministry of Economy and Competitiveness MINECO through BCAM Severo Ochoa accreditations SEV-2013-0323 and SEV-2017-0718 and through project MTM2016-76016-R “MIP”, and by the PhD grant “La Caixa 2014”. The authors acknowledge EDF R&D for their support on the OpenTURNS library. They also acknowledge Pamphile Roy and Matthias De Lozzo at CERFACS for helpful discussions on the batman and scikit-learn tools.
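The gPC route to Sobol' indices described above can be sketched in a few lines. The sketch below is purely illustrative (the quadratic toy model, degree-2 Legendre basis, training size, and plain least-squares projection are assumptions here; the paper's sparse LAR truncation is omitted), not the authors' implementation — it only shows why the surrogate makes the indices cheap: once the gPC coefficients are known, each term's variance contribution follows analytically.

```python
import numpy as np
from itertools import product

def halton(n, base):
    """Van der Corput radical-inverse sequence in the given base."""
    seq = []
    for idx in range(1, n + 1):
        f, r = 1.0, 0.0
        while idx > 0:
            f /= base
            r += f * (idx % base)
            idx //= base
        seq.append(r)
    return np.array(seq)

# low-discrepancy Halton training set mapped to [-1, 1]^2
n = 200
X = np.column_stack([2 * halton(n, 2) - 1, 2 * halton(n, 3) - 1])

def model(x):
    # toy forward model standing in for the fire-spread simulator
    return x[:, 0] + 0.5 * x[:, 1] ** 2

def legendre(k, x):
    # orthogonal basis for uniform inputs on [-1, 1], up to degree 2
    if k == 0:
        return np.ones_like(x)
    if k == 1:
        return x
    return 0.5 * (3 * x ** 2 - 1)

multi_indices = [mi for mi in product(range(3), repeat=2) if sum(mi) <= 2]
Phi = np.column_stack([legendre(i, X[:, 0]) * legendre(j, X[:, 1])
                       for i, j in multi_indices])

# plain least-squares projection (the paper's sparse LAR step is omitted)
coeffs, *_ = np.linalg.lstsq(Phi, model(X), rcond=None)

# Sobol' indices follow analytically from the gPC coefficients:
# each term contributes c^2 * prod_k 1/(2k+1) to the variance
term_var = np.array([c ** 2 * np.prod([1.0 / (2 * k + 1) for k in mi])
                     for c, mi in zip(coeffs, multi_indices)])
term_var[multi_indices.index((0, 0))] = 0.0   # the mean carries no variance
total_var = term_var.sum()
S1 = sum(v for v, mi in zip(term_var, multi_indices) if mi[0] > 0 and mi[1] == 0) / total_var
S2 = sum(v for v, mi in zip(term_var, multi_indices) if mi[1] > 0 and mi[0] == 0) / total_var
print(round(S1, 4), round(S2, 4))  # → 0.9375 0.0625
```

Because the toy model lies exactly in the degree-2 basis, least squares recovers the coefficients to machine precision and the first-order indices match the analytical values.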

    Ensemble-based data assimilation for operational flood forecasting – On the merits of state estimation for 1D hydrodynamic forecasting through the example of the “Adour Maritime” river

    This study presents the implementation and the merits of an Ensemble Kalman Filter (EnKF) algorithm with an inflation procedure on the 1D shallow water model MASCARET in the framework of operational flood forecasting on the “Adour Maritime” river (South West France). In situ water level observations are sequentially assimilated to correct both water level and discharge. The stochastic estimation of the background error statistics is achieved over an ensemble of MASCARET integrations with perturbed hydrological boundary conditions. It is shown that the geometric characteristics of the network, as well as the hydrological forcings and their temporal variability, have a significant impact on the shape of the univariate (water level) and multivariate (water level and discharge) background error covariance functions, and thus on the EnKF analysis. The performance of the EnKF algorithm is examined for observing system simulation experiments as well as for a set of eight real flood events (2009–2014). When perfect hydrological forcings are considered, the quality of the ensemble is deemed satisfactory as long as the forecast lead time remains under the transfer time of the network. Results demonstrate that the simulated hydraulic state variables can be improved over the entire network, even where no data are available, with a limited ensemble size and thus a computational cost compatible with operational constraints. The improvement in the water level Root-Mean-Square Error obtained with the EnKF reaches up to 88% at the analysis time and 40% at a 4-h forecast lead time compared to the standalone model.
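A stochastic EnKF analysis step with multiplicative inflation, of the kind described above, can be sketched generically. Everything below (the three-point toy state, ensemble size, gauge placement, and error statistics) is an illustrative assumption, not the MASCARET setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R, inflation=1.05):
    """One stochastic EnKF analysis with multiplicative inflation.
    X: (n_state, n_ens) ensemble, y: (n_obs,) observation,
    H: (n_obs, n_state) observation operator, R: obs-error covariance."""
    mean = X.mean(axis=1, keepdims=True)
    X = mean + inflation * (X - mean)          # inflate background spread
    A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
    n_ens = X.shape[1]
    Pf = A @ A.T / (n_ens - 1)                 # background error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                 # update with perturbed obs

# toy 3-point water-level state; one gauge observes the middle point
X = 1.0 + 0.3 * rng.standard_normal((3, 50))
H = np.array([[0.0, 1.0, 0.0]])
R = np.array([[0.01]])
Xa = enkf_analysis(X, np.array([1.5]), H, R)
print(X[1].mean(), "->", Xa[1].mean())  # analysis drawn toward the observation
```

Because the ensemble covariance couples the three points, the analysis corrects the unobserved states as well — the mechanism behind the paper's improvement at ungauged locations.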

    Orthorectification of helicopter-borne high resolution experimental burn observation from infra red handheld imagers

    To pursue the development and validation of coupled fire-atmosphere models, the wildland fire modeling community needs validation data sets with scenarios where fire-induced winds influence fire front behavior, and with high temporal and spatial resolution. Helicopter-borne infrared thermal cameras have the potential to monitor landscape-scale wildland fires at high resolution during experimental burns. To extract valuable information from those observations, a three-step image processing chain is required: (a) orthorectification to warp raw images onto a fixed coordinate-system grid, (b) segmentation to delineate the fire front location from the orthorectified images, and (c) computation of fire behavior metrics, such as the rate of spread, from the time-evolving fire front location. This work is dedicated to the first, orthorectification step and presents a series of algorithms designed to process handheld helicopter-borne thermal images collected during savannah experimental burns. The novelty of the approach lies in its recursive design, which does not require fixed ground control points, hence relaxing the constraint on field-of-view coverage and helping the acquisition of high-frequency observations. For four burns ranging from four to eight hectares, long-wave and mid-infrared images were collected at 1 and 3 Hz, respectively, and orthorectified at a high spatial resolution (<1 m) with an absolute accuracy estimated to be lower than 4 m. Subsequent computation of fire radiative power is discussed, with comparison to concurrent space-borne measurements.
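The core of any such orthorectification is a projective mapping from image pixels to ground coordinates. A minimal direct-linear-transform (DLT) sketch is given below with purely illustrative corner coordinates; the paper's recursive, GCP-free scheme is considerably more involved than this single-homography example:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H mapping src -> dst (>= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)          # right null vector = flattened H
    return H / H[2, 2]

def warp(H, pts):
    """Apply the homography to Nx2 points (homogeneous divide included)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# image corners (pixels) mapped to ground coordinates (m); values illustrative
src = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
dst = np.array([[10, 20], [90, 25], [85, 95], [5, 90]], dtype=float)
H = fit_homography(src, dst)
print(np.allclose(warp(H, src), dst, atol=1e-6))  # → True
```

With exactly four point pairs the homography is determined exactly; with more pairs the same SVD yields a least-squares fit, which is how frame-to-frame matches can replace fixed ground control points.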

    Polynomial surrogates for open-channel flows in random steady state

    Assessing epistemic uncertainties is considered a milestone for improving the numerical predictions of a dynamical system. In hydrodynamics, uncertainties in input parameters translate into uncertainties in simulated water levels through the shallow water equations. We investigate the ability of a generalized polynomial chaos (gPC) surrogate to evaluate the probabilistic features of the water level simulated by a 1-D hydraulic model (MASCARET) with the same accuracy as a classical Monte Carlo method but at a reduced computational cost. This study highlights that the water level probability density function and covariance matrix are better estimated with the polynomial surrogate model than with a Monte Carlo approach on the forward model, given a limited budget of MASCARET evaluations. The gPC-surrogate performance is first assessed on an idealized channel with uniform geometry and then applied to the more realistic case of the Garonne River (France), for which a global sensitivity analysis using sparse least-angle regression was performed to reduce the size of the stochastic problem. For both cases, Galerkin projection coupled to Gaussian quadrature, which involves a limited number of forward model evaluations, is compared with least-square regression for computing the coefficients when the surrogate is parameterized with respect to the local friction coefficient and the upstream discharge. The results show that a gPC surrogate of total polynomial degree 6, requiring 49 forward model evaluations, is sufficient to represent the water level distribution (in the sense of the ℓ2 norm), the probability density function and the water level covariance matrix for further use in the framework of data assimilation. In locations where the flow dynamics are more complex due to bathymetry, a higher polynomial degree is needed to retrieve the water level distribution. The use of a surrogate is thus a promising strategy for uncertainty quantification studies in open-channel flows and should be extended to unsteady flows. It also paves the way toward cost-effective ensemble-based data assimilation for flood forecasting and water resource management.
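The Galerkin projection with Gaussian quadrature mentioned above is easy to illustrate in one dimension. The exponential toy response standing in for a MASCARET water-level evaluation, and the degree-6 truncation, are assumptions for illustration only:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def model(x):
    # toy stand-in for one forward-model (water-level) evaluation
    return np.exp(0.3 * x)

degree = 6
nodes, weights = leggauss(degree + 1)   # 7 forward-model evaluations
weights = weights / 2.0                 # uniform density on [-1, 1]

# Galerkin projection: c_k = E[f(X) P_k(X)] / E[P_k(X)^2], X ~ U(-1, 1)
coeffs = []
for k in range(degree + 1):
    Pk = Legendre.basis(k)(nodes)
    norm = 1.0 / (2 * k + 1)            # E[P_k^2] under the uniform density
    coeffs.append(np.sum(weights * model(nodes) * Pk) / norm)

def surrogate(x):
    # cheap polynomial surrogate evaluated instead of the forward model
    return sum(c * Legendre.basis(k)(x) for k, c in enumerate(coeffs))

x = np.linspace(-1, 1, 5)
err = np.max(np.abs(surrogate(x) - model(x)))
print(err)  # tiny: the degree-6 expansion captures exp(0.3 x) almost exactly
```

In higher dimensions the same projection is applied on a tensorized quadrature grid, which is where the paper's 49-evaluation budget (7 nodes per input, two inputs) comes from.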

    An upper non-reflecting boundary condition for atmospheric compressible flow


    Metamodelling for micro-scale atmospheric pollutant dispersion large-eddy simulation

    In atmospheric dispersion problems, mapping pollutant concentrations within the first tens or hundreds of meters from the emission point source remains a modelling challenge. Computational fluid dynamics (CFD) approaches provide relevant insights into turbulent flow and pollutant concentration patterns in complex terrain such as urban and mountainous areas. At the forefront of CFD approaches, large-eddy simulations (LES) are a promising way to represent the time and space variability of turbulent atmospheric flows and to assess short-term public exposure. LES are subject to uncertainties due to the intrinsic variability of environmental factors, among which are the large-scale meteorological forcing and the emission source characteristics.
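A metamodel of this kind can be sketched as ordinary Gaussian-process (kriging) interpolation of a handful of expensive runs. Everything below (the 1-D toy response, the squared-exponential kernel, and its length-scale) is an illustrative assumption, not this study's configuration:

```python
import numpy as np

def kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between 1-D input arrays."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# a handful of "expensive" runs: toy concentration response vs. wind speed
X_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.exp(-X_train)
nugget = 1e-10                           # jitter for numerical stability

K = kernel(X_train, X_train) + nugget * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def predict(x_new):
    # GP posterior mean: interpolates the training runs at new inputs
    return kernel(x_new, X_train) @ alpha

print(predict(np.array([1.5])))  # close to exp(-1.5) ≈ 0.223
```

Once trained on a small LES ensemble, such a metamodel can be evaluated thousands of times to propagate the meteorological and source uncertainties at negligible cost.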