67 research outputs found

    Efficient Estimation of Sensitivity Indices

    In this paper we address the problem of efficient estimation of Sobol sensitivity indices. First, we focus on general functional integrals of conditional moments of the form E(ψ(E(φ(Y)|X))), where (X,Y) is a random vector with joint density f, and ψ and φ are sufficiently differentiable functions. In particular, we show that asymptotically efficient estimation of this functional boils down to the estimation of crossed quadratic functionals. An efficient estimate of first-order sensitivity indices is then derived as a special case. We investigate its properties on several analytical functions and illustrate its interest on a reservoir engineering case.
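    The paper's efficient estimator is built from crossed quadratic functionals; as a point of comparison, the first-order index it targets can also be estimated with the standard pick-freeze Monte Carlo scheme. The sketch below applies that scheme to a toy linear model (not from the paper) whose first-order index for x1 is analytically 1/5:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def model(x1, x2):
    # Toy model (not from the paper): Var(Y) = 5, first-order index of x1 = 1/5.
    return x1 + 2.0 * x2

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = model(x1, x2)
y_pf = model(x1, rng.standard_normal(n))   # "freeze" x1, resample x2

# Pick-freeze estimator: S1 = Cov(Y, Y') / Var(Y), where Y' shares only x1 with Y.
s1 = (np.mean(y * y_pf) - np.mean(y) * np.mean(y_pf)) / np.var(y)
print(round(s1, 2))
```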

    Local Polynomial Estimation for Sensitivity Analysis on Models With Correlated Inputs

    Sensitivity indices for models whose inputs are not independent are estimated by local polynomial techniques. Two original estimators based on local polynomial smoothers are proposed. Both have good theoretical properties, which are established and also illustrated through analytical examples. The estimators are then used to carry out a sensitivity analysis on a real case of a kinetic model with correlated parameters.
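    The idea of plugging a local polynomial smoother into the closed index Var(E[Y|X1])/Var(Y) can be sketched as follows. This is a generic local-linear version on a toy Gaussian model with input correlation 0.5 (where the index of X1 is analytically 0.75), not one of the paper's two estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 4000, 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y = x[:, 0] + x[:, 1]                    # toy model with correlated inputs

def local_linear(x0, xs, ys, h):
    """Local polynomial (degree 1) estimate of E[Y | X = x0]."""
    sw = np.exp(-0.25 * ((xs - x0) / h) ** 2)   # sqrt of Gaussian kernel weights
    X = np.column_stack([np.ones_like(xs), xs - x0])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], ys * sw, rcond=None)
    return beta[0]                              # intercept = fitted value at x0

h = 0.3
m = np.array([local_linear(x0, x[:, 0], y, h) for x0 in x[:1000, 0]])
s1 = np.var(m) / np.var(y)               # first-order index Var(E[Y|X1]) / Var(Y)
print(round(s1, 2))
```

With correlated inputs this "closed" index mixes the effect of X1 with the part of X2 it carries, which is exactly why dedicated estimators and interpretations are needed in the dependent case.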

    A derivative free optimization method for reservoir characterization inverse problem

    Reservoir characterization aims at building reservoir models consistent with available data for better forecasting of the production of a field. The observed data (pressure, oil/water/gas rates at the wells, and 4D seismic data) are compared with simulated data to determine petrophysical properties of the reservoir. The underlying optimization problem requires dedicated techniques: derivatives are often not available, the associated forward problems are CPU time consuming, and some constraints may be introduced to handle a priori information. In this paper, we propose a derivative-free optimization method based on a trust-region approach coupled with local quadratic interpolating models of the cost function and of the nonlinear constraints. Results obtained with this method on a synthetic reservoir application with the joint inversion of production data and 4D seismic data are presented. Its performance is compared with a classical sequential quadratic programming method in terms of the number of simulations of the forward problem.

    Constrained nonlinear optimization for extreme scenarii evaluation in reservoir characterization.

    The goal of reservoir characterization is the estimation of unknown reservoir parameters (the history-matching problem) by integrating available data, in order to take decisions on the production scheme and to predict the future oil production of the field (the forecast problem). The reservoir parameters can be classified in two classes:
    • those related to the geological modeling (spatial distribution of porosity, permeability, faults);
    • those related to the fluid-flow modeling (relative permeability curves, productivity indices of the wells).
    These parameters cannot be directly determined by measurements (or only locally, using well logs), which is why this parameter estimation problem is formulated as an inverse problem with forward simulators that compute synthetic measurable data from those parameters. Observed data are well data acquired at production/injection wells (bottom-hole pressure, gas-oil ratio, oil rate) at different calendar times during the production of the field. The main contribution of this work is the integration of nonlinear optimization methodology to predict the oil production of a field and to give a confidence interval on this prediction. We believe that applying nonlinear optimization methods increases accuracy and thus gives more reliable production forecasts than approaches relying on simplified models of the forward operators (linear approximations or response surfaces). The first and second sections of this paper are dedicated to the history-matching problem and to the forecast problem, respectively. In the third section, we describe the optimization methods used to solve both problems. Finally, in the last section the methodology is applied to a 3D synthetic reservoir application (the PUNQ test case).
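    On a toy linear forward model (all names and values hypothetical, far simpler than a reservoir simulator), the forecast-interval idea can be sketched as a pair of constrained optimizations: minimize and maximize the forecast quantity over all parameter sets whose history-match error stays below a tolerance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical linear forward operator: 2 parameters -> 5 well measurements.
A = rng.standard_normal((5, 2))
theta_true = np.array([1.0, -0.5])
d_obs = A @ theta_true

def mismatch(theta):                     # history-matching objective
    r = A @ theta - d_obs
    return r @ r

forecast = np.array([2.0, 1.0])          # hypothetical linear forecast functional
tol = 0.1                                # acceptable mismatch level
cons = {"type": "ineq", "fun": lambda t: tol - mismatch(t)}

bounds = []
for sign in (1.0, -1.0):                 # minimize, then maximize the forecast
    res = minimize(lambda t: sign * (forecast @ t), theta_true,
                   method="SLSQP", constraints=[cons])
    bounds.append(sign * res.fun)
lo, hi = min(bounds), max(bounds)
print(f"forecast interval: [{lo:.2f}, {hi:.2f}]")
```

With a real simulator, each `mismatch` evaluation is one expensive forward run, which motivates the dedicated optimization methods described in the paper.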

    SIRUS: Making Random Forests Interpretable

    State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as "black boxes" because of the high number and complexity of operations involved in their prediction mechanism. This lack of interpretability is a strong limitation for applications involving critical decisions, typically the analysis of production processes in the manufacturing industry. In such critical contexts, models have to be interpretable, i.e., simple, stable, and predictive. To address this issue, we design SIRUS (Stable and Interpretable RUle Set), a new classification algorithm based on random forests, which takes the form of a short list of rules. While simple models are usually unstable with respect to data perturbation, SIRUS achieves a remarkable stability improvement over cutting-edge methods. Furthermore, SIRUS inherits a predictive accuracy close to random forests, combined with the simplicity of decision trees. These properties are assessed from both a theoretical and an empirical point of view, through extensive numerical experiments based on our R/C++ software implementation, sirus, available from CRAN.
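    The stabilization principle SIRUS builds on — restricting splits to a few empirical quantiles so that the same rules reappear across many trees of the forest — can be illustrated with a minimal scikit-learn sketch. This is only an illustration of the idea, not the actual algorithm (which extracts full paths and post-processes redundant rules):

```python
from collections import Counter

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Quantizing the inputs to a few empirical quantiles per feature mimics the
# SIRUS restriction that makes identical splits reappear across trees.
X, y = load_breast_cancer(return_X_y=True)
qs = np.quantile(X, np.linspace(0.1, 0.9, 9), axis=0)
Xq = np.stack([np.digitize(X[:, j], qs[:, j]) for j in range(X.shape[1])], axis=1)

forest = RandomForestClassifier(n_estimators=300, max_depth=2,
                                random_state=0).fit(Xq, y)

# Count how often each (feature, threshold) appears as a root split; frequent
# splits yield candidate rules of the form "if x_j <= t then ... else ...".
roots = Counter((t.tree_.feature[0], t.tree_.threshold[0])
                for t in forest.estimators_)
rules = roots.most_common(5)
for (feat, thr), count in rules:
    print(f"if feature_{feat} <= {thr:.1f}   (root split in {count}/300 trees)")
```

Keeping only the splits whose occurrence frequency exceeds a threshold gives a short, stable rule list, which is the spirit of the method.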

    A Derivative Free Optimization method for reservoir characterization inverse problem

    The reservoir characterization inverse problem aims at building reservoir models consistent with available production and seismic data for better forecasting of the production of a field. The observed data (pressures, oil/water/gas rates at the wells, and 4D seismic data) are compared with simulated data to determine unknown petrophysical properties of the reservoir. The underlying optimization problem is usually formulated as the minimization of a least-squares objective function composed of two terms: the production-data and the seismic-data mismatch. In practice, this problem is often solved by nonlinear optimization methods, such as sequential quadratic programming (SQP) methods with derivatives approximated by finite differences. In applications involving 4D seismic data, the use of the classical Gauss-Newton algorithm is often infeasible because the computation of the Jacobian matrix is CPU time consuming and its storage is impossible for large datasets like seismic-related ones. Consequently, this optimization problem requires dedicated techniques: derivatives are not available, the associated forward problems are CPU time consuming, and some constraints may be introduced to handle a priori information. We propose a derivative-free optimization method under constraints, based on a trust-region approach coupled with local quadratic interpolating models of the cost function and of the nonlinear constraints. Results obtained with this method on a synthetic reservoir application with the joint inversion of production data and 4D seismic data are presented. Its performance is compared with a classical SQP method (a quasi-Newton approach based on the classical BFGS approximation of the Hessian of the objective function, with derivatives approximated by finite differences) in terms of the number of simulations of the forward problem.
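    To make the setting concrete, here is a sketch of derivative-free constrained inversion on a toy joint objective, using SciPy's COBYLA (a linear-interpolation trust-region method, simpler than the quadratic-interpolation approach proposed in the paper); the forward operators and values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical forward operators: 3 petrophysical parameters -> data.
G_prod = rng.standard_normal((6, 3))     # production data
G_seis = rng.standard_normal((10, 3))    # 4D seismic attribute
theta_true = np.array([0.8, 1.2, 0.5])
d_prod, d_seis = G_prod @ theta_true, G_seis @ theta_true

n_calls = 0
def cost(theta):
    """Joint least-squares mismatch, treated as a derivative-free black box."""
    global n_calls
    n_calls += 1
    return (np.sum((G_prod @ theta - d_prod) ** 2)
            + 0.1 * np.sum((G_seis @ theta - d_seis) ** 2))

# A priori information expressed as inequality constraints (positivity here).
cons = [{"type": "ineq", "fun": lambda t, j=j: t[j]} for j in range(3)]

res = minimize(cost, x0=np.ones(3), method="COBYLA", constraints=cons)
print(n_calls, "forward simulations, final mismatch", round(float(res.fun), 6))
```

Counting `n_calls` mirrors the paper's comparison criterion: each call stands for one CPU-expensive forward simulation.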

    Gaussian process regression with Sliced Wasserstein Weisfeiler-Lehman graph kernels

    Supervised learning has recently garnered significant attention in the field of computational physics due to its ability to effectively extract complex patterns for tasks like solving partial differential equations, or predicting material properties. Traditionally, such datasets consist of inputs given as meshes with a large number of nodes representing the problem geometry (seen as graphs), and corresponding outputs obtained with a numerical solver. This means the supervised learning model must be able to handle large and sparse graphs with continuous node attributes. In this work, we focus on Gaussian process regression, for which we introduce the Sliced Wasserstein Weisfeiler-Lehman (SWWL) graph kernel. In contrast to existing graph kernels, the proposed SWWL kernel enjoys positive definiteness and a drastic complexity reduction, which makes it possible to process datasets that were previously impossible to handle. The new kernel is first validated on graph classification for molecular datasets, where the input graphs have a few tens of nodes. The efficiency of the SWWL kernel is then illustrated on graph regression in computational fluid dynamics and solid mechanics, where the input graphs are made up of tens of thousands of nodes
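    A compact way to picture the kernel's ingredients (with all parameter choices hypothetical): continuous Weisfeiler-Lehman iterations turn each graph into a cloud of node embeddings, a sliced Wasserstein distance compares two clouds through random 1D projections and quantile matching, and a Gaussian kernel on that distance feeds the Gaussian process. A minimal NumPy sketch of this pipeline, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def wl_embed(adj, feats, iters=2):
    """Continuous Weisfeiler-Lehman: repeatedly average neighbour features
    and concatenate the iterates, giving one embedding per node."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h, out = feats, [feats]
    for _ in range(iters):
        h = 0.5 * (h + adj @ h / deg)
        out.append(h)
    return np.hstack(out)

def sliced_wasserstein(u, v, n_proj=50, n_q=64):
    """Monte Carlo sliced 2-Wasserstein distance between two node-embedding
    clouds, matched through quantiles so node counts may differ."""
    dirs = rng.standard_normal((n_proj, u.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    q = np.linspace(0.0, 1.0, n_q)
    d2 = np.mean([np.mean((np.quantile(u @ w, q) - np.quantile(v @ w, q)) ** 2)
                  for w in dirs])
    return np.sqrt(d2)

def swwl_kernel(graphs, gamma=1.0):
    emb = [wl_embed(a, f) for a, f in graphs]
    n = len(emb)
    K = np.eye(n)                     # distance to itself is zero -> kernel 1
    for i in range(n):
        for j in range(i + 1, n):
            d = sliced_wasserstein(emb[i], emb[j])
            K[i, j] = K[j, i] = np.exp(-gamma * d ** 2)
    return K

def random_graph(n):                  # toy graph with 3D continuous attributes
    a = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
    return a + a.T, rng.standard_normal((n, 3))

K = swwl_kernel([random_graph(8), random_graph(12), random_graph(10)])
print(K.shape)
```

The quantile step is what yields the complexity reduction: each graph is summarized by sorted 1D projections instead of requiring a full optimal-transport solve per pair.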

    Efficient estimation of conditional covariance matrices for dimension reduction

    We consider the problem of estimating a conditional covariance matrix in an inverse regression setting. We show that this estimation can be achieved by estimating a quadratic functional, extending the results of da Veiga (2008). We prove that this method provides a new efficient estimator, whose asymptotic properties are studied.
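    A classical use of such a conditional covariance matrix is sliced inverse regression, where the leading eigenvectors of Cov(E[X|Y]) span the dimension-reduction subspace. Below is a basic slicing estimate (not the efficient quadratic-functional estimator studied in the paper) on a hypothetical single-index model:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, H = 20_000, 3, 10

beta = np.array([1.0, 2.0, 0.0])
beta /= np.linalg.norm(beta)

X = rng.standard_normal((n, p))                       # standardized predictors
Y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(n)    # single-index model

# Slicing estimate of the conditional covariance Cov(E[X | Y]).
edges = np.quantile(Y, np.linspace(0.0, 1.0, H + 1))
idx = np.clip(np.searchsorted(edges, Y, side="right") - 1, 0, H - 1)
M = np.zeros((p, p))
for h in range(H):
    mask = idx == h
    m = X[mask].mean(axis=0)                          # slice mean of X
    M += mask.mean() * np.outer(m, m)

# The leading eigenvector spans the dimension-reduction direction.
w = np.linalg.eigh(M)[1][:, -1]
print(round(float(abs(w @ beta)), 2))
```

The accuracy of the recovered direction is driven by how well Cov(E[X|Y]) is estimated, which is exactly the quantity whose efficient estimation the paper addresses.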

    Simplified stress analysis of hybrid (bolted/bonded) joints

    The load transfer in hybrid (bolted/bonded) single-lap joints, denoted HBB, is complicated due to the association of two different transfer modes (discrete and continuous) through elements with different stiffnesses. The Finite Element (FE) method can be used to address the stress analysis of those joints. However, analyses based on FE models are computationally expensive, and it would be profitable to use simplified approaches enabling extensive parametric studies. Two of the authors of this paper participated in the development of a dedicated 1D-beam approach (Paroissien 2007). This paper presents an extension of this framework enabling (i) the analysis of HBB joints made of dissimilar laminated or monolithic adherends, and (ii) the introduction of nonlinear material behaviour for both the adhesive layer and the fasteners. The output data are the distributions of displacements and forces in the adherends and fasteners, as well as those of adhesive shear and peel stresses, allowing for a fast assessment of the material behaviour and strength prediction of HBB joints. The use of this model is illustrated by the identification of the failure mechanisms of HBB joints under quasi-static loadings, based on experimental and numerical tests on single-lap HBB joints. It is worth mentioning that the model also supports pure bonded and pure bolted configurations. It can be used during the presizing phase at the design office (possibly independently of commercial software) to quickly assess mechanical performance and to support decision making. Moreover, it was shown that a judicious choice of the adhesive material allows for a significant increase of the static and fatigue strength compared to the corresponding pure bolted or pure bonded configurations (Kelly 2006) (Paroissien 2006). The model can then be used to help formulate adhesive materials that optimize the mechanical performance of HBB joints according to work specifications.