
    A finite element based formulation for sensitivity studies of piezoelectric systems

    Sensitivity analysis is a branch of numerical analysis that aims to quantify the effects that variability in the parameters of a numerical model has on the model output. A finite element based sensitivity analysis formulation for piezoelectric media is developed here and implemented to simulate the operational and sensitivity characteristics of a piezoelectric-based distributed mode actuator (DMA). The work acts as a starting point for robustness analysis of the DMA technology.
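    A minimal sketch of the direct differentiation approach such a formulation typically rests on, assuming a linear discretized system K(p) u = f(p); the matrices, the scalar parameter p, and the function names below are illustrative, not the paper's piezoelectric model.

```python
import numpy as np

# Direct-differentiation sensitivity for a linear FE-type system K(p) u = f(p):
# differentiating gives K du/dp = df/dp - (dK/dp) u, so the sensitivity solve
# reuses the same operator as the primal solve. All matrices are illustrative.

def solve_with_sensitivity(K, f, dK_dp, df_dp):
    """Return the solution u and its sensitivity du/dp w.r.t. one scalar parameter p."""
    u = np.linalg.solve(K, f)            # primal solve
    rhs = df_dp - dK_dp @ u              # pseudo-load for the sensitivity
    du_dp = np.linalg.solve(K, rhs)      # sensitivity solve (same stiffness matrix)
    return u, du_dp

# Toy 2-DOF example with a stiffness that depends linearly on a parameter p.
p = 2.0
K = np.array([[2.0 + p, -1.0], [-1.0, 1.0 + p]])
dK_dp = np.eye(2)                         # dK/dp for this toy model
f = np.array([1.0, 0.0])
df_dp = np.zeros(2)

u, du_dp = solve_with_sensitivity(K, f, dK_dp, df_dp)
print("u =", u, "du/dp =", du_dp)
```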

    Probabilistic simulation for the certification of railway vehicles

    The present dynamic certification process, which is based on experiments, has essentially been built on experience. The introduction of simulation techniques into this process would be of great interest. However, an accurate simulation of complex, nonlinear systems is a difficult task, in particular when rare events (for example, unstable behaviour) are considered. After analysing the system and the currently used procedure, this paper proposes a method to achieve, in some particular cases, a simulation-based certification. It focuses on the need for precise and representative excitations (running conditions) and on their variable nature. A probabilistic approach is therefore proposed and illustrated with an example. First, the paper gives a short description of the vehicle/track system and of the experimental procedure. The proposed simulation process is then described, and the requirement to analyse a set of running conditions at least as large as the one tested experimentally is explained. In the third section, a sensitivity analysis to determine the most influential parameters of the system is reported. Finally, the proposed method is summarized and an application is presented.

    A relative entropy rate method for path space sensitivity analysis of stationary complex stochastic dynamics

    We propose a new sensitivity analysis methodology for complex stochastic dynamics based on the Relative Entropy Rate. The method becomes computationally feasible in the stationary regime of the process and involves the calculation of suitable path-space observables for the Relative Entropy Rate and the corresponding Fisher Information Matrix. The stationary regime is crucial for stochastic dynamics and here allows us to address the sensitivity analysis of complex systems, including processes with complex landscapes that exhibit metastability, non-reversible systems from a statistical mechanics perspective, and high-dimensional, spatially distributed models. All these systems typically exhibit non-Gaussian stationary probability distributions, and in the high-dimensional case histograms cannot be constructed directly. Our proposed methods bypass these challenges by relying on the direct Monte Carlo simulation of rigorously derived path-space observables for the Relative Entropy Rate and Fisher Information, rather than on the stationary probability distribution itself. We demonstrate the capabilities of the proposed methodology by focusing on two classes of problems: (a) Langevin particle systems with either reversible (gradient) or non-reversible (non-gradient) forcing, highlighting the ability of the method to carry out sensitivity analysis in non-equilibrium systems; and (b) spatially extended kinetic Monte Carlo models, showing that the method can handle high-dimensional problems.
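    A minimal sketch of the path-space idea for an overdamped Langevin process dX = b_theta(X) dt + sigma dW, where the Fisher Information of the path distribution reduces to a stationary average of (db/dtheta)^2 / sigma^2 estimated along a single long trajectory; the drift, parameter values, and step counts are illustrative assumptions, not the paper's models.

```python
import numpy as np

# Path-space Fisher Information for overdamped Langevin dynamics
#   dX_t = b_theta(X_t) dt + sigma dW_t,
# estimated as a long-time (stationary) average of (d b / d theta)^2 / sigma^2
# along one Euler-Maruyama trajectory. Drift and parameters are illustrative.

rng = np.random.default_rng(0)
theta, sigma, dt = 1.0, 1.0, 1e-3
n_steps, burn_in = 500_000, 50_000

def drift(x, theta):
    return -theta * x - x ** 3        # toy gradient (reversible) drift

def ddrift_dtheta(x):
    return -x                         # derivative of the drift w.r.t. theta

x = 0.0
fim_acc, count = 0.0, 0
for k in range(n_steps):
    x += drift(x, theta) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if k >= burn_in:                  # accumulate only in the (approximately) stationary regime
        fim_acc += ddrift_dtheta(x) ** 2 / sigma ** 2
        count += 1

print("estimated path-space Fisher Information:", fim_acc / count)
```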

    Sensitivity analysis methods for uncertainty budgeting in system design

    Quantification and management of uncertainty are critical in the design of engineering systems, especially in the early stages of conceptual design. This paper presents an approach to defining budgets on the acceptable levels of uncertainty in design quantities of interest, such as the allowable risk of not meeting a critical design constraint and the allowable deviation in a system performance metric. A sensitivity-based method analyzes the effects of design decisions on satisfying those budgets, and a multi-objective optimization formulation permits the designer to explore the tradespace of uncertainty reduction activities while also accounting for a cost budget. For models that are computationally costly to evaluate, a surrogate modeling approach based on high dimensional model representation (HDMR) achieves efficient computation of the sensitivities. An example problem in aircraft conceptual design illustrates the approach.
    Funding: United States. National Aeronautics and Space Administration, Leading Edge Aeronautics Research Program (Grant NNX14AC73A); United States. Department of Energy, Applied Mathematics Program (Award DE-FG02-08ER2585); United States. Department of Energy, Applied Mathematics Program (Award DE-SC0009297).
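    A minimal sketch of a first-order cut-HDMR surrogate used to apportion output variance among inputs, in the spirit of the sensitivity computations described above; the stand-in model, anchor point, and sample sizes are illustrative assumptions, not the paper's aircraft design problem.

```python
import numpy as np

# First-order cut-HDMR surrogate:  f(x) ≈ f0 + sum_i [ f(x_i, c_{-i}) - f0 ],
# built from one-dimensional sweeps through an anchor (cut) point c.
# The per-input variance contributions of the cheap surrogate can then be
# compared against an uncertainty budget. Everything below is illustrative.

def f(x):                                    # stand-in for an expensive model
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[0] * x[2]

dim, n_grid, n_mc = 3, 21, 10_000
c = np.full(dim, 0.5)                        # anchor point in [0, 1]^dim
f0 = f(c)

# One-dimensional component functions tabulated on a grid.
grid = np.linspace(0.0, 1.0, n_grid)
components = []
for i in range(dim):
    vals = []
    for g in grid:
        xi = c.copy()
        xi[i] = g
        vals.append(f(xi) - f0)
    components.append(np.array(vals))

def surrogate(x):
    return f0 + sum(np.interp(x[i], grid, components[i]) for i in range(dim))

# Monte Carlo on the surrogate: share of output variance attributable to each input.
rng = np.random.default_rng(1)
samples = rng.uniform(size=(n_mc, dim))
total_var = np.var([surrogate(s) for s in samples])
for i in range(dim):
    var_i = np.var(np.interp(samples[:, i], grid, components[i]))
    print(f"input {i}: share of surrogate output variance ~ {var_i / total_var:.2f}")
```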

    Derivative based global sensitivity measures

    The method of derivative based global sensitivity measures (DGSM) has recently become popular among practitioners. It has a strong link with the Morris screening method and Sobol' sensitivity indices and has several advantages over them. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is generally much lower than that for estimation of Sobol' sensitivity indices. This paper presents a survey of recent advances in DGSM concerning lower and upper bounds on the values of the Sobol' total sensitivity indices S_i^tot. Using these bounds it is possible in most cases to get a good practical estimation of the values of S_i^tot. Several examples are used to illustrate an application of DGSM.
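    A minimal sketch of a Monte Carlo DGSM estimate, assuming independent inputs uniform on [0,1] so that the known bound S_i^tot <= nu_i / (pi^2 D) applies, with derivatives taken by central finite differences; the test function is illustrative.

```python
import numpy as np

# Derivative-based global sensitivity measures for inputs uniform on [0, 1]:
#   nu_i = E[(df/dx_i)^2],   with the bound   S_i^tot <= nu_i / (pi^2 * D),
# where D is the total output variance. Derivatives are approximated by
# central finite differences; the test function is illustrative.

def f(x):
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4

rng = np.random.default_rng(0)
n, dim, h = 50_000, 3, 1e-5
x = rng.uniform(size=(n, dim))
D = f(x).var()                                  # total output variance

for i in range(dim):
    xp, xm = x.copy(), x.copy()
    xp[:, i] += h
    xm[:, i] -= h
    grad_i = (f(xp) - f(xm)) / (2.0 * h)        # central finite difference
    nu_i = np.mean(grad_i ** 2)                 # DGSM for input i
    print(f"x{i}: nu = {nu_i:.3f}, upper bound on S_tot = {nu_i / (np.pi ** 2 * D):.3f}")
```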

    OpenTURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance of complex systems and to answer tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an Open Source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties, namely transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is open source software under the LGPL license, provided as a C++ library with a Python TUI, and it runs under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke protecting industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
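    A minimal sketch using the OpenTURNS Python interface in the spirit of the river/dyke example: uncertainty propagation by Monte Carlo and Sobol' index estimation. The symbolic water-level formula and the input distributions below are simplified assumptions rather than the exact case from the paper.

```python
import openturns as ot

# Simplified river/dyke example: propagate input uncertainty through a
# symbolic model of the water level and estimate Sobol' indices.
# The formula and distributions are simplified assumptions.

model = ot.SymbolicFunction(
    ["Q", "Ks", "Zv", "Zm"],
    ["(Q / (Ks * 300.0 * sqrt((Zm - Zv) / 5000.0)))^0.6 + Zv"],  # water level
)

Q = ot.TruncatedDistribution(ot.Gumbel(558.0, 1013.0), 0.0, ot.TruncatedDistribution.LOWER)
Ks = ot.TruncatedDistribution(ot.Normal(30.0, 7.5), 0.0, ot.TruncatedDistribution.LOWER)
Zv = ot.Uniform(49.0, 51.0)
Zm = ot.Uniform(54.0, 56.0)
distribution = ot.ComposedDistribution([Q, Ks, Zv, Zm])

# Uncertainty propagation by plain Monte Carlo sampling.
output_sample = model(distribution.getSample(10000))
print("mean water level:", output_sample.computeMean())

# Variance-based sensitivity analysis (Saltelli estimator of the Sobol' indices).
size = 5000
design = ot.SobolIndicesExperiment(distribution, size).generate()
algo = ot.SaltelliSensitivityAlgorithm(design, model(design), size)
print("first-order indices:", algo.getFirstOrderIndices())
print("total-order indices:", algo.getTotalOrderIndices())
```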

    Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology

    In this paper, we discuss the problem of calibration and uncertainty estimation for hydrologic systems from two points of view: a bottom-up, reductionist approach; and a top-down, data-based mechanistic (DBM) approach. The two approaches are applied to the modelling of the River Hodder catchment in North-West England. The bottom-up approach is developed using TOPMODEL, whose structure is evaluated by global sensitivity analysis (GSA) in order to identify the most sensitive and important parameters, and the subsequent exercises in calibration and validation are carried out in the light of this sensitivity analysis. GSA helps to improve the calibration of hydrological models, making their properties more transparent and highlighting mis-specification problems. The DBM model provides a quick and efficient analysis of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. TOPMODEL calibration takes more time and explains the flow data slightly less well than the DBM model. The main differences in the modelling results lie in the nature of the models and the flow decomposition they suggest. The "quick" (63%) and "slow" (37%) components of the decomposed flow identified in the DBM model show a clear partitioning of the flow, with the quick component apparently accounting for the effects of surface and near-surface processes, and the slow component arising from the displacement of groundwater into the river channel (base flow). On the other hand, the two output flow components in TOPMODEL have a different physical interpretation, with a single flow component (95%) accounting for both slow (subsurface) and fast (surface) dynamics, while the other, very small component (5%) is interpreted as instantaneous surface runoff generated by rainfall falling on areas of saturated soil. The results of the exercise show that the two modelling methodologies have good synergy, combining well to produce a complete modelling approach that has the kinds of checks and balances required in practical data-based modelling of rainfall-flow systems. Such a combined approach also produces models that are suitable for different kinds of application. As such, the DBM model provides an immediate vehicle for flow and flood forecasting, while TOPMODEL, suitably calibrated (and perhaps modified) in the light of the DBM and GSA results, immediately provides a simulation model with a variety of potential applications in areas such as catchment management and planning.

    In silico system analysis of physiological traits determining grain yield and protein concentration for wheat as influenced by climate and crop management

    Genetic improvement of grain yield (GY) and grain protein concentration (GPC) is impeded by large genotype×environment×management interactions and by compensatory effects between traits. Here, global uncertainty and sensitivity analyses of the process-based wheat model SiriusQuality2 were conducted with the aim of identifying candidate traits for increasing GY and GPC. Three contrasting European sites were selected and simulations were performed using long-term weather data and two nitrogen (N) treatments in order to quantify the effect of parameter uncertainty on GY and GPC under variable environments. The overall influence of all 75 plant parameters of SiriusQuality2 was first analysed using the Morris method. Forty-one influential parameters were identified and their individual (first-order) and total effects on the model outputs were investigated using the extended Fourier amplitude sensitivity test. The overall effect of the parameters was dominated by their interactions with other parameters. Under high N supply, a few parameters with a strong influence on GY were identified (e.g. radiation use efficiency, potential duration of grain filling, and phyllochron), whereas under low N more than 10 parameters showed similar effects on GY and GPC. All parameters had opposite effects on GY and GPC, but leaf and stem N storage capacity appeared to be good candidate traits for changing the intercept of the negative relationship between GY and GPC. This study provides a system analysis of traits determining GY and GPC under variable environments and delivers valuable information for prioritizing model development and experimental work.
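    A minimal sketch of Morris-style elementary-effects screening in plain NumPy; the toy four-input model stands in for SiriusQuality2 (whose 75 parameters are not reproduced here), and the trajectory construction is a simplified variant of the Morris design with inputs scaled to [0,1].

```python
import numpy as np

# Morris elementary-effects screening: for each of r random trajectories,
# perturb one input at a time by a step delta and record the elementary
# effect. mu* (mean of |EE|) ranks overall influence; sigma flags
# nonlinearity and interactions. The toy model stands in for the crop model.

def model(x):                                    # illustrative stand-in, inputs in [0, 1]
    return x[0] ** 2 + 2.0 * x[1] + x[1] * x[2] + 0.1 * x[3]

rng = np.random.default_rng(2)
dim, r, delta = 4, 50, 0.2
effects = [[] for _ in range(dim)]

for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=dim)   # base point of the trajectory
    y = model(x)
    for i in rng.permutation(dim):                # one-at-a-time moves in random order
        x[i] += delta
        y_new = model(x)
        effects[i].append((y_new - y) / delta)    # elementary effect of input i
        y = y_new

for i in range(dim):
    ee = np.abs(effects[i])
    print(f"x{i}: mu* = {np.mean(ee):.3f}, sigma = {np.std(effects[i]):.3f}")
```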