
    Resource Allocation Framework: Validation of Numerical Models of Complex Engineering Systems against Physical Experiments

    An increasing reliance on complex numerical simulations for high-consequence decision making is the motivation for experiment-based validation and uncertainty quantification to assess, and when needed, to improve the predictive capabilities of numerical models. Uncertainties and biases in model predictions can be reduced by taking two distinct actions: (i) increasing the number of experiments in the model calibration process, and/or (ii) improving the physics sophistication of the numerical model. Therefore, decision makers must select between further code development and experimentation while allocating the finite amount of available resources. This dissertation presents a novel framework to assist in this selection between experimentation and code development for model validation strictly from the perspective of predictive capability. The reduction and convergence of discrepancy bias between model prediction and observation, computed using a suitable convergence metric, play a key role in the conceptual formulation of the framework. The proposed framework is demonstrated using two non-trivial case study applications on the Preston-Tonks-Wallace (PTW) code, a continuum-based plasticity approach to modeling metals, and the ViscoPlastic Self-Consistent (VPSC) code, a mesoscopic plasticity approach to modeling crystalline materials. Results show that the developed resource allocation framework is effective and efficient in path selection (i.e., experimentation and/or code development), resulting in a reduction in both model uncertainties and discrepancy bias. The framework developed herein goes beyond path selection in the validation of numerical models by providing a methodology for the prioritization of optimal experimental settings and an algorithm for the prioritization of code development.
    If the path selection algorithm selects the experimental path, optimal selection of the settings at which these physical experiments are conducted, as well as the sequence of these experiments, is vital to maximize the gain in predictive capability of a model. Batch Sequential Design (BSD) is the methodology used in this work to select the optimal experimental settings. A new BSD selection criterion, Coverage Augmented Expected Improvement for Predictive Stability (C-EIPS), is developed to maximize the reduction in the model discrepancy bias and the coverage of the experiments within the domain of applicability. The functional form of the new criterion, C-EIPS, is demonstrated to outperform its predecessor, the EIPS criterion, and the distance-based criterion when discrepancy bias is high and coverage is low, while exhibiting comparable performance to the distance-based criterion in efficiently maximizing the predictive capability of the VPSC model as discrepancy decreases and coverage increases. If the path selection algorithm selects the code development path, the developed framework provides an algorithm for the prioritization of code development efforts. In coupled systems, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The potential improvement in the predictive accuracy of the simulation that can be gained by improving a constituent model depends not only on the relative importance but also on the inherent uncertainty and inaccuracy of that particular constituent. As such, a unique and quantitative code prioritization index (CPI) is proposed to prioritize code development efforts, and its application is demonstrated on a case study of a steel frame with semi-rigid connections.
    Findings show that the CPI is effective in identifying the most critical constituent of the coupled system, whose improvement leads to the highest overall enhancement of the predictive capability of the coupled model.
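The abstract does not give the CPI formula, only the ingredients it combines (a constituent's relative importance and its inherent uncertainty/inaccuracy). A minimal, hypothetical sketch of such an index, with invented constituent names and values, might look like:

```python
# Hypothetical CPI sketch: the actual formula is not stated in the abstract.
# Here each constituent's priority is the product of its relative importance
# (e.g. a sensitivity index) and its uncertainty, normalized to sum to one.

def code_prioritization_index(importance, uncertainty):
    """Rank constituents: a higher CPI means improve that constituent first."""
    raw = {k: importance[k] * uncertainty[k] for k in importance}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

# Invented example: three constituent models of a coupled frame simulation.
importance = {"connections": 0.5, "beams": 0.3, "columns": 0.2}
uncertainty = {"connections": 0.6, "beams": 0.2, "columns": 0.2}
cpi = code_prioritization_index(importance, uncertainty)
best = max(cpi, key=cpi.get)
print(best)  # → connections
```

In this toy setting the semi-rigid connection model dominates because it is both influential and poorly characterized, which mirrors the kind of ranking the CPI is meant to produce.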

    Development and Evaluation of Plant Growth Models: Methodology and Implementation in the PYGMALION platform

    Mathematical models of plant growth are generally characterized by a large number of interacting processes, a large number of model parameters, and costly experimental data acquisition. Such complexities make model parameterization difficult. Moreover, a large variety of models coexist in the literature, generally without benchmarking between the different approaches and with insufficient model evaluation. In this context, this paper aims at promoting good modelling practices in the plant growth modelling community and at increasing model design efficiency. It gives an overview of the different steps in modelling and specifies them for plant growth models, with particular regard to the characteristics mentioned above. Methods for performing these steps are implemented in a dedicated platform, PYGMALION (Plant Growth Model Analysis, Identification and Optimization); some of these methods are original. The C++ platform provides a framework in which stochastic or deterministic discrete dynamic models can be implemented, together with several efficient methods for sensitivity analysis, uncertainty analysis, parameter estimation, model selection and data assimilation that can be used for model design, evaluation or application. Finally, a new model for sugar beet growth, LNAS, is presented and serves to illustrate how the different methods in PYGMALION can be used for its parameterization, its evaluation and its application to yield prediction. The model is evaluated against real data and is shown to have interesting predictive capacity when coupled with data assimilation techniques.
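The LNAS equations are not given in the abstract, so the generic parameterization step can only be sketched with a stand-in model. Below, a discrete logistic growth model plays the role of the plant model, and its parameters (r, K) are estimated from noisy synthetic observations by minimizing squared error, the same workflow a platform like PYGMALION automates with proper optimizers:

```python
# Illustrative parameter-estimation sketch: a discrete logistic growth model
# stands in for LNAS (whose equations are not given here). Parameters are
# recovered from noisy data by a coarse grid search over the squared error.
import random

def simulate(r, K, b0=1.0, steps=30):
    b, traj = b0, []
    for _ in range(steps):
        b = b + r * b * (1.0 - b / K)  # discrete logistic growth step
        traj.append(b)
    return traj

random.seed(0)
true_r, true_K = 0.25, 100.0
data = [y + random.gauss(0, 0.5) for y in simulate(true_r, true_K)]

def sse(r, K):
    return sum((y - d) ** 2 for y, d in zip(simulate(r, K), data))

# Coarse grid search; a real platform would use a dedicated optimizer.
best = min(((r / 100, K) for r in range(5, 60) for K in range(80, 121)),
           key=lambda p: sse(*p))
print(best)  # estimates land close to (0.25, 100.0)
```

The same fitted model could then be re-run forward for yield prediction, and the residuals used for uncertainty analysis, mirroring the design/evaluation/application cycle the paper describes.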

    Assessment of chicken breast shelf life based on bench-top and portable near-infrared spectroscopy tools coupled with chemometrics

    Objectives: Near-infrared (NIR) spectroscopy is a rapid technique for assessing meat quality, but its ability to determine the shelf life of fresh chicken cuts is still debated, especially for portable devices. The aim of the study was to compare bench-top and portable NIR instruments, coupled with multivariate classifier models, in discriminating between four chicken breast refrigeration times (RT). Materials and Methods: Ninety-six samples were analysed with both NIR tools at 2, 6, 10 and 14 days post mortem. NIR data were then submitted to partial least squares discriminant analysis (PLS-DA) and canonical discriminant analysis (CDA), the latter preceded by a double feature selection based on Boruta and stepwise procedures. Results: PLS-DA yielded moderate separation of the RT classes, while shelf-life assessment was more accurate with stepwise-CDA. The bench-top tool performed better than the portable one, probably because it captured more informative spectral data, as shown by the variable importance in projection (VIP) and the restricted pool of stepwise-CDA predictive scores (SPS). Conclusions: NIR tools coupled with a multivariate model provide deep insight into the physicochemical processes occurring during storage. Spectroscopy proved reliably effective in recognising a 7-day shelf-life threshold for breasts, making it suitable for routine at-line screening of meat quality.
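The classification workflow (four refrigeration-time classes, spectra as features, a trained discriminant model) can be sketched without a chemometrics library by substituting a nearest-centroid classifier for PLS-DA. All spectra below are simulated with an invented storage-dependent feature; this illustrates the shape of the pipeline, not the study's actual model:

```python
# Nearest-centroid stand-in for the discrimination step (real PLS-DA needs
# a chemometrics library). Four refrigeration-time classes, synthetic
# "spectra", train/test split, and classification accuracy.
import random

random.seed(1)
DAYS = [2, 6, 10, 14]  # refrigeration times (days post mortem)

def spectrum(day):
    # Toy spectrum: baseline plus a storage-dependent slope, with noise.
    return [1.0 + 0.05 * day * (i / 100.0) + random.gauss(0, 0.02)
            for i in range(100)]

train = [(spectrum(d), d) for d in DAYS for _ in range(20)]
test = [(spectrum(d), d) for d in DAYS for _ in range(5)]

# Class centroids: channel-wise mean of the training spectra per class.
centroids = {d: [sum(col) / 20 for col in
                 zip(*[s for s, lab in train if lab == d])] for d in DAYS}

def classify(s):
    return min(DAYS, key=lambda d: sum((a - b) ** 2
                                       for a, b in zip(s, centroids[d])))

acc = sum(classify(s) == d for s, d in test) / len(test)
print(acc)
```

A real analysis would replace the centroid rule with PLS-DA or stepwise-CDA and validate on held-out samples, exactly as the paper does with its 96 specimens.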

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful, namely Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (the cellular pathways linking the neuronal and vascular systems), whose function is an active focus of research in neurobiology and in the imaging of coupled neuronal systems.
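The core idea of scoring competing network architectures by model evidence can be sketched far below the sophistication of DCM/BMR: here two hypothesized couplings are compared with BIC, a crude large-sample approximation to log evidence, on data simulated from the model in which one node drives another. The data, gains and node names are all invented:

```python
# Evidence-based structure comparison in miniature (not the full DCM/BMR
# machinery): two candidate architectures for the same data are scored
# with BIC, a rough stand-in for log model evidence.
import numpy as np

rng = np.random.default_rng(0)
T = 200
x2 = rng.normal(size=T)                          # activity of node 2
x1 = 0.8 * x2 + rng.normal(scale=0.3, size=T)    # true coupling: x2 -> x1

def bic(X, y):
    """Lower is better: fit term plus complexity penalty."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    return n * np.log(resid @ resid / n) + k * np.log(n)

X_null = np.ones((T, 1))                         # model A: no coupling
X_coup = np.column_stack([np.ones(T), x2])       # model B: x2 drives x1
print(bic(X_coup, x1) < bic(X_null, x1))         # coupled model wins
```

BMR performs an analogous comparison analytically, scoring many reduced architectures from a single fitted full model rather than refitting each candidate.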

    Feedback methods for inverse simulation of dynamic models for engineering systems applications

    Inverse simulation is a form of inverse modelling in which computer simulation methods are used to find the time histories of input variables that, for a given model, match a set of required output responses. Conventional inverse simulation methods for dynamic models are computationally intensive and can present difficulties for high-speed applications. This paper includes a review of established methods of inverse simulation, giving some emphasis to iterative techniques that were first developed for aeronautical applications. It goes on to discuss the application of a different approach based on feedback principles. This feedback method is suitable for a wide range of linear and nonlinear dynamic models and involves two distinct stages. The first stage involves the design of a feedback loop around the given simulation model; in the second stage, that closed-loop system is used for inversion of the model. Issues of robustness within the closed-loop systems used in inverse simulation are not significant, as there are no plant uncertainties or external disturbances. Thus the process is simpler than that required for the development of a control system of equivalent complexity. Engineering applications of this feedback approach to inverse simulation are described through case studies that put particular emphasis on nonlinear and multi-input multi-output models.
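The two-stage idea is easy to demonstrate on a toy model. Below, a high-gain proportional loop (gain and model invented for illustration) is closed around a first-order simulation x' = -a·x + b·u; driving the loop with the required output history makes the internally generated control signal u(t) approximate the inverse, i.e. the input that reproduces that output:

```python
# Minimal sketch of feedback-based inverse simulation on a first-order
# model x' = -a*x + b*u. Stage 1: close a proportional loop around the
# model. Stage 2: drive the loop with the desired output and record u(t).
import math

a, b, K, dt = 1.0, 2.0, 50.0, 1e-3   # model, loop gain, Euler step (invented)
x = 0.0
worst = 0.0
for n in range(int(5.0 / dt)):
    t = n * dt
    y_des = 1.0 - math.exp(-t)        # required output time history
    u = K * (y_des - x)               # feedback law generates the input
    x += dt * (-a * x + b * u)        # one Euler step of the model
    worst = max(worst, abs(y_des - x))
print(worst)  # small tracking error, so u(t) is a usable inverse input
```

Because the loop contains the exact model and no disturbances, a simple high gain suffices, which is precisely the robustness point made in the abstract; a real control system facing plant uncertainty could not be designed this casually.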

    A comparison of polynomial and wavelet expansions for the identification of chaotic coupled map lattices

    A comparison between polynomial and wavelet expansions for the identification of coupled map lattice (CML) models of deterministic spatio-temporal dynamical systems is presented in this paper. The pattern dynamics generated by smooth and non-smooth nonlinear maps in a well-known 2-dimensional CML structure are analysed. Using an orthogonal forward regression algorithm (OFR), polynomial and wavelet models are identified for the CMLs in chaotic regimes. Quantitative dynamical invariants such as the largest Lyapunov exponents and correlation dimensions are estimated and used to evaluate the performance of the identified models.
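The polynomial-identification idea can be sketched on a 1-D lattice (the paper uses a 2-D structure and OFR; plain least squares stands in here). Data are generated from a diffusively coupled logistic CML, whose update rule is exactly a polynomial in each site and its neighbours, so a polynomial model with site, neighbour, and squared terms recovers it:

```python
# Identification sketch: fit a polynomial CML model to data simulated from
# a 1-D diffusively coupled logistic lattice. Ordinary least squares is a
# stand-in for the orthogonal forward regression (OFR) used in the paper.
import numpy as np

rng = np.random.default_rng(2)
N, T, eps, mu = 50, 200, 0.3, 3.9
f = lambda x: mu * x * (1 - x)        # chaotic local logistic map

x = rng.uniform(0.1, 0.9, N)
states = [x.copy()]
for _ in range(T):                    # periodic-boundary CML update
    fx = f(x)
    x = (1 - eps) * fx + (eps / 2) * (np.roll(fx, 1) + np.roll(fx, -1))
    states.append(x.copy())
S = np.array(states)

# Regressors: each site, its two neighbours, and their squares.
prev = S[:-1]
left, right = np.roll(prev, 1, axis=1), np.roll(prev, -1, axis=1)
X = np.stack([prev.ravel(), prev.ravel() ** 2,
              left.ravel(), left.ravel() ** 2,
              right.ravel(), right.ravel() ** 2], axis=1)
y = S[1:].ravel()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_err = np.max(np.abs(X @ coef - y))
print(pred_err)  # near machine precision: the polynomial model is exact
```

OFR improves on this by selecting regressors one at a time in order of explained variance, which matters when the candidate term set is large and the true structure is unknown; here the candidate set already matches the generating map.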