
    Monotonicity-Preserving Bootstrapped Kriging Metamodels for Expensive Simulations

    Kriging (Gaussian process, spatial correlation) metamodels approximate the Input/Output (I/O) functions implied by the underlying simulation models; such metamodels serve sensitivity analysis and optimization, especially for computationally expensive simulations. In practice, simulation analysts often know that the I/O function is monotonic. To obtain a Kriging metamodel that preserves this known shape, this article uses bootstrapping (or resampling). Parametric bootstrapping assuming normality may be used in deterministic simulation, but this article focuses on stochastic simulation (including discrete-event simulation) using distribution-free bootstrapping. In stochastic simulation, the analysts should simulate each input combination several times to obtain a more reliable average output per input combination. Nevertheless, this average still shows sampling variation, so the Kriging metamodel need not interpolate the average outputs. Bootstrapping provides a simple method for computing a noninterpolating Kriging model. This method may use standard Kriging software, such as the free Matlab toolbox called DACE. The method is illustrated through the M/M/1 simulation model with either the estimated mean or the estimated 90% quantile as output; both outputs are monotonic functions of the traffic rate, and both have nonnormal distributions. The empirical results demonstrate that monotonicity-preserving bootstrapped Kriging may give a higher probability of covering the true simulation output, without lengthening the confidence interval.
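
    The article's own implementation uses the Matlab DACE toolbox; as a rough illustration of the idea only, here is a minimal Python sketch using scikit-learn instead. The M/M/1 generator (mm1_avg_sojourn), the design points, and the step that keeps only monotone bootstrap predictions are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Design: traffic rates rho for an M/M/1 queue with service rate mu = 1.
rhos = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
m = 50                                   # replications per input combination

def mm1_avg_sojourn(rho, n=1000):
    """One replication: average sojourn time of n customers (Lindley recursion)."""
    t, total = 0.0, 0.0
    for _ in range(n):
        a = rng.exponential(1.0 / rho)   # interarrival time (arrival rate rho)
        s = rng.exponential(1.0)         # service time (service rate 1)
        t = max(0.0, t - a) + s          # sojourn time of the next customer
        total += t
    return total / n

reps = np.array([[mm1_avg_sojourn(r) for _ in range(m)] for r in rhos])

X = rhos.reshape(-1, 1)
grid = np.linspace(0.3, 0.8, 101).reshape(-1, 1)
accepted = []
for _ in range(200):                     # B = 200 bootstrap samples
    # Distribution-free bootstrap: resample the m replications per rho.
    ybar = np.array([rng.choice(reps[i], size=m).mean() for i in range(len(rhos))])
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),  # noise term -> noninterpolating
                                  normalize_y=True).fit(X, ybar)
    pred = gp.predict(grid)
    if np.all(np.diff(pred) >= 0):       # keep only monotone metamodels
        accepted.append(pred)

# Pointwise 90% CI from the accepted (monotone) bootstrap predictions.
lo, hi = np.percentile(np.array(accepted), [5, 95], axis=0)
print(f"kept {len(accepted)}/200 monotone bootstrap metamodels")
```

    Because the bootstrapped averages differ across samples, each fitted metamodel smooths rather than interpolates, and rejecting non-monotone fits is what gives the shape-preserving confidence band.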

    Monotonicity-preserving bootstrapped kriging metamodels for expensive simulations

    Kriging metamodels (also called Gaussian process or spatial correlation models) approximate the Input/Output functions implied by the underlying simulation models. Such metamodels serve sensitivity analysis, especially for computationally expensive simulations. In practice, simulation analysts often know that this Input/Output function is monotonic. To obtain a Kriging metamodel that preserves this characteristic, this article uses distribution-free bootstrapping, assuming each input combination is simulated several times to obtain more reliable averaged outputs. Nevertheless, these averages still show sampling variation, so the Kriging metamodel does not need to be an exact interpolator; bootstrapping gives a noninterpolating Kriging metamodel. Bootstrapping may use standard Kriging software. The method is illustrated through the popular M/M/1 model with either the mean or the 90% quantile as output; these outputs are monotonic functions of the traffic rate. The empirical results demonstrate that monotonicity-preserving bootstrapped Kriging gives a higher probability of covering the true outputs, without lengthening the confidence interval.

    Simulation-Optimization via Kriging and Bootstrapping: A Survey (Revision of CentER DP 2011-064)

    This article surveys optimization of simulated systems. The simulation may be either deterministic or random. The survey reflects the author’s extensive experience with simulation-optimization through Kriging (or Gaussian process) metamodels. The analysis of these metamodels may use parametric bootstrapping for deterministic simulation or distribution-free bootstrapping (or resampling) for random simulation. The survey covers: (1) simulation-optimization through “efficient global optimization” (EGO) using “expected improvement” (EI); this EI uses the Kriging predictor variance, which can be estimated through parametric bootstrapping that accounts for the estimation of the Kriging parameters; (2) optimization with constraints for multiple random simulation outputs and deterministic inputs, through mathematical programming applied to Kriging metamodels validated through distribution-free bootstrapping; (3) Taguchian robust optimization for uncertain environments, using mathematical programming (applied to Kriging metamodels) and distribution-free bootstrapping to estimate the variability of the Kriging metamodels and of the resulting robust solution; (4) bootstrapping for improving convexity or preserving monotonicity of the Kriging metamodel.
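
    Item (1) centers on the classic expected-improvement criterion of EGO. For reference, a minimal sketch of the standard EI formula for minimization; the function name and vectorized form are mine, not from the survey:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Classic EI for minimization: E[max(f_min - Y, 0)] with Y ~ N(mu, sigma^2).

    mu, sigma: Kriging predictor mean and standard deviation at candidate points
    (the survey notes sigma itself can be estimated by parametric bootstrapping);
    f_min: best simulated output found so far. EGO simulates where EI is largest.
    """
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictor variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```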

    Robust optimal design of FOPID controller for five bar linkage robot in a cyber-physical system: a new simulation-optimization approach

    This paper aims to increase the reliability of optimal results by setting the simulation conditions as close as possible to the actual operation, creating a Cyber-Physical System (CPS) view for the installation of the Fractional-Order PID (FOPID) controller. For this purpose, we consider two different sources of variability in such a CPS control model. The first source refers to the changeability of the target of the control model (multiple setpoints) because of environmental noise factors; the second refers to an anomaly in sensors that arises in a feedback loop. We develop a new approach to optimize two objective functions under uncertainty, signal energy control and response error control, while achieving robustness across the sources of variability at the lowest computational cost. A new hybrid surrogate-metaheuristic approach is developed, using Particle Swarm Optimization (PSO) to update the Gaussian Process (GP) surrogate for sequential improvement of the robust optimal result. The application of efficient global optimization is extended to estimate the surrogate prediction error at lower computational cost using a jackknife leave-one-out estimator. The paper examines the challenges of such a robust multi-objective optimization for FOPID control of a five-bar linkage robot manipulator. The results show the applicability and effectiveness of the proposed method in obtaining robustness and reliability in a CPS control system while limiting the required computational effort.
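
    The paper's jackknife leave-one-out estimator avoids extra expensive simulations by reusing the training data. A generic sketch of such an estimator for a GP surrogate, assuming scikit-learn; the function name and refitting scheme are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def jackknife_prediction_std(X, y, X_new):
    """Leave-one-out jackknife estimate of surrogate prediction error.

    Refits the GP surrogate n times, each time deleting one training point,
    and turns the spread of the leave-one-out predictions into a standard
    error via jackknife pseudo-values.
    """
    n = len(y)
    y_full = GaussianProcessRegressor(normalize_y=True).fit(X, y).predict(X_new)
    loo = np.array([
        GaussianProcessRegressor(normalize_y=True)
        .fit(np.delete(X, i, axis=0), np.delete(y, i)).predict(X_new)
        for i in range(n)
    ])
    pseudo = n * y_full - (n - 1) * loo       # jackknife pseudo-values
    return y_full, np.sqrt(pseudo.var(axis=0, ddof=1) / n)
```

    The n refits reuse only existing simulation outputs, which is why this error estimate is cheap relative to running new simulations of the robot model.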

    Uncertainty-Integrated Surrogate Modeling for Complex System Optimization

    Approximation models such as surrogate models provide a tractable substitute for expensive physical simulations and an effective solution to the potential lack of quantitative models of system behavior. These capabilities not only enable the efficient design of complex systems, but are also essential for the effective analysis of physical phenomena and characteristics in domains such as Engineering, Material Science, and Biomedical Science. Since these models provide an abstraction of the real system behavior (often a low-fidelity representation), it is important to quantify the accuracy and reliability of such approximation models without investing additional expensive system evaluations (simulations or physical experiments). Standard error measures, such as the mean squared error, the cross-validation error, and Akaike's information criterion, provide limited (often inadequate) information regarding the accuracy of the final surrogate model, while other, more effective, dedicated error measures are tailored towards only one class of surrogate models. This lack of accuracy information, and of the ability to compare and test diverse surrogate models, reduces confidence in model application, restricts appropriate model selection, and undermines the effectiveness of surrogate-based optimization.

    A key contribution of this dissertation is the development of a new model-independent approach to quantify the fidelity of a trained surrogate model in a given region of the design domain, called the Predictive Estimation of Model Fidelity (PEMF). The PEMF method is derived from the hypothesis that the accuracy of an approximation model is related to the amount of data resources leveraged to train the model. In PEMF, intermediate surrogate models are iteratively constructed over heuristic subsets of sample points. The median and the maximum errors estimated over the remaining points are used to determine the respective error distributions at each iteration. The estimated modes of the error distributions are represented as functions of the density of intermediate training points through nonlinear regression, assuming a smooth, decreasing trend of errors with increasing sample density. These regression functions are then used to predict the expected median and maximum errors of the final surrogate model. The model fidelities estimated by PEMF are observed to be up to two orders of magnitude more accurate and statistically more stable than those based on the popular leave-one-out cross-validation method, when applied to a variety of benchmark problems.

    By leveraging this new paradigm for quantifying the fidelity of surrogate models, a novel automated surrogate model selection framework is also developed: the PEMF-based Concurrent Surrogate Model Selection (COSMOS). Unlike existing model selection methods, COSMOS coherently operates at all three levels necessary to facilitate optimal selection: (1) selecting the model type, (2) selecting the kernel function type, and (3) determining the optimal values of the typically user-prescribed parameters. The selection criteria that guide optimal model selection are determined by PEMF, and the search process is performed using a MINLP solver.
    The effectiveness of COSMOS is demonstrated by successfully applying it to different benchmark and practical engineering problems, where it offers a first-of-its-kind globally competitive model selection. The knowledge about the accuracy of a surrogate estimated using PEMF is also applied to develop a novel model management approach for engineering optimization. This approach adaptively selects computational models (both physics-based models and surrogate models) of differing levels of fidelity and computational cost to be used during optimization, with the overall objective of yielding optimal designs with high-fidelity function estimates at a reasonable computational expense. In this technique, a new adaptive model switching (AMS) metric is defined to guide the switch from one model to the next higher-fidelity model during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model output dominates the latest improvement of the relative fitness function, where both the model-output uncertainty and the function improvement (across the population) are expressed as probability distributions. This adaptive model switching technique is applied to two practical problems through Particle Swarm Optimization to illustrate: (i) the computational advantage of this method over purely high-fidelity model-based optimization, and (ii) the accuracy advantage of this method over purely low-fidelity model-based optimization.

    Motivated by the unique capabilities of the model switching concept, a new model refinement approach is also developed in this dissertation. The model refinement approach can be perceived as an adaptive sequential sampling approach applied in surrogate-based optimization. Decisions regarding when to perform additional system evaluations to refine the model are guided by the same model-uncertainty principles as in the adaptive model switching technique. The effectiveness of this new model refinement technique is illustrated through application to practical surrogate-based optimization in the area of energy sustainability.
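
    To make the PEMF idea concrete, here is a much-simplified Python sketch of the core loop, under stated assumptions: a GP surrogate via scikit-learn, the median of median errors as a stand-in for the mode of the fitted error distribution, and a power law as the "smooth decreasing trend". It is not the dissertation's code.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor

def pemf_style_error_estimate(X, y, n_iters=4, n_repeats=20, seed=0):
    """Sketch of the PEMF idea: train intermediate surrogates on growing
    subsets, record the median absolute error on the held-out points at each
    training density, fit a decreasing power law to those medians, and
    extrapolate it to the full sample size to predict the final model's error.
    Assumes len(y) is large enough that every split leaves test points."""
    rng = np.random.default_rng(seed)
    n = len(y)
    sizes = np.linspace(0.4 * n, 0.9 * n, n_iters).astype(int)
    med_err = []
    for k in sizes:
        errs = []
        for _ in range(n_repeats):                 # heuristic subsets per density
            idx = rng.permutation(n)
            tr, te = idx[:k], idx[k:]
            gp = GaussianProcessRegressor(normalize_y=True).fit(X[tr], y[tr])
            errs.append(np.median(np.abs(gp.predict(X[te]) - y[te])))
        med_err.append(np.median(errs))            # stand-in for the mode
    decay = lambda k, a, b: a * k ** (-b)          # smooth decreasing regression
    (a, b), _ = curve_fit(decay, sizes, med_err, p0=(1.0, 0.5), maxfev=10000)
    return decay(n, a, b)   # predicted median error of the final surrogate
```

    Because the extrapolation target (error at full sample size) is never measured directly, the quality of the prediction rests on the assumed monotone decay of error with training density, which is exactly the hypothesis PEMF is built on.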

    Gaussian processes with linear operator inequality constraints

    This paper presents an approach for constrained Gaussian Process (GP) regression where we assume that a set of linear transformations of the process are bounded. It is motivated by machine learning applications for high-consequence engineering systems, where this kind of information is often made available from phenomenological knowledge. We consider a GP $f$ over functions on $\mathcal{X} \subset \mathbb{R}^{n}$ taking values in $\mathbb{R}$, where the process $\mathcal{L}f$ is still Gaussian when $\mathcal{L}$ is a linear operator. Our goal is to model $f$ under the constraint that realizations of $\mathcal{L}f$ are confined to a convex set of functions. In particular, we require that $a \leq \mathcal{L}f \leq b$, given two functions $a$ and $b$ where $a < b$ pointwise. This formulation provides a consistent way of encoding multiple linear constraints, such as shape constraints based on, e.g., boundedness, monotonicity or convexity. We adopt the approach of using a sufficiently dense set of virtual observation locations where the constraint is required to hold, and derive the exact posterior for a conjugate likelihood. The results needed for stable numerical implementation are derived, together with an efficient sampling scheme for estimating the posterior process. (Published in JMLR: http://jmlr.org/papers/volume20/19-065/19-065.pdf)
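
    The paper derives the exact truncated posterior and an efficient sampler; as a much cruder but self-contained illustration of the virtual-observation idea, here is a rejection-sampling sketch in Python with $\mathcal{L}$ taken as the identity (a boundedness constraint $a \leq f \leq b$). The kernel choice, function names, and the rejection step are my assumptions, not the paper's scheme.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between rows of A and B (shape (n, d))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def bounded_gp_samples(X, y, Xv, Xs, a, b, n_keep=100, noise=1e-6, seed=0):
    """Rejection-sampling stand-in for the exact truncated posterior: draw
    joint GP posterior samples at virtual locations Xv and prediction
    locations Xs, keeping only draws with a <= f(Xv) <= b everywhere on Xv."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([Xv, Xs])
    K = rbf(X, X) + noise * np.eye(len(X))      # noisy training covariance
    Kzx, Kzz = rbf(Z, X), rbf(Z, Z)
    mu = Kzx @ np.linalg.solve(K, y)            # joint posterior mean
    cov = Kzz - Kzx @ np.linalg.solve(K, Kzx.T) # joint posterior covariance
    kept, tries = [], 0
    while len(kept) < n_keep and tries < 100 * n_keep:
        tries += 1
        f = rng.multivariate_normal(mu, cov + 1e-9 * np.eye(len(mu)))
        fv = f[:len(Xv)]
        if np.all((a <= fv) & (fv <= b)):       # constraint at virtual points
            kept.append(f[len(Xv):])            # constrained prediction sample
    return np.array(kept)
```

    Rejection sampling degrades badly as the constraint set shrinks or the number of virtual locations grows, which is precisely why the paper works out the exact posterior and a dedicated sampling scheme instead.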

    Design and analysis of computer experiments for stochastic systems

    Doctor of Philosophy (Ph.D.) thesis

    Grid-enabled adaptive surrogate modeling for computer aided engineering
