13 research outputs found

    Response Surface Methodology's Steepest Ascent and Step Size Revisited

    Response Surface Methodology (RSM) searches for the input combination maximizing the output of a real system or its simulation. RSM is a heuristic that locally fits first-order polynomials and estimates the corresponding steepest ascent (SA) paths. However, SA is scale-dependent, and its step size is selected intuitively. To tackle these two problems, this paper derives novel techniques combining mathematical statistics and mathematical programming. Technique 1, called 'adapted' SA (ASA), accounts for the covariances between the components of the estimated local gradient; ASA is scale-independent. The step-size problem is solved tentatively. Technique 2 does follow the SA direction, but with a step size inspired by ASA. Mathematical properties of the two techniques are derived and interpreted; numerical examples illustrate these properties. The search directions of the two techniques are explored in Monte Carlo experiments. These experiments show that, in general, ASA gives a better search direction than SA.

    Keywords: response surface methodology
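
    The abstract gives no formulas, so the contrast between the two directions can only be sketched. Below is a minimal Python illustration, assuming (as the abstract's wording suggests) that ASA premultiplies the estimated gradient by the inverse of its estimated covariance matrix, which is one way to obtain a scale-independent direction; the paper's exact ASA direction and step-size rule may differ.

        import numpy as np

        def sa_and_asa_directions(X, y):
            """Fit a local first-order polynomial y = beta0 + x'beta + e by OLS
            and return the classic steepest-ascent (SA) direction together with
            a covariance-adjusted direction in the spirit of 'adapted' SA (ASA).

            X: (n, k) matrix of input combinations (n > k + 1); y: (n,) outputs.
            """
            n, k = X.shape
            Z = np.column_stack([np.ones(n), X])        # design matrix with intercept
            ZtZ_inv = np.linalg.inv(Z.T @ Z)
            beta_hat = ZtZ_inv @ (Z.T @ y)              # OLS coefficient estimates
            resid = y - Z @ beta_hat
            sigma2 = resid @ resid / (n - k - 1)        # residual variance estimate

            grad_hat = beta_hat[1:]                     # estimated local gradient
            cov_grad = sigma2 * ZtZ_inv[1:, 1:]         # its estimated covariance

            sa = grad_hat                               # SA: follow the raw gradient
            asa = np.linalg.solve(cov_grad, grad_hat)   # assumed ASA-style adjustment
            return sa, asa

    Rescaling an input rescales both the estimated gradient and its covariance, so the covariance-adjusted direction is unaffected by the choice of measurement units, which is the scale-independence property the abstract claims for ASA.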

    Response surface methodology revisited


    Statistical Testing of Optimality Conditions in Multiresponse Simulation-based Optimization (Revision of 2005-81)

    This paper studies simulation-based optimization with multiple outputs. It assumes that the simulation model has one random objective function and must satisfy given constraints on the other random outputs. It presents a statistical procedure for testing whether a specific input combination (proposed by some optimization heuristic) satisfies the Karush-Kuhn-Tucker (KKT) first-order optimality conditions. The paper focuses on "expensive" simulations, which have small sample sizes. The paper applies the classic t test to check whether the specific input combination is feasible, and whether any constraints are binding; it applies bootstrapping (resampling) to test the estimated gradients in the KKT conditions. The new methodology is applied to three examples, which gives encouraging empirical results.

    Keywords: stopping rule; metaheuristics; response surface methodology; design of experiments
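
    To make the two stages concrete, here is a hedged Python sketch: a t test per constraint classifies the candidate point as infeasible, binding, or non-binding, and a bootstrap over replicated gradient estimates then tests KKT stationarity. The function name, the sign convention (constraints feasible when at most zero, maximization of the objective), and the residual tolerance are illustrative assumptions, not the paper's exact procedure.

        import numpy as np
        from scipy import stats
        from scipy.optimize import nnls

        def kkt_test(obj_grads, con_grads, con_outputs, alpha=0.05, tol=1e-2, B=1000):
            """Two-stage KKT check at a candidate input combination (illustrative).

            obj_grads:   (m, k) replicated estimates of the objective gradient
            con_grads:   dict name -> (m, k) replicated constraint-gradient estimates
            con_outputs: dict name -> (m,) replicated constraint outputs (feasible <= 0)
            """
            rng = np.random.default_rng(0)
            m = obj_grads.shape[0]

            # Stage 1: classic t tests classify each constraint at this point.
            binding = []
            for name, out in con_outputs.items():
                _, p = stats.ttest_1samp(out, 0.0)
                if p > alpha:
                    binding.append(name)        # cannot reject "mean = 0": binding
                elif out.mean() > 0:
                    return {"feasible": False}  # mean significantly positive: violated

            # Stage 2: bootstrap the stationarity condition. For a maximization
            # problem with constraints g_j <= 0, KKT requires
            #   grad f = sum_j lambda_j * grad g_j over binding j, lambda_j >= 0.
            rejections = 0
            for _ in range(B):
                idx = rng.integers(0, m, size=m)        # resample the m replicates
                g = obj_grads[idx].mean(axis=0)
                if binding:
                    A = np.column_stack([con_grads[n][idx].mean(axis=0)
                                         for n in binding])
                    _, res = nnls(A, g)                 # best nonnegative multipliers
                else:
                    res = np.linalg.norm(g)             # interior point: need grad f ~ 0
                rejections += res > tol                 # tol: absolute residual tolerance
            return {"feasible": True, "binding": binding, "reject_rate": rejections / B}

    A high reject rate across bootstrap resamples is evidence against the KKT conditions holding at the candidate point, so the optimization heuristic should continue searching.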

    Statistical Testing of Optimality Conditions in Multiresponse Simulation-Based Optimization (Replaced by Discussion Paper 2007-45)

    This paper derives a novel procedure for testing the Karush-Kuhn-Tucker (KKT) first-order optimality conditions in models with multiple random responses. Such models arise in simulation-based optimization with multivariate outputs. This paper focuses on expensive simulations, which have small sample sizes. The paper estimates the gradients (in the KKT conditions) through low-order polynomials, fitted locally. These polynomials are estimated using Ordinary Least Squares (OLS), which also enables estimation of the variability of the estimated gradients. Using these OLS results, the paper applies the bootstrap (resampling) method to test the KKT conditions. Furthermore, it applies the classic Student t test to check whether the simulation outputs are feasible, and whether any constraints are binding. The paper applies the new procedure to both a synthetic example and an inventory simulation; the empirical results are encouraging.

    Keywords: stopping rule; metaheuristics; RSM; design of experiments

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the others. Traditional approaches for solving multiobjective optimization problems typically scalarize the multiple objectives into a single objective. This transforms the original multiobjective problem formulation into a single-objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or noisy) values due to random influences within the system being optimized, as is the case in real-world environments. Moreover, in stochastic environments a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate candidate solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial.

    This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments; SPGA uses a solution ranking strategy based on these new concepts.

    Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), which is widely considered the benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
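
    The dissertation's own dominance definitions are not reproduced in this abstract, but the core idea of discriminating among noisy solutions can be sketched. Below is a generic Python illustration contrasting classic Pareto dominance with a significance-based variant using Welch's t test per objective; it is a stand-in for, not a reproduction of, SPGA's ranking concepts.

        import numpy as np
        from scipy import stats

        def dominates(f_a, f_b):
            """Classic Pareto dominance (minimization): a dominates b if it is
            no worse in every objective and strictly better in at least one."""
            f_a, f_b = np.asarray(f_a), np.asarray(f_b)
            return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

        def significantly_dominates(samples_a, samples_b, alpha=0.05):
            """Noise-aware variant: a dominates b only if no objective of a is
            significantly worse and at least one is significantly better,
            judged by Welch's t test on the replicated observations.

            samples_a, samples_b: lists holding one 1-D array of noisy
            observations per objective (minimization).
            """
            any_better = False
            for obj_a, obj_b in zip(samples_a, samples_b):  # one array per objective
                _, p = stats.ttest_ind(obj_a, obj_b, equal_var=False)
                if p < alpha:
                    if np.mean(obj_a) < np.mean(obj_b):
                        any_better = True               # significantly better here
                    else:
                        return False                    # significantly worse here
            return any_better

    A nondominated-sorting loop built on either predicate yields the Pareto ranking that such MOEAs maintain from generation to generation.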

    Black box simulation optimization: Generalized response surface methodology

    The thesis consists of three papers in the area of Response Surface Methodology (RSM). The first paper deals with optimization problems with a single random objective. The contributions of that paper are a scale-independent search direction and possible solutions for the step size. The second paper considers optimization problems with a single random objective and multiple random constraints, as well as deterministic box constraints. That paper extends RSM to cope with constraints, using ideas from interior point methods. Furthermore, it provides a heuristic to quickly reach a neighborhood of the optimum. The third paper copes with optimization problems with a single random objective and multiple random constraints. The contribution of that paper is an asymptotic stopping rule that tests the first-order necessary optimality conditions at a feasible point.
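
    The second paper's use of interior point ideas is not spelled out here, but the general flavor can be suggested with a minimal log-barrier sketch in Python: the constrained problem is replaced by an unconstrained merit function that keeps the search strictly inside the feasible region. This illustrates the generic interior point idea under assumed conventions, not the thesis's actual search direction or step-size rule.

        import numpy as np

        def barrier_merit(f, constraints, x, mu):
            """Log-barrier merit function for minimizing f(x) subject to
            g_j(x) <= 0; finite only strictly inside the feasible region, so a
            search minimizing it stays interior. Driving mu -> 0 recovers the
            original constrained problem."""
            g = np.array([g_j(x) for g_j in constraints])
            if np.any(g >= 0):
                return np.inf       # on or outside the boundary: reject the point
            return f(x) - mu * np.sum(np.log(-g))

        # Example: minimize x0 + x1 subject to x0 >= 1, written as 1 - x0 <= 0.
        value = barrier_merit(lambda x: x[0] + x[1], [lambda x: 1.0 - x[0]],
                              np.array([2.0, 0.0]), mu=0.1)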