
    Ranking and Selection under Input Uncertainty: Fixed Confidence and Fixed Budget

    In stochastic simulation, input uncertainty (IU) arises from the error in estimating the input distributions from finite real-world data. In simulation-based Ranking and Selection (R&S), ignoring IU can cause many existing selection procedures to fail. In this paper, we study R&S under IU while allowing the possibility of acquiring additional data. Two classical R&S formulations are extended to account for IU: (i) under fixed confidence, we consider data arriving sequentially so that IU can be reduced over time; (ii) under fixed budget, a joint budget is assumed to be available for both collecting input data and running simulations. New procedures are proposed for each formulation using the frameworks of Sequential Elimination and Optimal Computing Budget Allocation, with corresponding theoretical guarantees (e.g., an upper bound on the expected running time and a finite-sample bound on the probability of false selection). Numerical results on a multi-stage production-inventory problem demonstrate the effectiveness of our procedures.

    Economic Analysis of Simulation Selection Problems

    Ranking and selection procedures are standard methods for selecting the best of a finite number of simulated design alternatives based on a desired level of statistical evidence for correct selection. But the link between statistical significance and financial significance is indirect, and there has been little or no research into it. This paper presents a new approach to the simulation selection problem, one that maximizes the expected net present value of decisions made when using stochastic simulation. We provide a framework for answering these managerial questions: When does a proposed system design, whose performance is unknown, merit the time and money needed to develop a simulation to infer its performance? For how long should the simulation analysis continue before a design is approved or rejected? We frame the simulation selection problem as a "stoppable" version of a Bayesian bandit problem that treats the ability to simulate as a real option prior to project implementation. For a single proposed system, we solve a free boundary problem for a heat equation that approximates the solution to a dynamic program that finds optimal simulation project stopping times and that answers the managerial questions. For multiple proposed systems, we extend previous Bayesian selection procedures to account for discounting and simulation-tool development costs.
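A crude way to see the stop-or-simulate trade-off is a one-step-lookahead rule for a single proposed system: compare the value of deciding now (implement if the expected NPV is positive, else abandon) with the expected value after one more costly replication. This is a simplified heuristic sketch, not the paper's free-boundary/heat-equation solution; the Normal prior, noise model, and all parameter names are assumptions.

```python
import math
from statistics import NormalDist

def one_step_lookahead_stop(mu0, sigma0, sigma_e, cost):
    """Decide whether one more simulation replication is worth its cost.

    Assumptions: the unknown NPV has a Normal(mu0, sigma0^2) prior, each
    replication observes it with Normal(0, sigma_e^2) noise and costs
    `cost`; stopping yields max(E[NPV], 0) (implement or abandon).
    """
    nd = NormalDist()
    stop_value = max(mu0, 0.0)
    # Std of the change in the posterior mean after one more observation.
    s = sigma0 ** 2 / math.sqrt(sigma0 ** 2 + sigma_e ** 2)
    # E[max(mu', 0)] for mu' ~ Normal(mu0, s^2), minus the sampling cost.
    continue_value = mu0 * nd.cdf(mu0 / s) + s * nd.pdf(mu0 / s) - cost
    return "continue" if continue_value > stop_value else "stop"
```

When the prior is diffuse and sampling is cheap, the option value of simulating dominates and the rule continues; as the cost grows it stops, which is the qualitative behaviour the paper's optimal boundary characterises exactly.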

    Modified Selection Mechanisms Designed to Help Evolution Strategies Cope with Noisy Response Surfaces

    With the rise in the application of evolution strategies for simulation optimization, a better understanding of how these algorithms are affected by the stochastic output produced by simulation models is needed. At very high levels of stochastic variance in the output, evolution strategies in their standard form experience difficulty locating the optimum. The degradation of the performance of evolution strategies in the presence of very high levels of variation can be attributed to the decrease in the proportion of solutions correctly selected as parents, from which offspring solutions are generated. The proportion of solutions correctly selected as parents can be increased by conducting additional replications for each solution. However, experimental evaluation suggests that a very high proportion of correctly selected parents is not required; a proportion of around 0.75 seems sufficient for evolution strategies to perform adequately. Integrating statistical techniques into the algorithm's selection process does help evolution strategies cope with high levels of noise. There are four categories of techniques: statistical ranking and selection techniques, multiple comparison procedures, clustering techniques, and other techniques. Experimental comparison of the indifference zone selection procedure by Dudewicz and Dalal (1975), the sequential procedure by Kim and Nelson (2001), Tukey's procedure, the clustering procedure by Calinski and Corsten (1985), and Scheffe's procedure under similar conditions suggests that the sequential ranking and selection procedure by Kim and Nelson (2001) helps evolution strategies cope with noise using the smallest number of replications. However, all of the techniques required a rather large number of replications, which suggests that better methods are needed.
Experimental results also indicate that a statistical procedure is especially required during the later generations, when solutions are spaced closely together in the search space (response surface).
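The simplest version of the replication idea discussed above is a (mu, lambda)-style selection step that averages a fixed number of replications per offspring before ranking; the sequential procedures the thesis compares (e.g., Kim and Nelson 2001) would instead adapt the number of replications per solution. A minimal sketch, with `noisy_eval` and the parameter names as assumptions:

```python
import random
from statistics import mean

def select_parents(offspring, noisy_eval, mu, n_reps):
    """Select the mu best of the offspring under noisy evaluation by
    averaging n_reps replications per solution (lower score is better).

    Fixed n_reps is the naive approach; ranking-and-selection procedures
    allocate replications adaptively to reach a target probability of
    correct selection with fewer samples.
    """
    scored = [(mean(noisy_eval(x) for _ in range(n_reps)), x)
              for x in offspring]
    scored.sort(key=lambda pair: pair[0])   # minimisation
    return [x for _, x in scored[:mu]]
```

With enough replications the standard error of each average shrinks below the spacing between solutions, which is exactly what becomes hard in later generations when solutions cluster.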

    Online Appendix for “Gradient-Based Myopic Allocation Policy: An Efficient Sampling Procedure in a Low-Confidence Scenario”

    This is the online appendix, which includes theoretical and numerical supplements containing some technical details and three additional numerical examples, which could not fit in the main body due to the journal's page limits for a technical note. The abstract for the main body is as follows: In this note, we study a simulation optimization problem of selecting the alternative with the best performance from a finite set, or a so-called ranking and selection problem, in a special low-confidence scenario. The most popular sampling allocation procedures in ranking and selection do not perform well in this scenario, because they all ignore certain induced correlations that significantly affect the probability of correct selection. We propose a gradient-based myopic allocation policy (G-MAP) that takes the induced correlations into account, reflecting a trade-off between the induced correlation and the two factors (mean and variance) found in the optimal computing budget allocation formula. Numerical experiments substantiate the efficiency of the new procedure in the low-confidence scenario. This work was supported in part by the National Science Foundation (NSF) under Grants CMMI-0856256, CMMI-1362303, and CMMI-1434419, by the National Natural Science Foundation of China (NSFC) under Grant 71571048, by the Air Force Office of Scientific Research (AFOSR) under Grant FA9550-15-10050, and by the Science and Technology Agency of Sichuan Province under Grant 2014GZX0002.
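The "optimal computing budget allocation formula" mentioned in the abstract refers to the classic OCBA ratios (Chen et al.): alternatives that are close to the current best and/or noisy receive more replications, while the best alternative's share balances against all the rest. This sketch shows only that baseline formula, not G-MAP's correlation-aware extension:

```python
import math

def ocba_allocation(means, stds, total_budget):
    """Classic OCBA budget split for selecting the largest mean.

    For i != b (b = current sample best):  N_i proportional to (s_i / d_i)^2,
    where d_i = means[b] - means[i];  N_b = s_b * sqrt(sum_i (N_i / s_i)^2).
    Returns per-alternative budgets scaled to total_budget.
    """
    k = len(means)
    b = max(range(k), key=lambda i: means[i])
    ratios = [0.0] * k
    for i in range(k):
        if i != b:
            delta = means[b] - means[i]
            ratios[i] = (stds[i] / delta) ** 2
    ratios[b] = stds[b] * math.sqrt(sum((ratios[i] / stds[i]) ** 2
                                        for i in range(k) if i != b))
    scale = total_budget / sum(ratios)
    return [r * scale for r in ratios]
```

In the low-confidence scenario studied in the note, these mean-variance ratios alone are insufficient, which is the gap G-MAP addresses.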

    One stage multiple comparisons with the average for exponential location parameters under heteroscedasticity

    Two-stage multiple comparisons with the average for location parameters of two-parameter exponential distributions under heteroscedasticity are proposed by Wu and Wu [Wu, S.F., Wu, C.C., 2005. Two stage multiple comparisons with the average for exponential location parameters under heteroscedasticity. Journal of Statistical Planning and Inference 134, 392–408]. When the additional sample for the second stage may not be available, one-stage procedures including one-sided and two-sided confidence intervals are proposed in this paper. These intervals can be used to identify a subset which includes all no-worse-than-the-average treatments in an experimental design, and to identify better-than-the-average, worse-than-the-average, and not-much-different-from-the-average products in agriculture, the stock market, and the pharmaceutical industry. Tables of upper limits of critical values are obtained using the technique given in Lam [Lam, K., 1987. Subset selection of normal populations under heteroscedasticity. In: Proceedings of the Second International Advanced Seminar/Workshop on Inference Procedures Associated with Statistical Ranking and Selection. Sydney, Australia, August 1987; Lam, K., 1988. An improved two-stage selection procedure. Communications in Statistics—Simulation and Computation 17 (3), 995–1006]. An example of comparing four drugs in the treatment of leukemia is given to demonstrate the proposed procedures. The relationship between the one-stage and the two-stage procedures is also elaborated in this paper.
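For intuition, the point estimates behind such comparisons-with-the-average are simple: for two-parameter exponential data the maximum-likelihood estimate of the location parameter is the sample minimum, and each treatment is compared against the average of these estimates. This sketch forms only the point estimates and their differences; the paper's one-stage intervals widen these differences using tabulated critical values, which are not reproduced here.

```python
def location_vs_average(samples):
    """For each treatment's sample (a list of observations), return
    (estimated location, difference from the average location).

    Location estimate = sample minimum (MLE for the two-parameter
    exponential); confidence-interval margins would be added from the
    paper's critical-value tables.
    """
    mins = [min(s) for s in samples]
    avg = sum(mins) / len(mins)
    return [(m, m - avg) for m in mins]
```

A positive difference flags a candidate better-than-the-average treatment (for a "larger location is better" convention), subject to the interval covering zero or not.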

    Asymptotic Validity of the Bayes-Inspired Indifference Zone Procedure: The Non-Normal Known Variance Case

    We consider the indifference-zone (IZ) formulation of the ranking and selection problem, in which the goal is to choose an alternative with the largest mean with guaranteed probability, as long as the difference between this mean and the second largest exceeds a threshold. Conservatism leads classical IZ procedures to take too many samples in problems with many alternatives. The Bayes-inspired Indifference Zone (BIZ) procedure, proposed in Frazier (2014), is less conservative than previous procedures, but its proof of validity requires strong assumptions, specifically that samples are normal and variances are known with an integer multiple structure. In this paper, we show asymptotic validity of a slight modification of the original BIZ procedure as the difference between the best alternative and the second best goes to zero, when the variances are known and finite, and samples are independent and identically distributed, but not necessarily normal.
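The conservatism the abstract refers to is easiest to see in the textbook single-stage IZ baseline. For k = 2 alternatives with known common variance, the sample size that guarantees the IZ probability of correct selection follows directly from the distribution of the difference of sample means; this is a standard baseline sketch, far more conservative than BIZ for large k:

```python
import math
from statistics import NormalDist

def iz_sample_size_two_alternatives(sigma, delta, p_star):
    """Single-stage IZ sample size, k = 2, known common variance sigma^2.

    The difference of sample means ~ Normal(gap, 2 * sigma^2 / N), so
    requiring Phi(delta * sqrt(N / 2) / sigma) >= p_star whenever the
    true gap is at least delta gives N >= 2 * (z * sigma / delta)^2,
    with z the p_star Normal quantile.
    """
    z = NormalDist().inv_cdf(p_star)
    return math.ceil(2 * (z * sigma / delta) ** 2)
```

Worst-case bounds of this kind grow quickly with the number of alternatives, which is what motivates less conservative sequential procedures such as BIZ.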

    Multiobjective simulation-based methodologies for medical decision making.

    A variety of methodologies have been employed for decision making related to the treatment of diseases and injuries. Decision trees are a functional way to examine problems under uncertainty, providing a method to analyze decisions under risk (Detsky, 1996, 97). However, conventional decision trees do not completely represent the real world, since they cannot investigate problems that are cyclic in nature (Jaafari, 2003). The stochastic tree, developed by Hazen during 1992-1996, is one of the most relevant decision-analysis techniques for modeling medical interventions for recurring diseases and injuries. The approach combines features of continuous-time Markov chains with those of decision trees, enabling time to be modeled continuously so that health-state transitions can occur at any instant (Hazen, 1992-96). It can also accommodate patients' preferences regarding risk and quality of life. In this research we enhance Hazen's stochastic tree by developing an analytical model, and we extend its capabilities by developing multi-objective simulation-based methodologies for medical decision making. First, we improve the model by utilizing the Weibull Accelerated Failure Time model; this new technique fills the gap between experimental circumstances and the corresponding conditions of standard or current treatment. Second, since simulation can be a final alternative for problems that are mathematically intractable for other techniques (Banks, 1996), our multi-objective simulation-based model for medical decision making extends the capabilities of Hazen's stochastic tree. It adds flexibility through the use of survival distributions for health-state sojourns, and combines two sound theories: multi-attribute utility (MAU) theory and ranking-and-selection procedures.
Indeed, our simulation model (considering the patient's profile and preferences and health-state survival, quality, and cost, i.e., QALY) investigates the use of simulation on the stochastic tree, with associated techniques from ranking and selection and multi-objective decision analysis.
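The core simulation step described above, Weibull sojourn times in successive health states with utility-weighted time accumulating as QALYs, can be sketched in a few lines. The three-state chain, its Weibull parameters, and the utility weights below are illustrative assumptions, not values from the thesis:

```python
import random

# Hypothetical 3-state model: "well" -> "sick" -> "dead", with Weibull
# sojourn times (shape, scale in years) and a utility weight per state.
STATES = {
    "well": {"next": "sick", "shape": 1.5, "scale": 10.0, "utility": 1.0},
    "sick": {"next": "dead", "shape": 1.2, "scale": 3.0,  "utility": 0.6},
}

def simulate_qaly(horizon=40.0):
    """One pass through the stochastic tree: accumulate quality-adjusted
    life years (QALYs) until absorption in "dead" or the time horizon."""
    t, qaly, state = 0.0, 0.0, "well"
    while state != "dead" and t < horizon:
        info = STATES[state]
        # random.weibullvariate(alpha, beta): alpha = scale, beta = shape.
        sojourn = random.weibullvariate(info["scale"], info["shape"])
        sojourn = min(sojourn, horizon - t)
        qaly += sojourn * info["utility"]
        t += sojourn
        state = info["next"]
    return qaly
```

Averaging many such passes per treatment arm yields the QALY estimates that the ranking-and-selection and multi-attribute utility layers then compare across treatments.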