
    Arguments for and against the use of multiple comparison control in stochastic simulation studies

    Pick up any of the standard discrete-event simulation textbooks and you will find that the output analysis section includes a note on multiple comparison control (MCC). These procedures aim to mitigate the problem of inflating the probability of making a single Type I error when comparing many simulated scenarios simultaneously. We consider the use of MCC in stochastic simulation studies and present an argument discouraging its use in the classical sense. In particular, we focus on the impracticality of the procedures, the benefits of common random numbers, and the fact that simulation is very different from the empirical studies where MCC has its roots. We then consider in what instances abandoning MCC altogether would be problematic and what alternatives are available. We argue that medium to large exploratory studies should move their attention away from classical Type I errors and instead control a subtly different quantity: the rate of false positives amongst all ‘discoveries’.
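The "rate of false positives amongst all discoveries" that this abstract advocates controlling is the false discovery rate. A minimal sketch of the standard Benjamini-Hochberg step-up procedure is below; the p-values are purely illustrative, not drawn from any real simulation study.

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return sorted indices of hypotheses rejected at FDR level q.

    Step-up rule: find the largest rank k with p_(k) <= (k/m) * q,
    then reject the k smallest p-values.
    """
    m = len(p_values)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])

# Illustrative p-values from comparing 10 hypothetical scenarios.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(p, q=0.05)
```

Unlike a Bonferroni correction, the threshold grows with the rank, so the procedure stays usable as the number of compared scenarios becomes large.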

    Efficient Simulation Budget Allocation for Ranking the Top m


    A User's Guide to the Brave New World of Designing Simulation Experiments

    Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. In this paper, we discuss a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their computational experiments. Furthermore, we provide a research agenda listing problems in the design of simulation experiments -as opposed to real-world experiments- that require more investigation. We consider three types of practical problems: (1) developing a basic understanding of a particular simulation model or system; (2) finding robust decisions or policies; and (3) comparing the merits of various decisions or policies. Our discussion emphasizes aspects that are typical for simulation, such as sequential data collection. Because the same problem type may be addressed through different design types, we discuss quality attributes of designs. Furthermore, the selection of the design type depends on the metamodel (response surface) that the analysts tentatively assume; for example, more complicated metamodels require more simulation runs. For the validation of the metamodel estimated from a specific design, we present several procedures.
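One of the most basic designs in any DOE toolkit is the 2^k full factorial, which runs every combination of two levels per factor. A minimal sketch follows; the factor names and levels are made up for illustration and are not taken from the paper.

```python
from itertools import product

def full_factorial(factors):
    """factors: dict mapping factor name -> tuple of levels.

    Returns the full factorial design as a list of dicts,
    one dict per design point (simulation run).
    """
    names = list(factors)
    levels = [factors[n] for n in names]
    return [dict(zip(names, combo)) for combo in product(*levels)]

# Hypothetical simulation inputs with (low, high) levels.
design = full_factorial({
    "arrival_rate": (5, 10),
    "num_servers": (2, 4),
    "service_time": (0.5, 1.0),
})
# A 2^3 design: 8 runs covering every level combination.
```

For more than a handful of factors the run count explodes (2^k), which is one reason the paper discusses alternative design types such as fractional and sequential designs.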

    A Practical Approach to Subset Selection for Multi-objective Optimization via Simulation

    This is the author accepted manuscript. The final version is available from ACM via the DOI in this record.
    We describe a practical two-stage algorithm, BootComp, for multi-objective optimization via simulation. Our algorithm finds a subset of good designs that a decision-maker can compare to identify the one that works best when considering all aspects of the system, including those that cannot be modeled. BootComp is designed to be straightforward to implement by a practitioner with basic statistical knowledge in a simulation package that does not support sequential ranking and selection. These requirements restrict us to a two-stage procedure that works with any distributions of the outputs and allows for the use of common random numbers. Comparisons with sequential ranking and selection methods suggest that it performs well, and we also demonstrate its use analyzing a real simulation aiming to determine the optimal ward configuration for a UK hospital.
    Funding: National Institute for Health Research (NIHR).
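This is not the published BootComp algorithm, but the bootstrap-plus-common-random-numbers idea behind it can be sketched: a percentile interval for the paired difference in mean output between two designs, where replications are paired because both designs consumed the same random streams. The data and interval settings are illustrative assumptions.

```python
import random

def bootstrap_diff_ci(x, y, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for mean(x - y), with x[i], y[i]
    paired by common random numbers (same stream, replication i)."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    means = []
    for _ in range(n_boot):
        # Resample paired differences with replacement.
        sample = [diffs[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative mean waiting times from 8 paired replications.
design_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
design_b = [11.2, 11.0, 11.6, 11.1, 11.3, 11.5, 11.4, 10.9]
low, high = bootstrap_diff_ci(design_a, design_b)
# If the interval excludes zero, the two designs are distinguishable
# on this output measure.
```

Working with the paired differences is what lets the procedure remain distribution-free while still exploiting the variance reduction from common random numbers.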

    New Procedures to Select the Best Simulated System Using Common Random Numbers

    Although simulation is widely used to select the best of several alternative system designs, and common random numbers are an important tool for reducing the computational effort of simulation experiments, there are surprisingly few tools available to help a simulation practitioner select the best system when common random numbers are employed. This paper presents new two-stage procedures that use common random numbers to help identify the best simulated system. The procedures allow for screening and attempt to allocate additional replications to improve the value of information obtained during the second stage, rather than determining the number of replications required to provide a given probability of correct selection guarantee. The procedures allow decision makers to reduce either the expected opportunity cost associated with potentially selecting an inferior system, or the probability of incorrect selection. A small empirical study indicates that the new procedures outperform several procedures with respect to several criteria, and identifies potential areas for further improvement.
    Keywords: Multiple Selection, Ranking and Selection, Discrete-Event Simulation, Common Random Numbers, Missing Data, Bayesian Statistics
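The variance-reduction mechanism behind common random numbers can be shown in a few lines: feed two hypothetical systems the same uniform stream via inverse-transform sampling, and their paired differences become far less noisy than with independent streams. All parameters here are illustrative assumptions, not from the paper.

```python
import math
import random

def sample_service_times(mean, uniforms):
    """Inverse-transform exponential sampling from a shared uniform stream."""
    return [-mean * math.log(1 - u) for u in uniforms]

rng = random.Random(42)
u = [rng.random() for _ in range(1000)]  # one COMMON stream for both systems

fast = sample_service_times(0.9, u)  # system A, mean service time 0.9
slow = sample_service_times(1.0, u)  # system B, same random numbers

paired_diffs = [b - a for a, b in zip(fast, slow)]
avg_diff = sum(paired_diffs) / len(paired_diffs)
# Because both systems saw identical uniforms, every paired difference
# is 0.1 * (-log(1 - u)) > 0: positively correlated outputs make the
# difference estimator dramatically less variable.
```

With independent streams, the sign of individual differences would fluctuate and many more replications would be needed to resolve which system is faster.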

    OPTIMAL COMPUTING BUDGET ALLOCATION FOR SIMULATION BASED OPTIMIZATION AND COMPLEX DECISION MAKING

    Ph.D. thesis.

    Evaluation and Design of Supply Chain Operations using DEA

    Performance evaluation has been one of the most critical components in management. As production systems nowadays consist of a growing number of integrated and interacting processes, the interrelationships and dynamics among processes have created a major challenge in measuring system and process performance. Meanwhile, rapid information obsolescence has become commonplace in today’s high-velocity environment. Managers therefore need to make process design decisions based on incomplete information regarding the future market. This thesis studies the above problems in the evaluation and design of complex production systems. Based on the widely used Data Envelopment Analysis models, we develop a generalized methodology to evaluate the dynamic efficiency of production networks. Our method evaluates both the supply network and its constituent firms in a systematic way. The evaluation result can help identify inefficiency in the network, which is important information for improving the network performance. Part II of the thesis covers multi-criteria process design methods developed for situations of different information availability. Our design approaches combine interdisciplinary techniques to facilitate efficient decision-making in situations with limited information and high uncertainty. As an illustration, we apply these approaches to project selection and resource allocation problems in a supply chain.