2 research outputs found

    Ranking and Selection: A New Sequential Bayesian Procedure for Use with Common Random Numbers

    We want to select the best systems out of a given set of systems (or rank them) with respect to their expected performance. Only random observations of the systems are available, and we assume that the joint observation of the systems has a multivariate normal distribution with unknown mean and covariance. We allow dependent marginal observations, as they occur when common random numbers are used for the simulation of the systems. In particular, we focus on positively dependent observations, as might be expected in heuristic optimization where 'systems' are different solutions to an optimization problem with common random inputs. In each iteration, we allocate a fixed budget of simulation runs to the solutions. We use a Bayesian setup and allocate the simulation effort according to the posterior covariances of the solutions until the ranking and selection decision is correct with a given high probability. Here, the complex posterior distributions are only approximated, but we give extensive empirical evidence that the observed error probabilities are well below the given bounds in most cases. We also use a generalized scheme for the target of the ranking and selection that allows the error probabilities to be bounded with a Bonferroni approach. Our test results show that our procedure uses fewer simulations than comparable procedures from the literature, even in most of the cases where the observations are not positively correlated.
    Comment: 28 pages, 11 figures; extended discussion of literature, improved arguments in Section 2.5 on approximate distribution, extended empirical comparison
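    For illustration only, here is a minimal sketch of the kind of loop this abstract describes: all systems are simulated jointly with common random numbers, a simple normal approximation to the posterior of the mean differences stands in for the paper's approximate posterior, and a Bonferroni bound on the probability of incorrect selection serves as the stopping rule. The per-iteration allocation here is uniform rather than guided by the posterior covariances, and the test problem, constants, and names are all assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical test problem: k systems whose joint observations share a
# common random input, which induces positive correlation (the CRN effect).
k = 5
true_mean = np.array([0.0, 0.2, 0.4, 0.6, 1.0])

def simulate_batch(n):
    """Draw n joint observations of all k systems with common random numbers."""
    common = rng.normal(size=(n, 1))        # shared noise -> positive correlation
    idiosyncratic = rng.normal(size=(n, k))
    return true_mean + 1.5 * common + idiosyncratic

alpha = 0.05   # target bound on the probability of incorrect selection
batch = 20     # fixed simulation budget per iteration
obs = simulate_batch(batch)

while True:
    n = obs.shape[0]
    xbar = obs.mean(axis=0)
    S = np.cov(obs, rowvar=False)           # sample covariance of the joint observations
    best = int(np.argmax(xbar))             # current best under the posterior mean

    # Normal approximation to the posterior of each difference mu_best - mu_i;
    # the CRN dependence enters through the covariance term -2 * S[best, i].
    errors = []
    for i in range(k):
        if i == best:
            continue
        var_diff = (S[best, best] + S[i, i] - 2.0 * S[best, i]) / n
        errors.append(norm.cdf(-(xbar[best] - xbar[i]) / np.sqrt(var_diff)))

    # Bonferroni bound on the probability of incorrect selection.
    if sum(errors) <= alpha:
        break
    obs = np.vstack([obs, simulate_batch(batch)])

print(f"selected system {best} after {obs.shape[0]} joint simulation runs")
```

    Because the common random input makes the off-diagonal covariances positive, the variance of each mean difference shrinks, which is why common random numbers allow such a procedure to stop after fewer simulation runs.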

    Lookahead and Hybrid Sample Allocation Procedures for Multiple Attribute Selection Decisions

    Attributes provide critical information about the alternatives that a decision-maker is considering. When their magnitudes are uncertain, the decision-maker may be unsure about which alternative is truly the best, so measuring the attributes may help the decision-maker make a better decision. This paper considers settings in which each measurement yields one sample of one attribute for one alternative. When given a fixed number of samples to collect, the decision-maker must determine which samples to obtain, make the measurements, update prior beliefs about the attribute magnitudes, and then select an alternative. This paper presents the sample allocation problem for multiple attribute selection decisions and proposes two sequential, lookahead procedures for the case in which discrete distributions are used to model the uncertain attribute magnitudes. The two procedures are similar but reflect different quality measures (and loss functions), which motivate different decision rules: (1) select the alternative with the greatest expected utility and (2) select the alternative that is most likely to be the truly best alternative. We conducted a simulation study to evaluate the performance of the sequential procedures and of hybrid procedures that first allocate some samples using a uniform allocation procedure and then use the sequential, lookahead procedure. The results indicate that the hybrid procedures are effective; allocating many (but not all) of the initial samples with the uniform allocation procedure not only reduces overall computational effort but also selects alternatives that have lower average opportunity cost and are more often truly best.
    Comment: Pages: 49. Figures:
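    As an illustration of the hybrid idea only (not the paper's procedures), the sketch below models each uncertain attribute with a Dirichlet-categorical belief over a few discrete levels, spends the first part of a fixed sampling budget uniformly, and then allocates the rest with a one-step lookahead under the "greatest expected utility" decision rule: it measures the (alternative, attribute) pair whose next sample maximizes the expected value of the subsequent selection. The alternatives, levels, weights, and budget figures are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n_alt alternatives, n_attr attributes, each attribute takes
# one of the discrete levels below; utility is a weighted sum of attribute values.
n_alt, n_attr = 4, 3
levels = np.array([0.0, 0.5, 1.0])
weights = np.array([0.5, 0.3, 0.2])
true_p = rng.dirichlet(np.ones(len(levels)), size=(n_alt, n_attr))  # hidden truth

def sample_attr(a, j):
    """One measurement: a discrete level of attribute j for alternative a."""
    return rng.choice(len(levels), p=true_p[a, j])

# Dirichlet-categorical beliefs, one Dirichlet per (alternative, attribute).
counts = np.ones((n_alt, n_attr, len(levels)))   # uniform prior pseudo-counts

def expected_utilities(c):
    probs = c / c.sum(axis=2, keepdims=True)     # posterior mean of each categorical
    attr_means = probs @ levels                  # E[attribute value], shape (n_alt, n_attr)
    return attr_means @ weights

def lookahead_score(a, j):
    """Expected utility of the best alternative after one more sample of (a, j)."""
    pred = counts[a, j] / counts[a, j].sum()     # predictive distribution of the next level
    score = 0.0
    for ell, p_ell in enumerate(pred):
        c_new = counts.copy()
        c_new[a, j, ell] += 1
        score += p_ell * expected_utilities(c_new).max()
    return score

budget, uniform_fraction = 60, 0.5
pairs = [(a, j) for a in range(n_alt) for j in range(n_attr)]

# Hybrid allocation: spend part of the budget uniformly, the rest with the lookahead rule.
for t in range(budget):
    if t < uniform_fraction * budget:
        a, j = pairs[t % len(pairs)]                           # round-robin uniform phase
    else:
        a, j = max(pairs, key=lambda aj: lookahead_score(*aj)) # one-step lookahead phase
    counts[a, j, sample_attr(a, j)] += 1

chosen = int(np.argmax(expected_utilities(counts)))
print(f"selected alternative {chosen} with expected utility "
      f"{expected_utilities(counts)[chosen]:.3f}")
```

    The split between the two phases is controlled here by the assumed uniform_fraction parameter; the abstract's finding that allocating many (but not all) of the initial samples uniformly works well corresponds to choosing a fairly large value before switching to the lookahead rule.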