4,452 research outputs found

    Applications of stochastic simulation in two-stage multiple comparisons with the best problem and time average variance constant estimation

    In this dissertation, we study two problems. In the first part, we consider two-stage methods for comparing alternatives using simulation. Suppose there are a finite number of alternatives to compare, each with an unknown parameter that is the basis for comparison. The parameters are estimated using simulation, with the alternatives simulated independently. We develop two-stage selection and multiple-comparison procedures for simulations under a general framework. The assumptions are that each alternative has a parameter-estimation process satisfying a random-time-change central limit theorem (CLT), and that there is a weakly consistent variance estimator (WCVE) for the variance constant appearing in the CLT. The framework encompasses comparing means of independent populations, functions of means, and steady-state means. One problem we consider, which is of considerable practical interest and not handled in previous work on two-stage multiple-comparison procedures, is comparing quantiles of alternative populations. We establish the asymptotic validity of our procedures as the prescribed width of the confidence intervals or the indifference-zone parameter shrinks to zero. Also, for the steady-state simulation context, we compare our procedures based on WCVEs with techniques that instead use standardized time series methods. In the second part, we propose a new technique for estimating the variance parameter of a wide variety of stochastic processes. This new technique improves on existing techniques for some standard stochastic processes in terms of bias and variance properties, since it reduces bias with no significant increase in variance.
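    The general framework assumes only a CLT and a WCVE, but its two-stage structure mirrors the classical Stein-type recipe for a single mean: a pilot stage estimates the variance, and that estimate sets how much further sampling a fixed-width confidence interval requires. The sketch below illustrates that recipe under plain i.i.d. normal-theory assumptions; the function names and constants are illustrative, not the dissertation's procedures.

```python
import math

import numpy as np
from scipy.stats import t


def two_stage_ci(sampler, n0=20, half_width=0.1, alpha=0.05, seed=None):
    """Stein-type two-stage fixed-width confidence interval for a mean.

    `sampler(k, rng)` must return k i.i.d. observations.  This is a
    textbook sketch, not the dissertation's general CLT/WCVE procedure.
    """
    rng = np.random.default_rng(seed)
    # Stage 1: pilot sample to estimate the variance.
    pilot = sampler(n0, rng)
    s2 = np.var(pilot, ddof=1)
    t_quant = t.ppf(1 - alpha / 2, df=n0 - 1)
    # Total sample size needed for a half-width of at most `half_width`.
    n_total = max(n0, math.ceil(t_quant ** 2 * s2 / half_width ** 2))
    # Stage 2: collect the remaining observations and report the interval.
    extra = sampler(n_total - n0, rng) if n_total > n0 else np.empty(0)
    mean = np.concatenate([pilot, extra]).mean()
    return mean - half_width, mean + half_width


# Example: fixed-width interval for the mean of an Exponential(1) population.
lo, hi = two_stage_ci(lambda k, rng: rng.exponential(1.0, size=k), seed=1)
```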

    Ranking and Selection under Input Uncertainty: Fixed Confidence and Fixed Budget

    In stochastic simulation, input uncertainty (IU) is caused by the error in estimating the input distributions using finite real-world data. When it comes to simulation-based Ranking and Selection (R&S), ignoring IU could lead to the failure of many existing selection procedures. In this paper, we study R&S under IU by allowing the possibility of acquiring additional data. Two classical R&S formulations are extended to account for IU: (i) for fixed confidence, we consider the setting where data arrive sequentially so that IU can be reduced over time; (ii) for fixed budget, a joint budget is assumed to be available for both collecting input data and running simulations. New procedures are proposed for each formulation using the frameworks of Sequential Elimination and Optimal Computing Budget Allocation, with theoretical guarantees provided accordingly (e.g., an upper bound on the expected running time and a finite-sample bound on the probability of false selection). Numerical results demonstrate the effectiveness of our procedures through a multi-stage production-inventory problem.
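    Under the fixed-budget formulation, the paper builds on Optimal Computing Budget Allocation (OCBA). As a point of reference, the sketch below implements the classical OCBA allocation rule for standard R&S without input uncertainty, under the usual normality assumptions; it is the textbook rule the paper extends, not the paper's IU-aware procedure, and all names are illustrative.

```python
import numpy as np


def ocba_allocation(means, variances, budget, best_is_max=True):
    """Classical OCBA split of a simulation budget across k designs.

    Textbook rule: N_i / N_j = (s_i / d_i)^2 / (s_j / d_j)^2 for
    non-best i, j, and N_b = s_b * sqrt(sum_{i != b} (N_i / s_i)^2),
    where d_i is the mean gap to the current best.  Assumes normal
    output and no exact ties with the best.
    """
    means = np.asarray(means, dtype=float)
    s = np.sqrt(np.asarray(variances, dtype=float))
    b = int(np.argmax(means) if best_is_max else np.argmin(means))
    gap = np.abs(means - means[b])
    gap[b] = np.inf  # excludes the best from the ratio below
    weights = (s / gap) ** 2  # unnormalized N_i for the non-best designs
    weights[b] = s[b] * np.sqrt(
        np.sum((weights / np.where(s > 0, s, 1.0)) ** 2))
    alloc = budget * weights / weights.sum()
    return np.maximum(1, np.round(alloc)).astype(int)


# Example: allocate 1000 replications across five simulated designs.
print(ocba_allocation([1.0, 1.2, 0.8, 1.19, 0.5],
                      [0.04, 0.09, 0.05, 0.10, 0.02], budget=1000))
```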

    Recent Developments in the Econometrics of Program Evaluation

    Many empirical questions in economics and other social sciences depend on causal effects of programs or policies. In the last two decades much research has been done on the econometric and statistical analysis of the effects of such programs or treatments. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization and other areas of empirical micro-economics. In this review we discuss some of the recent developments. We focus primarily on practical issues for empirical researchers, as well as providing a historical overview of the area and giving references to more technical research.
    Keywords: program evaluation, causality, unconfoundedness, Rubin Causal Model, potential outcomes, instrumental variables

    Testing the Correlated Random Coefficient Model

    The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline (pretreatment) levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. We develop new tests to gauge the empirical relevance of the correlated random coefficient model and to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains.
    Keywords: instrumental variables, testing, correlated random coefficient, power of tests based on IV
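    One testable implication of "no sorting on gains" is that valid instruments should all recover the same average return; under sorting, different instruments weight different compliers' gains, so the IV estimands diverge. The simulation below is a hypothetical illustration of that divergence with two binary instruments; it is not the paper's tests, and all quantities are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000


def iv_estimate(y, d, z):
    """Wald (just-identified IV) estimate with a binary instrument z."""
    return ((y[z == 1].mean() - y[z == 0].mean())
            / (d[z == 1].mean() - d[z == 0].mean()))


# Heterogeneous gains beta_i; agents sort into treatment on their gain.
beta = rng.normal(1.0, 1.0, n)
z1 = rng.integers(0, 2, n)  # two independent binary instruments
z2 = rng.integers(0, 2, n)
cost = rng.normal(0.0, 1.0, n) - 0.5 * z1 - 1.5 * z2
d = (beta > cost).astype(float)  # selection on the unobserved gain beta_i
y = 0.5 + beta * d + rng.normal(0.0, 1.0, n)

# Under sorting on gains, each instrument recovers a differently weighted
# average of beta_i, so the two estimates disagree; absent sorting
# (d independent of beta given the instruments) they would coincide.
print(iv_estimate(y, d, z1), iv_estimate(y, d, z2))
```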

    Robust MM-Estimation and Inference in Mixed Linear Models

    Mixed linear models are used to analyse data in many settings. These models generally rely on the normality assumption and are often fitted by means of the maximum likelihood estimator (MLE) or the restricted maximum likelihood estimator (REML). However, the sensitivity of these estimation techniques and related tests to this underlying assumption has been identified as a weakness that can even lead to wrong interpretations. Recently, Copt and Victoria-Feser (2005) proposed a high-breakdown estimator, namely an S-estimator, for general mixed linear models. It has the advantage of being easy to compute, even for highly structured variance matrices, and allows the computation of a robust score test. However, this proposal cannot be used to define a likelihood ratio type test, which is certainly the most direct route to robustifying an F-test. As the latter is usually a key tool for testing hypotheses in mixed linear models, we propose two new robust estimators that allow the desired extension. They also lead to resistant Wald-type tests useful for testing contrasts and covariate effects. We study their properties theoretically and by means of simulations. An analysis of a real data set illustrates the advantage of the new approach in the presence of outlying observations.
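    The paper's MM-estimators for full mixed linear models are too involved for a short sketch, but the bounded-influence idea behind them shows up already in the simplest robust M-estimation problem: a Huber location estimate computed by iteratively reweighted least squares. The code below is that illustrative analogy only, with assumed function names; it is not the paper's estimator.

```python
import numpy as np


def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares.

    Residuals beyond c (on a MAD scale) are down-weighted, which bounds
    the influence of outliers; MM-estimation applies the same idea, with
    a redescending loss, to far richer models than a single location.
    """
    mu = np.median(y)
    scale = 1.4826 * np.median(np.abs(y - mu))  # robust MAD scale estimate
    for _ in range(max_iter):
        r = (y - mu) / scale
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu


# Two gross outliers pull the sample mean but barely move the M-estimate.
y = np.concatenate([np.random.default_rng(1).normal(0.0, 1.0, 100),
                    [50.0, 60.0]])
print(y.mean(), huber_location(y))
```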

    Advances in ranking and selection: variance estimation and constraints

    In this thesis, we first show that the performance of ranking and selection (R&S) procedures in steady-state simulations depends highly on the quality of the variance estimates that are used. We study the performance of R&S procedures using three variance estimators (overlapping area, overlapping Cramér-von Mises, and overlapping modified jackknifed Durbin-Watson) that show better long-run performance than other estimators previously used in conjunction with R&S procedures for steady-state simulations. We devote additional study to the development of the new overlapping modified jackknifed Durbin-Watson estimator and demonstrate some of its useful properties. Next, we consider the problem of finding the best simulated system under a primary performance measure while also satisfying stochastic constraints on secondary performance measures, known as constrained ranking and selection. We first present a new framework that allows certain systems to become dormant, halting sampling for those systems as the procedure continues. We also develop general procedures for constrained R&S that guarantee a nominal probability of correct selection, under any number of constraints and correlation across systems. In addition, we address new topics critical to the efficiency of these procedures, namely the allocation of error between feasibility check and selection, the use of common random numbers, and the cost of switching between simulated systems.
    Ph.D. Committee Co-chairs: Sigrun Andradottir, Dave Goldsman, and Seong-Hee Kim; Committee Members: Shabbir Ahmed and Brani Vidakovic
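    All three estimators target the steady-state variance parameter sigma^2 = lim_n n Var(Y_bar_n) using overlapping batches of the output series. As a point of reference, the sketch below implements the standard overlapping batch means (OBM) estimator of that same constant; the thesis's overlapping area and jackknifed Durbin-Watson estimators are refinements of this idea and are not reproduced here.

```python
import numpy as np


def obm_variance(y, m):
    """Overlapping batch means (OBM) estimate of the variance parameter
    sigma^2 = lim_n n * Var(mean(Y_1..Y_n)) of a stationary sequence.

    Standard OBM estimator; the thesis's overlapping area and jackknifed
    Durbin-Watson estimators target the same constant.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.concatenate([[0.0], np.cumsum(y)])
    bmeans = (csum[m:] - csum[:-m]) / m  # all n - m + 1 overlapping batches
    return n * m * np.sum((bmeans - y.mean()) ** 2) / ((n - m) * (n - m + 1))


# Example: AR(1) with phi = 0.7 has sigma^2 = 1 / (1 - phi)^2 = 11.1.
rng = np.random.default_rng(2)
phi, n = 0.7, 100_000
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
print(obm_variance(x, m=1000))  # should land near 11.1
```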