2,865 research outputs found

    The Importance of Scale for Spatial-Confounding Bias and Precision of Spatial Regression Estimators

    Residuals in regression models are often spatially correlated. Prominent examples include studies in environmental epidemiology to understand the chronic health effects of pollutants. I consider the effects of residual spatial structure on the bias and precision of regression coefficients, developing a simple framework in which to understand the key issues and derive informative analytic results. When unmeasured confounding introduces spatial structure into the residuals, regression models with spatial random effects and closely related models such as kriging and penalized splines are biased, even when the residual variance components are known. Analytic and simulation results show how the bias depends on the spatial scales of the covariate and the residual: one can reduce bias by fitting a spatial model only when there is variation in the covariate at a scale smaller than the scale of the unmeasured confounding. I also discuss how the scales of the residual and the covariate affect efficiency and uncertainty estimation when the residuals are independent of the covariate. In an application on the association between black carbon particulate matter air pollution and birth weight, controlling for large-scale spatial variation appears to reduce bias from unmeasured confounders, while increasing uncertainty in the estimated pollution effect.
    Comment: Published at http://dx.doi.org/10.1214/10-STS326 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
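
    As a rough illustration of the scale argument (an assumed setup, not the paper's actual framework or data), the sketch below simulates a smooth, large-scale unmeasured confounder together with a covariate that varies at both a coarse and a fine scale, then compares the naive slope with one that adjusts for a smooth spatial trend; the polynomial basis is only a stand-in for splines or spatial random effects.

```python
# Minimal simulation sketch (assumed setup, not the paper's framework): a
# large-scale confounder biases the naive slope; adjusting for a smooth
# spatial trend recovers the effect because x also varies at a fine scale.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 500, 1.0
s = np.linspace(0.0, 1.0, n)                          # 1-D "locations"

confounder = np.sin(2 * np.pi * s)                    # smooth, large-scale confounder
x = 0.5 * np.sin(2 * np.pi * s) + rng.normal(scale=0.5, size=n)  # coarse + fine-scale variation
y = beta * x + 2.0 * confounder + rng.normal(scale=0.5, size=n)

# Naive OLS slope of y on x (ignores the spatial structure)
naive_slope = np.polyfit(x, y, 1)[0]

# Crude spatial adjustment: add a smooth basis in s (a stand-in for splines or
# spatial random effects) and re-estimate the x coefficient
smooth_basis = np.vander(s, 6, increasing=True)
X = np.column_stack([x, smooth_basis])
adjusted_slope = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"true beta = {beta}, naive = {naive_slope:.2f}, adjusted = {adjusted_slope:.2f}")
```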

    Experimental Design for Sensitivity Analysis, Optimization and Validation of Simulation Models

    This chapter gives a survey on the use of statistical designs for what-if analysis in simulation, including sensitivity analysis, optimization, and validation/verification. Sensitivity analysis is divided into two phases. The first phase is a pilot stage, which consists of screening or searching for the important factors among (say) hundreds of potentially important factors. A novel screening technique is presented, namely sequential bifurcation. The second phase uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel or a response surface. Regression analysis gives better results when the simulation experiment is well designed, using either classical statistical designs (such as fractional factorials) or optimal designs (such as those pioneered by Fedorov, Kiefer, and Wolfowitz). To optimize the simulated system, the analysts may apply Response Surface Methodology (RSM); RSM combines regression analysis, statistical designs, and steepest-ascent hill-climbing. To validate a simulation model, again regression analysis and statistical designs may be applied. Several numerical examples and case studies illustrate how statistical techniques can reduce the ad hoc character of simulation; that is, these statistical techniques can make simulation studies give more general results, in less time. Appendix 1 summarizes confidence intervals for expected values, proportions, and quantiles, in terminating and steady-state simulations. Appendix 2 gives details on four variance reduction techniques, namely common pseudorandom numbers, antithetic numbers, control variates or regression sampling, and importance sampling. Appendix 3 describes jackknifing, which may give robust confidence intervals.
    Keywords: least squares; distribution-free; non-parametric; stopping rule; run-length; Von Neumann; median; seed; likelihood ratio
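
    To make the screening phase concrete, here is a toy sketch of sequential bifurcation under the usual assumptions of a first-order metamodel and known-sign, non-negative main effects (so that group effects cannot cancel); the factor count, effect sizes, threshold, and the deterministic stand-in for the simulation model are all illustrative, not taken from the chapter.

```python
# Toy sketch of sequential bifurcation (screening phase), not the chapter's exact
# procedure. Assumes a first-order model with known-sign, non-negative main
# effects so that group effects cannot cancel; all names and numbers are illustrative.
import numpy as np

k = 16
true_effects = np.zeros(k)
true_effects[[2, 9, 13]] = [8.0, 5.0, 12.0]        # only three important factors

def simulate(high_factors):
    """Deterministic toy 'simulation': factors in high_factors at +1, the rest at -1."""
    x = np.where(np.isin(np.arange(k), list(high_factors)), 1.0, -1.0)
    return 50.0 + true_effects @ x

def y_upto(j):
    """Output with factors 0..j-1 at their high level (the cumulative runs of SB)."""
    return simulate(range(j))

def bifurcate(lo, hi, found, threshold=2.0):
    """Recursively isolate the factors in lo..hi-1 whose aggregated effect is important."""
    group_effect = (y_upto(hi) - y_upto(lo)) / 2.0   # sum of main effects in the group
    if group_effect <= threshold:
        return                                       # whole group declared unimportant
    if hi - lo == 1:
        found.append(lo)                             # single important factor identified
        return
    mid = (lo + hi) // 2
    bifurcate(lo, mid, found, threshold)
    bifurcate(mid, hi, found, threshold)

important = []
bifurcate(0, k, important)
print("factors screened as important:", important)   # expected: [2, 9, 13]
```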

    Experimental Design for Sensitivity Analysis of Simulation Models

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel, response surface, compact model, emulator, etc. Regression analysis gives better results when the simulation experiment is well designed, using classical statistical designs (such as fractional factorials, including 2^(k-p) designs). These statistical techniques reduce the ad hoc character of simulation; that is, these techniques can make simulation studies give more general results, in less time.
    Keywords: experimental design; simulation models; sensitivity analysis; regression analysis
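
    As a minimal illustration of the design-plus-metamodel workflow described above (with illustrative generators and a toy response, not an example from the tutorial), the sketch below builds a 2^(7-4) fractional factorial in coded -1/+1 factors and fits a first-order regression metamodel to the simulated output.

```python
# Minimal sketch of a 2^(7-4) fractional factorial plus a first-order regression
# metamodel; the generators, the toy response, and all numbers are illustrative.
import itertools
import numpy as np

# Full 2^3 factorial in factors A, B, C (coded -1/+1): 8 runs
base = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
A, B, C = base.T

# Generate the remaining four columns from the classical generators D=AB, E=AC, F=BC, G=ABC
design = np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])

# Toy simulation response: first-order in the seven factors plus a little noise
rng = np.random.default_rng(2)
true_beta = np.array([3.0, 0.0, -2.0, 0.0, 1.5, 0.0, 0.0])
y = 10.0 + design @ true_beta + rng.normal(scale=0.1, size=len(design))

# First-order regression metamodel: y ~ b0 + sum_j b_j * x_j
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated main effects:", np.round(coef[1:], 2))
```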

    FDI and Taxation: A Meta-Study

    Despite the continuing political interest in the usefulness of tax competition and tax coordination, as well as the wealth of theoretical analyses, it still remains open whether or when tax competition is harmful. Moreover, the influence of tax differentials on multinationals' decisions is still insufficiently analyzed. Thus, economists have increasingly resorted to empirical analysis in order to gain insights on the elasticity of FDI with respect to company taxation. As a result, the empirical literature on taxation and international capital flows has grown over the last 25 years to an abundance similar to that of the theoretical literature. Its heterogeneity creates a rising need for concise reviews of the existing empirical evidence. In this paper we extend former meta-analyses on FDI and taxation in three ways. First, we add the most recent publications not yet considered in existing meta-analyses. Second, we apply a different methodology, using a broad set of meta-regression estimators and explicitly discussing which one is most suitable for application to our meta-data. Third, we address some important issues in research on FDI and taxation that meta-analysis can help clarify. These issues are mainly: the influence of variables that might moderate the effects of tax differentials (e.g. public spending), the implications of using aggregate FDI data as opposed to firm-level information for measured tax effects, the implications of bilateral effective tax rates, and the possible presence of publication bias in primary research.
    Keywords: corporate income taxation; foreign direct investment; meta-analysis
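
    The following sketch shows one generic precision-weighted meta-regression of the FAT-PET type, in the spirit of the estimator families discussed here but not necessarily the paper's specification; the moderator, the coefficients, and all data are simulated purely for illustration and are not results from the paper.

```python
# Generic FAT-PET style precision-weighted meta-regression sketch: regress reported
# effect sizes on their standard errors (funnel-asymmetry / publication-bias term)
# plus a moderator, weighting by inverse variance. All data are simulated.
import numpy as np

rng = np.random.default_rng(3)
m = 80                                                   # number of primary estimates
se = rng.uniform(0.2, 1.5, size=m)                       # reported standard errors
firm_level = rng.integers(0, 2, size=m).astype(float)    # moderator: firm-level vs aggregate data
true_effect, funnel_asym = -1.5, 0.8                     # "genuine" effect and bias term
effect = true_effect + funnel_asym * se + 0.4 * firm_level + rng.normal(scale=se)

# WLS meta-regression: effect_i = b0 + b1*se_i + b2*moderator_i + e_i, weights 1/se_i^2
X = np.column_stack([np.ones(m), se, firm_level])
w_sqrt = 1.0 / se
beta_hat, *_ = np.linalg.lstsq(X * w_sqrt[:, None], effect * w_sqrt, rcond=None)
print("effect beyond bias (PET):", round(beta_hat[0], 2),
      "| funnel asymmetry (FAT):", round(beta_hat[1], 2))
```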

    Simulation Experiments in Practice: Statistical Design and Regression Analysis

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independently distributed with a constant variance; moreover, the regression (meta)model of the simulation model's I/O behaviour is assumed to have residuals with zero means. This article addresses the following practical questions: (i) How realistic are these assumptions, in practice? (ii) How can these assumptions be tested? (iii) If assumptions are violated, can the simulation's I/O data be transformed such that the assumptions do hold? (iv) If not, which alternative statistical methods can then be applied?
    Keywords: metamodel; experimental design; jackknife; bootstrap; common random numbers; validation
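
    As one concrete answer to question (iv), the sketch below computes a distribution-free bootstrap confidence interval for a metamodel coefficient when the normality assumption is doubtful; the design, the skewed residuals, and the replication count are illustrative choices, not taken from the article.

```python
# Sketch of one distribution-free alternative: a case-resampling bootstrap
# confidence interval for a regression metamodel coefficient. Toy data only.
import numpy as np

rng = np.random.default_rng(4)
n = 40
x = rng.uniform(-1.0, 1.0, size=(n, 2))              # two coded design factors
noise = rng.exponential(1.0, size=n) - 1.0            # skewed, zero-mean residuals
y = 5.0 + 2.0 * x[:, 0] - 1.0 * x[:, 1] + noise

def ols(xmat, yvec):
    X = np.column_stack([np.ones(len(yvec)), xmat])
    return np.linalg.lstsq(X, yvec, rcond=None)[0]

# Case-resampling bootstrap for the coefficient of the first factor
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols(x[idx], y[idx])[1]

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for beta_1: [{lo:.2f}, {hi:.2f}]")
```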

    Regulation and Measuring Cost Efficiency with Panel Data Models: Application to Electricity Distribution Utilities

    This paper examines the application of different parametric methods to measure the cost efficiency of electricity distribution utilities. The cost frontier model is estimated using four methods: Displaced Ordinary Least Squares, Fixed Effects (FE), Random Effects and Maximum Likelihood Estimation. These methods are applied to a sample of 59 distribution utilities in Switzerland. The data consist of an unbalanced panel over a nine-year period from 1988 to 1996. Different specifications are compared with regard to the estimation of cost frontier characteristics and inefficiency scores. The results point to some advantages of the FE model in the estimation of the cost function's characteristics. The mutual consistency of the different methods with regard to efficiency measures is analyzed; the results are mixed. The summary statistics of the inefficiency estimates are not sensitive to the specification. However, the ranking changes significantly from one model to another. In particular, the least and most efficient companies are not identical across the different methods. These results suggest that a valid benchmarking analysis should be applied with special care. It is recommended that the regulator use several specifications and perform a (mutual) consistency analysis. Finally, the out-of-sample prediction errors of the different models are analyzed. The results suggest that benchmarking methods can be used as a control instrument in order to narrow the information gap between the regulator and regulated companies.
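
    A rough sketch of the Fixed Effects route is given below: a within estimator of a panel cost function with inefficiency scores read off the firm effects relative to the best-practice firm (a Schmidt-and-Sickles-style construction, which may differ from the paper's exact specification); the data and variable names are purely illustrative and not the Swiss utility sample.

```python
# Within (fixed effects) estimator of a toy panel cost function, with inefficiency
# scores taken from the firm effects relative to the cheapest firm. Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n_firms, n_years = 10, 9
firm = np.repeat(np.arange(n_firms), n_years)
log_output = rng.normal(5.0, 1.0, size=n_firms * n_years)
alpha = rng.uniform(0.0, 0.5, size=n_firms)                  # firm-specific cost inefficiency
log_cost = 1.0 + 0.8 * log_output + alpha[firm] + rng.normal(scale=0.05, size=len(firm))

def demean_by_firm(v):
    means = np.array([v[firm == i].mean() for i in range(n_firms)])
    return v - means[firm]

# Within transformation, then OLS on the demeaned data
beta_hat = np.linalg.lstsq(demean_by_firm(log_output)[:, None],
                           demean_by_firm(log_cost), rcond=None)[0][0]

# Firm effects and inefficiency relative to the best-practice (cheapest) firm
alpha_hat = np.array([(log_cost - beta_hat * log_output)[firm == i].mean()
                      for i in range(n_firms)])
inefficiency = alpha_hat - alpha_hat.min()
print("estimated cost elasticity:", round(beta_hat, 3))
print("inefficiency scores:", np.round(inefficiency, 3))
```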

    Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    Get PDF
    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the fixed coefficients model and the random coefficients model. This survey should prove useful for researchers in this area.
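
    For reference, the two extensions surveyed are usually written as follows (standard textbook notation, which may differ from the paper's): y_t is the N-vector of observations in period t, X_t the matrix of regressors, W an N x N spatial weights matrix, and mu the vector of individual effects, fixed or random.

```latex
\begin{align*}
\text{spatially lagged dependent variable:} \quad
  & y_t = \delta W y_t + X_t \beta + \mu + \varepsilon_t, \\
\text{spatial error autocorrelation:} \quad
  & y_t = X_t \beta + \mu + u_t, \qquad u_t = \rho W u_t + \varepsilon_t.
\end{align*}
```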