
    Robust Standard Errors for Robust Estimators

    A regression estimator is said to be robust if it remains reliable in the presence of outliers. Its standard error, on the other hand, is said to be robust if it remains reliable when the regression errors are autocorrelated and/or heteroskedastic. This paper shows how robust standard errors can be computed for several robust estimators of regression, including MM-estimators. The improvement relative to non-robust standard errors is illustrated by means of large-sample bias calculations, simulations, and a real data example. It turns out that non-robust standard errors of robust estimators may be severely biased. However, if autocorrelation and heteroskedasticity are absent, non-robust standard errors are more efficient than the robust standard errors that we propose. We therefore also present a test of the hypothesis that the robust and non-robust standard errors have the same probability limit.
    Keywords: robust regression, robust standard errors, autocorrelation, heteroskedasticity
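    The idea can be illustrated with a sandwich ("HAC") variance built from an M-estimator's estimating-equation scores. The sketch below is a minimal illustration, not the paper's exact estimator: the Huber psi, the Bartlett kernel, the lag length, and the simulated data are all my own choices.

```python
# Sketch: HAC standard errors for an M-estimator of regression, compared
# with the conventional (iid-based) standard errors from statsmodels' RLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors

fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
beta, s = fit.params, fit.scale
r = (y - X @ beta) / s                      # standardized residuals

c = 1.345                                   # Huber tuning constant
psi = np.clip(r, -c, c)                     # psi(r)
dpsi = (np.abs(r) <= c).astype(float)       # psi'(r)

scores = X * psi[:, None]                   # estimating-equation scores
A = (X * dpsi[:, None]).T @ X / (n * s)     # "bread" of the sandwich

L = 4                                       # HAC truncation lag
S = scores.T @ scores / n
for l in range(1, L + 1):
    w = 1 - l / (L + 1)                     # Bartlett kernel weight
    G = scores[l:].T @ scores[:-l] / n
    S += w * (G + G.T)                      # "meat": HAC covariance of scores

V = np.linalg.inv(A) @ S @ np.linalg.inv(A).T / n
print("HAC-robust SEs:", np.sqrt(np.diag(V)))
print("conventional SEs:", fit.bse)
```

    With independent errors the two sets of standard errors should be close; under autocorrelated or heteroskedastic errors the sandwich version corrects the bias that the abstract describes.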

    When Should You Adjust Standard Errors for Clustering?

    In empirical work in economics, it is common to report standard errors that account for clustering of units. Typically, the motivation given for the clustering adjustment is that unobserved components in outcomes for units within clusters are correlated. However, because correlation may occur across more than one dimension, this motivation makes it difficult to justify why researchers cluster on some dimensions, such as geography, but not others, such as age cohorts or gender. It also makes it difficult to explain why one should not cluster with data from a randomized experiment. In this paper, we argue that clustering is in essence a design problem, either a sampling design or an experimental design issue. It is a sampling design issue if sampling follows a two-stage process in which, in the first stage, a subset of clusters is sampled randomly from a population of clusters, and in the second stage, units are sampled randomly from the sampled clusters. In this case, the clustering adjustment is justified by the fact that there are clusters in the population that we do not see in the sample. Clustering is an experimental design issue if the assignment is correlated within the clusters. We take the view that this second perspective best fits the typical setting in economics where clustering adjustments are used. This perspective allows us to shed new light on three questions: (i) when should one adjust the standard errors for clustering, (ii) when is the conventional adjustment for clustering appropriate, and (iii) when does the conventional adjustment of the standard errors matter?
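    The conventional clustering adjustment the abstract refers to is one option away in most regression packages. A minimal sketch using statsmodels, with invented data in which both the regressor and the error share a cluster-level component:

```python
# Sketch: conventional vs. cluster-robust standard errors in statsmodels.
# The data, cluster structure, and effect sizes are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
G, m = 50, 20                               # 50 clusters of 20 units each
g = np.repeat(np.arange(G), m)              # cluster labels
u = rng.normal(size=G)[g]                   # cluster-level error component
x = rng.normal(size=G)[g] + rng.normal(size=G * m)  # x correlated within cluster
y = 1.0 + 0.5 * x + u + rng.normal(size=G * m)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                    # iid-based standard errors
cl = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": g})

print("conventional SE:", ols.bse[1])
print("cluster-robust SE:", cl.bse[1])      # noticeably larger in this design
```

    On the paper's design-based view, whether the larger clustered standard error is the right one depends on how the clusters were sampled and how assignment was generated, not merely on the presence of within-cluster correlation.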

    Standard error and confidence interval for QALY weights

    There are some problems with the standard errors for QALY weights proposed by Groot (2000, Journal of Health Economics 19). When we recalculate using his method, the standard errors are smaller than those Groot reports. Moreover, we correct the derivation of his approximation and derive corrected values. Because the mean and variance of a distribution of QALY weights do not exist, using standard errors for statistical inference may lead to problems even when an approximation is used. In this paper, we verify the statistical properties of Groot's standard errors by simulation. We find that the corrected standard errors share the properties of a normal distribution under specific conditions. In general, however, it is more appropriate to use our simulation method to obtain critical values or p-values.
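    A minimal sketch of the general simulation approach the authors recommend, using a generic ratio-of-means estimator as a stand-in for a QALY weight (a ratio is the kind of quantity that can lack finite moments, which is the abstract's point); all distributions and numbers are invented:

```python
# Sketch: when a normal approximation is unreliable, simulate the sampling
# distribution of the estimator and read off percentile critical values
# and a simulation p-value instead of relying on a standard error.
import numpy as np

rng = np.random.default_rng(2)

def ratio_estimate(rng, n=100, mu_num=2.0, mu_den=0.5):
    """One simulated draw of a ratio-of-means estimator."""
    num = rng.normal(mu_num, 1.0, n).mean()
    den = rng.normal(mu_den, 1.0, n).mean()
    return num / den

draws = np.array([ratio_estimate(rng) for _ in range(10_000)])

lo, hi = np.percentile(draws, [2.5, 97.5])  # simulated 95% critical values
print(f"95% interval from simulation: ({lo:.3f}, {hi:.3f})")

theta0 = 3.0                                # hypothesized value to test
p = 2 * min(np.mean(draws >= theta0), np.mean(draws <= theta0))
print(f"two-sided simulation p-value for theta0={theta0}: {p:.3f}")
```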

    Estimating Standard Errors For The Parks Model: Can Jackknifing Help?

    Non-spherical errors, namely heteroscedasticity, serial correlation, and cross-sectional correlation, are commonly present in panel data sets and can cause significant problems for econometric analyses. The FGLS(Parks) estimator has been demonstrated to produce considerable efficiency gains in these settings. However, it suffers from underestimation of coefficient standard errors, which is oftentimes severe. Potentially, jackknifing the FGLS(Parks) estimator could allow one to maintain its efficiency advantages while producing more reliable estimates of coefficient standard errors. Accordingly, this study investigates the performance of the jackknife estimator of FGLS(Parks) using Monte Carlo experimentation. We find that, in narrowly defined situations, jackknifing can substantially improve the estimation of coefficient standard errors. However, its overall performance is not sufficient to make it a viable alternative to other panel data estimators.
    Keywords: panel data estimation; Parks model; cross-sectional correlation; jackknife; Monte Carlo
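    The jackknife recipe itself is generic. Below is a minimal sketch of a delete-one-cross-section jackknife; pooled OLS stands in for FGLS(Parks) to keep the example short, and the data are simulated:

```python
# Sketch: delete-one-cross-section jackknife standard errors for a panel
# estimator. Replace `estimate` with the estimator of interest (here a
# simple pooled OLS stands in for FGLS(Parks)).
import numpy as np

rng = np.random.default_rng(3)
N, T, k = 10, 30, 2                         # 10 cross-sections, 30 periods
X = rng.normal(size=(N, T, k))
beta = np.array([1.0, -0.5])
y = X @ beta + rng.normal(size=(N, T))

def estimate(X, y):
    """Pooled OLS over the included cross-sections."""
    Xf = X.reshape(-1, X.shape[-1])
    return np.linalg.lstsq(Xf, y.ravel(), rcond=None)[0]

full = estimate(X, y)
leave_one_out = np.array([
    estimate(np.delete(X, i, axis=0), np.delete(y, i, axis=0))
    for i in range(N)
])

pseudo = N * full - (N - 1) * leave_one_out  # jackknife pseudo-values
se = pseudo.std(axis=0, ddof=1) / np.sqrt(N)
print("coefficients:", full)
print("jackknife SEs:", se)
```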

    Discovery reach for non-standard interactions in a neutrino factory

    We study the discovery reach for Non-Standard Interactions (NSIs) in a neutrino factory experiment. After giving a theoretical, but model-independent, overview of the most relevant classes of NSIs, we present detailed numerical results for some of them. Our simulations take into account matter effects, uncertainties in the neutrino oscillation parameters, systematic errors, parameter correlations, and degeneracies. We perform scans of the parameter space and show that a neutrino factory has excellent prospects of detecting NSIs originating from new physics at around 1 TeV, a scale favored by many extensions of the Standard Model. It also turns out that the discovery reach depends strongly on the standard and non-standard CP-violating phases in the Lagrangian.
    Comment: RevTeX 4, 10 pages, 5 figures; extended discussion of systematic errors and of existing bounds; matches published version
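    In caricature, a discovery-reach scan amounts to finding the smallest NSI parameter whose predicted event spectrum is statistically distinguishable from the standard-oscillation prediction. The toy sketch below invents an event-rate model purely for illustration; a real analysis would use a full oscillation code with matter effects, parameter correlations, and degeneracies:

```python
# Toy sketch of a discovery-reach scan: find the smallest value of a
# hypothetical NSI parameter `eps` whose predicted event counts differ
# from the standard prediction by more than 3 sigma. The event-rate
# model is invented and stands in for a full oscillation calculation.
import numpy as np

bins = np.linspace(2.0, 20.0, 10)           # toy neutrino energies (GeV)

def expected_events(eps):
    """Invented event-rate model: NSI shifts the rate linearly in eps."""
    return 100.0 / bins * (1.0 + eps * np.sin(bins / 3.0))

def delta_chi2(eps):
    n0, n1 = expected_events(0.0), expected_events(eps)
    return np.sum((n1 - n0) ** 2 / n0)      # Poisson chi-square, Asimov-style

eps_grid = np.linspace(0.0, 0.5, 501)
chi2 = np.array([delta_chi2(e) for e in eps_grid])
reach = eps_grid[np.argmax(chi2 > 9.0)]     # first eps beyond 3 sigma (1 dof)
print(f"toy 3-sigma discovery reach: eps > {reach:.3f}")
```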