A Note on Minimax Testing and Confidence Intervals in Moment Inequality Models
This note uses a simple example to show how moment inequality models used in
the empirical economics literature lead to general minimax relative efficiency
comparisons. The main point is that such models involve inference on a low
dimensional parameter, which leads naturally to a definition of "distance"
that, in full generality, would be arbitrary in minimax testing problems. This
definition of distance is justified by the fact that it leads to a duality
between minimaxity of confidence intervals and tests, which does not hold for
other definitions of distance. Thus, the use of moment inequalities for
inference in a low dimensional parametric model places additional structure on
the testing problem, which leads to stronger conclusions regarding minimax
relative efficiency than would otherwise be possible.
Sensitivity Analysis using Approximate Moment Condition Models
We consider inference in models defined by approximate moment conditions. We
show that near-optimal confidence intervals (CIs) can be formed by taking a
generalized method of moments (GMM) estimator, and adding and subtracting the
standard error times a critical value that takes into account the potential
bias from misspecification of the moment conditions. In order to optimize
performance under potential misspecification, the weighting matrix for this GMM
estimator takes into account this potential bias, and therefore differs from
the one that is optimal under correct specification. To formally show the
near-optimality of these CIs, we develop asymptotic efficiency bounds for
inference in the locally misspecified GMM setting. These bounds may be of
independent interest, due to their implications for the possibility of using
moment selection procedures when conducting inference in moment condition
models. We apply our methods in an empirical application to automobile demand,
and show that adjusting the weighting matrix can shrink the CIs by a factor of
3 or more.
Comment: 69 pages, plus a 12-page supplemental appendix.
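The construction described above can be sketched numerically. The Python snippet below forms a bias-aware CI of the kind the abstract describes: the point estimate plus and minus the standard error times a critical value that accounts for the worst-case bias from misspecification. The numbers, the sensitivity vector, and the l-infinity bound on each moment's misspecification are hypothetical; this is a sketch consistent with the abstract, not the paper's implementation.

```python
# Sketch of a bias-aware CI under locally misspecified moments (hypothetical inputs).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def cv(t, alpha=0.05):
    """1 - alpha quantile of |N(t, 1)|: the bias-aware critical value."""
    return brentq(lambda c: norm.cdf(c - t) - norm.cdf(-c - t) - (1 - alpha), 0, t + 10)

theta_hat = 1.20           # GMM point estimate (hypothetical)
se = 0.15                  # its standard error (hypothetical)
k = np.array([0.8, -0.3])  # sensitivity of theta_hat to each moment (hypothetical)
M = 0.05                   # assumed bound on each moment's misspecification

B = M * np.abs(k).sum()    # worst-case bias under the l_infinity bound |c_j| <= M
half = se * cv(B / se)
print(f"bias-aware 95% CI: [{theta_hat - half:.3f}, {theta_hat + half:.3f}]")
```

When B = 0 the critical value reduces to the usual 1.96, so the interval collapses to the conventional CI under correct specification.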
Simple and Honest Confidence Intervals in Nonparametric Regression
We consider the problem of constructing honest confidence intervals (CIs) for
a scalar parameter of interest, such as the regression discontinuity parameter,
in nonparametric regression based on kernel or local polynomial estimators. To
ensure that our CIs are honest, we use critical values that take into account
the possible bias of the estimator upon which the CIs are based. We show that
this approach leads to CIs that are more efficient than conventional CIs that
achieve coverage by undersmoothing or subtracting an estimate of the bias. We
give sharp bounds on the efficiency of different kernels, and derive the optimal
bandwidth for constructing honest CIs. We show that using the bandwidth that
minimizes the maximum mean-squared error results in CIs that are nearly
efficient and that in this case, the critical value depends only on the rate of
convergence. For the common case in which the rate of convergence is n^{-2/5},
the appropriate critical value for 95% CIs is 2.18, rather than the
usual 1.96 critical value. We illustrate our results in a Monte Carlo analysis
and an empirical application.
Comment: 46 pages, plus a 54-page supplemental appendix.
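One way to reproduce the 2.18 figure, assuming (consistently with the abstract, though not stated there verbatim) that the honest critical value is the 1 - alpha quantile of |N(t, 1)| with t the worst-case bias-to-standard-error ratio, and that this ratio equals 1/2 at the MSE-optimal bandwidth when the rate is n^{-2/5}:

```python
# Sketch: bias-aware critical values as quantiles of |N(t, 1)|.
from scipy.optimize import brentq
from scipy.stats import norm

def cv(t, alpha=0.05):
    """Smallest c with P(|Z + t| <= c) = 1 - alpha, Z standard normal."""
    return brentq(lambda c: norm.cdf(c - t) - norm.cdf(-c - t) - (1 - alpha), 0, t + 10)

# With worst-case bias proportional to h^2 and variance to 1/(n h), the MSE-optimal
# bandwidth equates bias^2 to variance/4, so the bias-to-sd ratio is 1/2.
print(round(cv(0.0), 2))  # 1.96: usual critical value when bias is ignored
print(round(cv(0.5), 2))  # 2.18: bias-aware critical value at the MSE-optimal bandwidth
```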
Optimal inference in a class of regression models
We consider the problem of constructing confidence intervals (CIs) for a
linear functional of a regression function, such as its value at a point, the
regression discontinuity parameter, or a regression coefficient in a linear or
partly linear regression. Our main assumption is that the regression function
is known to lie in a convex function class, which covers most smoothness and/or
shape assumptions used in econometrics. We derive finite-sample optimal CIs and
sharp efficiency bounds under normal errors with known variance. We show that
these results translate to uniform (over the function class) asymptotic results
when the error distribution is not known. When the function class is
centrosymmetric, these efficiency bounds imply that minimax CIs are close to
efficient at smooth regression functions. This implies, in particular, that it
is impossible to form tighter CIs by using data-dependent tuning parameters
while maintaining coverage over the whole function class. We specialize
our results to inference on the regression discontinuity parameter, and
illustrate them in simulations and an empirical application.
Comment: 39 pages, plus supplementary material.
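A toy illustration of the honest-CI idea for the value of the regression function at a point, under a Lipschitz class (convex and centrosymmetric) with normal errors of known variance. The estimator here is a simple uniform-kernel local average, not the paper's optimal affine estimator, and the data, the Lipschitz constant C, and the window grid are hypothetical.

```python
# Sketch: fixed-length honest CI for f(0) over a Lipschitz class, known error variance.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def cv(t, alpha=0.05):
    """1 - alpha quantile of |N(t, 1)|: the bias-aware critical value."""
    return brentq(lambda c: norm.cdf(c - t) - norm.cdf(-c - t) - (1 - alpha), 0, t + 10)

rng = np.random.default_rng(0)
n, sigma, C, x0 = 500, 1.0, 2.0, 0.0          # C bounds |f(x) - f(x0)| / |x - x0|
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + sigma * rng.standard_normal(n)

best = None
for h in np.linspace(0.02, 0.5, 50):          # window chosen from the design points only,
    inside = np.abs(x - x0) <= h              # so the coverage guarantee is preserved
    m = inside.sum()
    if m == 0:
        continue
    est = y[inside].mean()
    sd = sigma / np.sqrt(m)                   # exact, since sigma is known
    bias = C * np.abs(x[inside] - x0).mean()  # worst-case bias over the Lipschitz class
    half = sd * cv(bias / sd)
    if best is None or half < best[1]:
        best = (est, half, h)

est, half, h = best
print(f"h = {h:.2f}, honest 95% CI for f(0): [{est - half:.3f}, {est + half:.3f}]")
```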
Unbiased Instrumental Variables Estimation Under Known First-Stage Sign
We derive mean-unbiased estimators for the structural parameter in
instrumental variables models with a single endogenous regressor where the sign
of one or more first stage coefficients is known. In the case with a single
instrument, there is a unique non-randomized unbiased estimator based on the
reduced-form and first-stage regression estimates. For cases with multiple
instruments we propose a class of unbiased estimators and show that an
estimator within this class is efficient when the instruments are strong. We
show numerically that unbiasedness does not come at a cost of increased
dispersion in models with a single instrument: in this case the unbiased
estimator is less dispersed than the 2SLS estimator. Our finite-sample results
apply to normal models with known variance for the reduced-form errors, and
imply analogous results under weak instrument asymptotics with an unknown error
distribution.
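A minimal Monte Carlo sketch of the single-instrument case under the known-variance normal model for the reduced-form and first-stage estimates. The specific formula for tau_hat below (an unbiased estimator of 1/pi when pi > 0) and the resulting estimator of beta are a reconstruction consistent with the abstract, not a verbatim transcription of the paper, and all parameter values are hypothetical.

```python
# Monte Carlo check of unbiasedness in the single-instrument, known-variance case.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
beta, pi = 0.5, 1.0                       # structural parameter; pi > 0 is the known sign
s1, s2, s12 = 1.0, 1.0, 0.4               # known reduced-form / first-stage (co)variances
cov = np.array([[s1**2, s12], [s12, s2**2]])

draws = rng.multivariate_normal([pi * beta, pi], cov, size=200_000)
xi1, xi2 = draws[:, 0], draws[:, 1]       # reduced-form and first-stage estimates

# tau_hat is unbiased for 1/pi when pi > 0 (sketch of the construction referenced above).
tau = (1 - norm.cdf(xi2 / s2)) / (s2 * norm.pdf(xi2 / s2))
beta_unbiased = tau * xi1 - (s12 / s2**2) * (tau * xi2 - 1)
beta_2sls = xi1 / xi2                     # the usual IV ratio, for comparison

iqr = lambda v: np.percentile(v, 75) - np.percentile(v, 25)
print("mean of unbiased estimator:", beta_unbiased.mean())       # close to beta = 0.5
print("IQR, unbiased vs. 2SLS:", iqr(beta_unbiased), iqr(beta_2sls))  # compare dispersion
```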