Optimal inference in a class of regression models
We consider the problem of constructing confidence intervals (CIs) for a
linear functional of a regression function, such as its value at a point, the
regression discontinuity parameter, or a regression coefficient in a linear or
partly linear regression. Our main assumption is that the regression function
is known to lie in a convex function class, which covers most smoothness and/or
shape assumptions used in econometrics. We derive finite-sample optimal CIs and
sharp efficiency bounds under normal errors with known variance. We show that
these results translate to uniform (over the function class) asymptotic results
when the error distribution is not known. When the function class is
centrosymmetric, these efficiency bounds imply that minimax CIs are close to
efficient at smooth regression functions. In particular, this implies that it
is impossible to form tighter CIs by using data-dependent tuning parameters
while maintaining coverage over the whole function class. We specialize
our results to inference on the regression discontinuity parameter, and
illustrate them in simulations and an empirical application.
Comment: 39 pages plus supplementary material
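The central idea, that a CI valid over the whole convex class must account for worst-case bias, can be illustrated with a deliberately crude sketch. This is not the paper's optimal procedure: the helper `honest_ci`, the choice of a Lipschitz class with constant `C`, and all tuning values below are assumptions made purely for illustration.

```python
import numpy as np

def honest_ci(x, y, x0=0.0, h=0.5, C=1.0, sigma=1.0, z=1.96):
    """Crude fixed-form CI for f(x0) valid over a smoothness class.

    Hypothetical illustration: estimate f(x0) by a local average over a
    window of half-width h, then widen the usual normal CI by a
    worst-case bias bound.  Here the class is assumed Lipschitz with
    constant C, so the bias of the local average is at most C*h.
    """
    mask = np.abs(x - x0) <= h
    n = mask.sum()
    est = y[mask].mean()
    bias_bound = C * h                       # worst case over the class
    half_len = bias_bound + z * sigma / np.sqrt(n)
    return est - half_len, est + half_len

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = np.cos(x) + rng.normal(0, 1, x.size)     # true f = cos, Lipschitz(1)
lo, hi = honest_ci(x, y, x0=0.0)
print(f"CI for f(0): [{lo:.2f}, {hi:.2f}]")
```

Because the bias term does not shrink with the data, the interval cannot be tightened adaptively without sacrificing coverage somewhere in the class, which is the phenomenon the abstract's impossibility result makes precise.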
Classification via local multi-resolution projections
We focus on the supervised binary classification problem, which consists in
guessing the label Y associated to a covariate X, given a set of
independent and identically distributed covariates and associated labels
(X_i, Y_i). We assume that the law of the random vector (X, Y) is unknown and
that the marginal law of X admits a density supported on a set A. In the
particular case of plug-in classifiers, solving the classification problem
boils down to the estimation of the regression function η(X) = E[Y|X].
Assuming first A to be known, we show how to construct an
estimator of η by localized projections onto a multi-resolution analysis
(MRA). In a second step, we show how this estimation procedure generalizes to
the case where A is unknown. Interestingly, this novel estimation procedure
achieves theoretical performance similar to that of the celebrated
local-polynomial estimator (LPE). In addition, it benefits from the lattice
structure of the underlying MRA and thus outperforms the LPE from a
computational standpoint, which turns out to be a crucial feature in many
practical applications.
Finally, we prove that the associated plug-in classifier can reach super-fast
rates under a margin assumption.
Comment: 38 pages, 6 figures
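The plug-in construction can be sketched with the simplest MRA, the Haar basis, whose level-j projection is just a histogram average over dyadic cells. This is only an illustration of the lattice/histogram idea, not the paper's localized-projection estimator; the function `haar_plugin_classifier`, the resolution level `j`, and the simulated data are all assumptions.

```python
import numpy as np

def haar_plugin_classifier(x, y, j=3):
    """Plug-in classifier from a Haar (piecewise-constant) projection.

    Hypothetical sketch: project the labels onto the Haar MRA at
    resolution level j, i.e. average them over each of the 2**j dyadic
    cells of [0, 1).  The plug-in rule predicts 1 wherever the estimated
    regression function eta(x) = E[Y | X = x] exceeds 1/2.
    """
    n_cells = 2 ** j
    cells = np.minimum((x * n_cells).astype(int), n_cells - 1)
    # Per-cell label averages: estimate of eta on each dyadic interval.
    sums = np.bincount(cells, weights=y, minlength=n_cells)
    counts = np.bincount(cells, minlength=n_cells)
    eta_hat = np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)

    def predict(x_new):
        c = np.minimum((np.asarray(x_new) * n_cells).astype(int),
                       n_cells - 1)
        return (eta_hat[c] >= 0.5).astype(int)
    return predict

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 2000)
eta = np.clip(x, 0.1, 0.9)                  # true regression function
y = (rng.uniform(0, 1, x.size) < eta).astype(int)
predict = haar_plugin_classifier(x, y, j=3)
print(predict([0.05, 0.95]))                # predictions near each edge
```

The computational advantage described in the abstract shows up here: fitting is two `bincount` passes over the data, and prediction is an array lookup on the dyadic lattice, whereas a local-polynomial fit solves a weighted least-squares problem around every query point.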