
    Consistent HAC Estimation and Robust Regression Testing Using Sharp Origin Kernels with No Truncation

    A new family of kernels is suggested for use in heteroskedasticity and autocorrelation consistent (HAC) and long run variance (LRV) estimation and robust regression testing. The kernels are constructed by taking powers of the Bartlett kernel and are intended to be used with no truncation (or bandwidth) parameter. As the power parameter (rho) increases, the kernels become very sharp at the origin and increasingly downweight values away from the origin, thereby achieving effects similar to a bandwidth parameter. Sharp origin kernels can be used in regression testing in much the same way as conventional kernels with no truncation, as suggested in the work of Kiefer and Vogelsang (2002a, 2002b). A unified representation of HAC limit theory for untruncated kernels is provided using a new proof based on Mercer's theorem that allows for kernels which may or may not be differentiable at the origin. This new representation helps to explain earlier findings, such as the dominance of the Bartlett kernel over quadratic kernels in test power, and yields new findings about the asymptotic properties of tests with sharp origin kernels. Analysis and simulations indicate that sharp origin kernels lead to tests with improved size properties relative to conventional tests and better power properties than other tests using Bartlett and other conventional kernels without truncation. If rho is passed to infinity with the sample size (T), the new kernels provide consistent HAC and LRV estimates as well as continued robust regression testing. Optimal choice of rho based on minimizing the asymptotic mean squared error of estimation is considered, leading to a rate of convergence of the kernel estimate of T^{1/3}, analogous to that of a conventional truncated Bartlett kernel estimate with an optimal choice of bandwidth. A data-based version of the consistent sharp origin kernel is obtained which is easily implementable in practical work.
Within this new framework, untruncated kernel estimation can be regarded as a form of conventional kernel estimation in which the usual bandwidth parameter is replaced by a power parameter that serves to control the degree of downweighting. Simulations show that in regression testing with the sharp origin kernel, the power properties are better than those with simple untruncated kernels (where rho = 1) and at least as good as those with truncated kernels. Size is generally more accurate with sharp origin kernels than with truncated kernels. In practice, a simple fixed choice of the exponent parameter around rho = 16 for the sharp origin kernel produces favorable results for both size and power in regression testing with sample sizes that are typical in econometric applications.
Keywords: Consistent HAC estimation, Data determined kernel estimation, Long run variance, Mercer's theorem, Power parameter, Sharp origin kernel
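The estimator described in this abstract is simple to state: the sharp origin kernel is the rho-th power of the Bartlett kernel, k(x) = (1 - |x|)^rho, applied to all T - 1 sample autocovariances with no truncation. A minimal illustrative sketch (not the authors' code), using the fixed choice rho = 16 mentioned above:

```python
import numpy as np

def sharp_origin_lrv(u, rho=16):
    """Long-run variance estimate with the untruncated sharp origin
    kernel k(x) = (1 - |x|)^rho, a power of the Bartlett kernel.
    Illustrative sketch only; rho = 16 is the fixed choice the
    abstract reports as working well in typical sample sizes."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    u = u - u.mean()
    # sample autocovariances gamma_j = (1/T) * sum_t u_t u_{t+j}
    gammas = np.array([u[: T - j] @ u[j:] / T for j in range(T)])
    # kernel weights at lags j/T, evaluated over ALL lags (no truncation)
    w = (1.0 - np.arange(T) / T) ** rho
    # omega_hat = gamma_0 + 2 * sum_{j>=1} k(j/T) * gamma_j
    return w[0] * gammas[0] + 2.0 * (w[1:] @ gammas[1:])
```

For integer rho the kernel is a Schur product of Bartlett kernels and hence positive semidefinite, so the estimate is nonnegative by construction.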

    Standard errors estimation in the presence of high leverage point and heteroscedastic errors in multiple linear regression

    In this study, the Robust Heteroscedastic Consistent Covariance Matrix (RHCCM) was proposed in order to estimate standard errors of regression coefficients in the presence of high leverage points and heteroscedastic errors in multiple linear regression. RHCCM combines a robust method with the Heteroscedasticity Consistent Covariance Matrix (HCCM): the robust method is used to eliminate the effect of high leverage points, while the HCCM is used to eliminate the effect of heteroscedastic errors. The performance of RHCCM was assessed through an empirical study and compared with results obtained when the original Heteroscedastic Consistent Covariance Matrix was used.
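The HCCM component referenced above is the White sandwich estimator (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}. The abstract does not specify the robust leverage-downweighting step, so the sketch below covers only the heteroscedasticity-consistent part; the function name and the HC0/HC1 variant choice are illustrative assumptions:

```python
import numpy as np

def hccm_standard_errors(X, y, variant="HC0"):
    """White-type heteroscedasticity-consistent standard errors.
    Sketch of the HCCM component only; the robust high-leverage
    adjustment of RHCCM is not described in the abstract."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y          # OLS coefficients
    e = y - X @ beta                  # residuals
    omega = e ** 2
    if variant == "HC1":              # small-sample df correction
        omega = omega * n / (n - k)
    meat = (X * omega[:, None]).T @ X # X' diag(omega) X
    cov = XtX_inv @ meat @ XtX_inv    # sandwich covariance
    return beta, np.sqrt(np.diag(cov))
```

HC1 simply rescales HC0 by n/(n - k), so its standard errors are uniformly larger in finite samples.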

    Cross-validation in nonparametric regression with outliers

    A popular data-driven method for choosing the bandwidth in standard kernel regression is cross-validation. Even when there are outliers in the data, robust kernel regression can be used to estimate the unknown regression curve [Robust and Nonlinear Time Series Analysis. Lecture Notes in Statist. (1984) 26 163--184]. However, under these circumstances standard cross-validation is no longer a satisfactory bandwidth selector because it is unduly influenced by the extreme prediction errors caused by these outliers. The more robust method proposed here is a cross-validation criterion that discounts extreme prediction errors. In large samples the robust method chooses consistent bandwidths, and its consistency is practically independent of the form in which extreme prediction errors are discounted. Additionally, an evaluation of the method's finite-sample behavior in a simulation demonstrates that the proposed method performs favorably. This method can also be applied to other problems, for example model selection, that require cross-validation.
Comment: Published at http://dx.doi.org/10.1214/009053605000000499 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
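The idea of discounting extreme prediction errors in cross-validation can be sketched with a leave-one-out Nadaraya-Watson estimator scored under a bounded loss. A Huber-type loss is used here purely as one possible discounting function (the abstract notes the method is largely insensitive to this choice); the estimator, kernel, and constants below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def robust_cv_bandwidth(x, y, bandwidths, c=1.345):
    """Pick a kernel-regression bandwidth by leave-one-out CV with a
    bounded (Huber-type) loss, so outliers' extreme prediction errors
    are discounted rather than squared. Illustrative sketch."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # robust residual scale via the median absolute deviation
    scale = np.median(np.abs(y - np.median(y))) / 0.6745 + 1e-12
    best_h, best_score = None, np.inf
    for h in bandwidths:
        # Gaussian kernel weights between all pairs of design points
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(K, 0.0)          # leave-one-out
        pred = K @ y / K.sum(axis=1)      # Nadaraya-Watson fit at x_i
        r = (y - pred) / scale
        # bounded Huber loss: quadratic near 0, linear in the tails
        loss = np.where(np.abs(r) <= c,
                        0.5 * r ** 2,
                        c * np.abs(r) - 0.5 * c ** 2)
        score = loss.mean()
        if score < best_score:
            best_h, best_score = h, score
    return best_h
```

Because the loss grows only linearly in the tails, a handful of gross outliers cannot dominate the criterion the way they do under squared-error cross-validation.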

    Regression Discontinuity Designs Using Covariates

    We study regression discontinuity designs when covariates are included in the estimation. We examine local polynomial estimators that include discrete or continuous covariates in an additive separable way, but without imposing any parametric restrictions on the underlying population regression functions. We recommend a covariate-adjustment approach that retains consistency under intuitive conditions, and characterize the potential for estimation and inference improvements. We also present new covariate-adjusted mean squared error expansions and robust bias-corrected inference procedures, with heteroskedasticity-consistent and cluster-robust standard errors. An empirical illustration and an extensive simulation study are presented. All methods are implemented in \texttt{R} and \texttt{Stata} software packages.

    Cointegrating Regressions with Messy Regressors: Missingness, Mixed Frequency, and Measurement Error

    We consider a cointegrating regression in which the integrated regressors are messy in the sense that they contain data that may be mismeasured, missing, observed at mixed frequencies, or have other irregularities that cause the econometrician to observe them with mildly nonstationary noise. Least squares estimation of the cointegrating vector is consistent. Existing prototypical variance-based estimation techniques, such as canonical cointegrating regression (CCR), are both consistent and asymptotically mixed normal. This result is robust to weakly dependent but possibly nonstationary disturbances.
Keywords: cointegration, canonical cointegrating regression, near-epoch dependence, messy data, missing data, mixed-frequency data, measurement error, interpolation