19,698 research outputs found
The Elephant in the Corner: A Cautionary Tale about Measurement Error in Treatment Effects Models
Researchers in economics and other disciplines are often interested in the causal effect of a binary treatment on outcomes. Econometric methods used to estimate such effects fall into one of two strands depending on whether they require the conditional independence assumption (i.e., independence of potential outcomes and treatment assignment conditional on a set of observable covariates). When this assumption holds, researchers now have a wide array of estimation techniques from which to choose. However, very little is known about their performance, both in absolute and relative terms, when measurement error is present. In this study, the performance of several estimators that require the conditional independence assumption, as well as some that do not, is evaluated in a Monte Carlo study. In all cases, the data-generating process is such that conditional independence holds with the 'real' data. However, measurement error is then introduced. Specifically, three types of measurement error are considered: (i) errors in treatment assignment, (ii) errors in the outcome, and (iii) errors in the vector of covariates. Recommendations for researchers are provided.
Keywords: treatment effects, propensity score, unconfoundedness, selection on observables, measurement error
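The misclassification channel in (i) can be illustrated with a small simulation. The sketch below is not the paper's Monte Carlo design; it is a minimal toy example in which a randomized binary treatment with true effect tau = 2 is recorded with nondifferential error (each indicator is flipped with probability 0.2, a value chosen purely for illustration), attenuating a difference-in-means estimate toward (1 - 2p) * tau:

```python
import random

random.seed(0)

# Illustrative parameters (not from the paper): sample size, true effect,
# and the probability that the treatment indicator is recorded incorrectly.
n, tau, flip_prob = 10_000, 2.0, 0.2

d_true = [random.random() < 0.5 for _ in range(n)]        # randomized assignment
y = [tau * d + random.gauss(0.0, 1.0) for d in d_true]    # outcomes
d_obs = [(not d) if random.random() < flip_prob else d    # mismeasured treatment
         for d in d_true]

def diff_in_means(y, d):
    """Mean outcome of the (apparently) treated minus the (apparently) untreated."""
    treated = [yi for yi, di in zip(y, d) if di]
    control = [yi for yi, di in zip(y, d) if not di]
    return sum(treated) / len(treated) - sum(control) / len(control)

est_clean = diff_in_means(y, d_true)  # close to tau
est_noisy = diff_in_means(y, d_obs)   # attenuated toward (1 - 2 * flip_prob) * tau
```

With symmetric flipping, the observed "treated" group is a mixture of truly treated and truly untreated units, which is why the naive estimate shrinks toward zero even though conditional independence holds in the error-free data.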
Treatment effect estimation with covariate measurement error
This paper investigates the effect that covariate measurement error has on a conventional treatment effect analysis built on an unconfoundedness restriction that embodies conditional independence restrictions in which there is conditioning on error-free covariates. The approach uses small-parameter asymptotic methods to obtain the approximate generic effects of measurement error. The approximations can be estimated using data on observed outcomes, the treatment indicator and error-contaminated covariates, providing an indication of the nature and size of measurement error effects. The approximations can be used in a sensitivity analysis to probe the potential effects of measurement error on the evaluation of treatment effects.
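A toy simulation shows why conditioning on error-contaminated rather than error-free covariates matters. This is not the paper's small-parameter asymptotic machinery, just an illustrative regression-adjustment example (all parameter values invented) in which treatment assignment depends on a confounder x: adjusting for the true x recovers the effect, while adjusting for a noisy measurement of x leaves residual confounding in the treatment coefficient:

```python
import random

random.seed(0)

# Illustrative design: true effect tau = 1, confounder effect beta = 2,
# treatment more likely when x is high, covariate observed with noise.
n, tau, beta = 20_000, 1.0, 2.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]
d = [1.0 if xi + random.gauss(0.0, 1.0) > 0 else 0.0 for xi in x]   # confounded
y = [tau * di + beta * xi + random.gauss(0.0, 0.5) for xi, di in zip(x, d)]
x_obs = [xi + random.gauss(0.0, 1.0) for xi in x]                   # mismeasured

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    out = [0.0] * k
    for i in range(k - 1, -1, -1):
        out[i] = (b[i] - sum(A[i][j] * out[j] for j in range(i + 1, k))) / A[i][i]
    return out

# Treatment coefficient: near tau with the true covariate, biased with the noisy one.
tau_clean = ols([[1.0, di, xi] for di, xi in zip(d, x)], y)[1]
tau_noisy = ols([[1.0, di, xi] for di, xi in zip(d, x_obs)], y)[1]
```

The noisy covariate absorbs only part of the confounding, so the treatment coefficient picks up the remainder of beta's influence and is biased away from tau.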
Predicting with sparse data
It is well known that effective prediction of project cost related factors is an important aspect of software engineering. Unfortunately, despite extensive research over more than 30 years, this remains a significant problem for many practitioners. A major obstacle is the absence of reliable and systematic historic data, yet this is a sine qua non for almost all proposed methods: statistical, machine learning or calibration of existing models. In this paper we describe our sparse data method (SDM) based upon a pairwise comparison technique and Saaty's Analytic Hierarchy Process (AHP). Our minimum data requirement is a single known point. The technique is supported by a software tool known as DataSalvage. We show, for data from two companies, how our approach, based upon expert judgement, adds value to expert judgement by producing significantly more accurate and less biased results. A sensitivity analysis shows that our approach is robust to pairwise comparison errors. We then describe the results of a small usability trial with a practising project manager. From this empirical work we conclude that the technique is promising and may help overcome some of the present barriers to effective project prediction.
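For readers unfamiliar with AHP, the core pairwise-comparison step can be sketched as follows. This is the standard geometric-mean approximation to Saaty's principal-eigenvector method, not the SDM itself, and the comparison matrix below is invented for illustration:

```python
import math

# Pairwise comparison matrix: a[i][j] states how much more important
# criterion i is judged to be than criterion j (a[j][i] = 1 / a[i][j]).
# This example matrix is perfectly consistent with weights (0.5, 0.3, 0.2).
a = [
    [1.0, 5 / 3, 5 / 2],
    [3 / 5, 1.0, 3 / 2],
    [2 / 5, 2 / 3, 1.0],
]

# Geometric-mean method: take the geometric mean of each row, then
# normalize so the priority weights sum to one.
geo = [math.prod(row) ** (1 / len(row)) for row in a]
total = sum(geo)
weights = [g / total for g in geo]
```

On a consistent matrix this recovers the underlying weights exactly; with inconsistent expert judgements it yields a close approximation to the eigenvector solution, which is what makes pairwise comparisons workable when only sparse hard data are available.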
Combining long memory and level shifts in modeling and forecasting the volatility of asset returns
We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean- and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in high-frequency measures of volatility whereas there is little remaining dynamics in the volatility measures constructed using daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
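The state-space idea behind this framework can be sketched with a much simpler model than the paper's. Below is a scalar Kalman filter for a local-level model with measurement error (mu_t = mu_{t-1} + eta_t, y_t = mu_t + eps_t); the paper's specification adds ARFIMA dynamics and random level shifts on top of this kind of structure, and the noise variances here are arbitrary illustrative values:

```python
import random

random.seed(1)

# Simulate a latent random-walk level observed with measurement error.
q, r = 0.1, 1.0               # state and measurement noise variances (assumed)
mu, true_path, ys = 0.0, [], []
for _ in range(500):
    mu += random.gauss(0.0, q ** 0.5)
    true_path.append(mu)
    ys.append(mu + random.gauss(0.0, r ** 0.5))

# Kalman filter: predict the state forward, then update with the gain.
m, p = 0.0, 1.0               # prior mean and variance of the state
filtered = []
for y in ys:
    p_pred = p + q            # predict: variance grows by the state noise
    k = p_pred / (p_pred + r) # Kalman gain
    m = m + k * (y - m)       # update: shrink toward the observation
    p = (1 - k) * p_pred      # posterior variance
    filtered.append(m)

# The filtered path tracks the latent level better than the raw observations.
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, true_path)) / len(ys)
mse_filt = sum((f - t) ** 2 for f, t in zip(filtered, true_path)) / len(ys)
```

The same predict/update recursion, run over an augmented state vector, is what yields the likelihood and the forecasts in the full model.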
- …