Was There a Riverside Miracle? A Framework for Evaluating Multi-Site Programs

Abstract

This paper uses data from the Greater Avenues for Independence (GAIN) initiative to discuss the evaluation of programs implemented at multiple sites. Two frequently used approaches are to pool the data or to use fixed effects (an extreme version of which estimates a separate model for each site). The former ignores site effects. The latter estimates site effects but provides no framework for predicting the impact of subsequent implementations of the program (e.g., will a new implementation resemble Riverside or Alameda?). I develop a model for earnings that lies between these two extremes. For the GAIN data, I show that most of the differences across sites are due to differences in the composition of participants. I also show that uncertainty in predicting site effects matters: when this predictive uncertainty is ignored, the treatment impact for the Riverside sites is significant, but when it is taken into account, the impact is insignificant. Finally, I demonstrate that the model can extrapolate site effects with reasonable accuracy when the site for which the prediction is made does not differ substantially from the sites already observed. For example, the San Diego treatment effects could have been predicted from observable site characteristics, but the Riverside effects are consistently underestimated.
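
To make the middle ground between pooling and fixed effects concrete, the sketch below fits a random-effects (partial-pooling) model of earnings on simulated multi-site data. This is only an illustration of the general idea: the specification, variable names (site, treatment, age, earnings), and simulated data are assumptions for the example, not the paper's model or the GAIN data.

# Minimal sketch of partial pooling across sites, assuming a simple
# random-intercept specification. Simulated data; not the GAIN sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
sites = ["Alameda", "Butte", "Los Angeles", "Riverside", "San Diego", "Tulare"]
n_per_site = 200

rows = []
for s in sites:
    site_effect = rng.normal(0.0, 0.5)           # latent site-level shift
    treatment = rng.integers(0, 2, n_per_site)   # random assignment within site
    age = rng.normal(33, 8, n_per_site)
    earnings = (
        6.0 + site_effect
        + 0.4 * treatment                        # common treatment impact
        + 0.02 * age
        + rng.normal(0, 1.0, n_per_site)
    )
    rows.append(pd.DataFrame({
        "site": s, "treatment": treatment, "age": age, "earnings": earnings
    }))
data = pd.concat(rows, ignore_index=True)

# Site intercepts are treated as draws from a common distribution, so the
# effect at a new, unobserved site can be predicted with explicit uncertainty
# rather than being fixed (fixed effects) or ignored (pooling).
model = smf.mixedlm("earnings ~ treatment + age", data, groups=data["site"])
result = model.fit()
print(result.summary())
print(result.random_effects)   # estimated site-level deviations

The pooled model corresponds to dropping the site grouping entirely, while site-by-site fixed effects correspond to estimating each site's intercept without any shrinkage toward the common distribution; the random-effects fit sits between these extremes.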