Managers of workforce training programs often cannot afford costly, full-fledged experimental or nonexperimental evaluations to determine their programs’ impacts. Many therefore rely on survey responses from program participants to gauge those impacts.
Although such participant evaluation measures are already in widespread use, Smith, Whalley, and Wilcox present the first attempt to assess them. They develop a multidisciplinary framework for addressing the issue and apply it to three case studies: the National Job Training Partnership Act Study, the U.S. National Supported Work Demonstration, and the Connecticut Jobs First Program.
Each of these programs underwent an experimental evaluation that included a survey-based participant evaluation measure. The authors apply econometric methods developed specifically to estimate program impacts for individuals in the studies, then compare those estimates with the survey-based participant evaluation measures to assess how well the surveys capture program impacts.
The authors also discuss how their findings fit into the broader literatures in economics, psychology, and survey research.