Practical recommendations for the evaluation of improvement initiatives

Abstract

A lack of clear guidance for funders, evaluators and improvers on what to include in evaluation proposals can lead to evaluation designs that do not answer the questions stakeholders want answered. These evaluation designs may not match the iterative nature of improvement and may be imposed onto an initiative in a way that is impractical from the perspective of improvers and the communities with whom they work. Consequently, the results of evaluations are often controversial, and attribution remains poorly understood. Improvement initiatives are iterative, adaptive and context-specific. Evaluation approaches and designs must align with these features, specifically in their ability to consider complexity, to evolve as the initiative adapts over time and to understand the interaction with local context. Improvement initiatives often identify broadly defined change concepts and provide tools for care teams to tailor these in more detail to local conditions. Correspondingly, recommendations for evaluation are best provided as broad guidance, to be tailored to the specifics of the initiative. In this paper, we provide practical guidance and recommendations that funders and evaluators can use when developing an evaluation plan for improvement initiatives, one that seeks to: identify the questions stakeholders want to address; develop the initial program theory of the initiative; identify high-priority areas in which to measure progress over time; describe the context within which the initiative will be applied; and identify experimental or observational designs that will address attribution.

