Implementing the LifeSkills Training drug prevention program: factors related to implementation fidelity
<p>Abstract</p> <p>Background</p> <p>Widespread replication of effective prevention programs is unlikely to affect the incidence of adolescent delinquency, violent crime, and substance use until the quality of implementation of these programs by community-based organizations can be assured.</p> <p>Methods</p> <p>This paper presents the results of a process evaluation employing qualitative and quantitative methods to assess the extent to which 432 schools in 105 sites implemented the LifeSkills Training (LST) drug prevention program with fidelity. Regression analysis was used to examine factors influencing four dimensions of fidelity: adherence, dosage, quality of delivery, and student responsiveness.</p> <p>Results</p> <p>Although most sites faced common barriers, such as finding room in the school schedule for the program, gaining full support from key participants (i.e., site coordinators, principals, and LST teachers), ensuring teacher participation in training workshops, and overcoming classroom management difficulties, most schools involved in the project implemented LST with very high levels of fidelity. Across sites, 86% of program objectives and activities required in the three-year curriculum were delivered to students. Moreover, teachers were observed using all four recommended teaching practices, and 71% of instructors taught all the required LST lessons. Multivariate analyses found that highly rated LST program characteristics and better student behavior were significantly related to a greater proportion of material taught by teachers (adherence). Instructors who rated the LST program characteristics as ideal were more likely to teach all lessons (dosage). Better student behavior was positively related to greater use of interactive teaching techniques (quality of delivery). No variables were related to student participation (student responsiveness).</p> <p>Conclusion</p> <p>Although difficult, high implementation fidelity by community-based organizations can be achieved.
This study suggests some important factors that organizations should consider to ensure fidelity, such as selecting programs with features that minimize complexity while maximizing flexibility. Time constraints in the classroom should be considered when choosing a program. Student behavior also influences program delivery, so schools should train teachers in the use of classroom management skills. This project involved comprehensive program monitoring and technical assistance that likely facilitated the identification and resolution of problems and contributed to the overall high quality of implementation. Schools should recognize the importance of training and technical assistance to ensure quality program delivery.</p>
Quality of intervention delivery in a cluster randomised controlled trial : a qualitative observational study with lessons for implementation fidelity
Abstract Background Understanding intervention fidelity is an essential part of the evaluation of complex interventions, not only because fidelity affects the validity of trial findings, but also because studies of fidelity can be used to identify barriers and facilitators to successful implementation, and so provide important information about factors likely to impact the uptake of the intervention into clinical practice. Participant observation methods have been identified as being particularly valuable in studies of fidelity, yet are rarely used. This study aimed to use these methods to explore the quality of implementation of a complex intervention (Safewards) on mental health wards during a cluster randomised controlled trial. Specific aims were firstly to describe the different ways in which the intervention was implemented, and secondly to explore the contextual factors moderating the quality of intervention delivery, in order to inform ‘real world’ implementation of the intervention. Methods Safewards was implemented on 16 mental health wards in England. We used Research Assistants (RAs) trained in participant observation to record qualitative observational data on the quality of intervention delivery (n = 565 observations). At the end of the trial, two focus groups were conducted with RAs, which were used to develop the coding framework. Data were analysed using thematic analysis. Results There was substantial variation in intervention delivery between wards. We observed modifications to the intervention which were both fidelity consistent and inconsistent, and which could enhance or dilute the intervention effects. We used these data to develop a typology which describes the different ways in which the intervention was delivered. This typology could be used as a tool to collect qualitative observational data about fidelity during trials.
Moderators of Safewards implementation included systemic, interpersonal, and individual factors and patient responses to the intervention. Conclusions Our study demonstrates how, with appropriate training in participant observation, RAs can collect high-quality observational data about the quality of intervention delivery during a trial, giving a more complete picture of ‘fidelity’ than measurements of adherence alone. Trial registration ISRCTN registry: ISRCTN38001825. Registered 29 August 201
Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda
An unresolved issue in the field of implementation research is how to conceptualize and evaluate successful implementation. This paper advances the concept of “implementation outcomes” as distinct from service system and clinical treatment outcomes. It proposes a heuristic, working “taxonomy” of eight conceptually distinct implementation outcomes—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability—along with their nominal definitions. We propose a two-pronged agenda for research on implementation outcomes. Conceptualizing and measuring implementation outcomes will advance understanding of implementation processes, enhance efficiency in implementation research, and pave the way for studies of the comparative effectiveness of implementation strategies.
Methods and metrics challenges of delivery-system research
<p>Abstract</p> <p>Background</p> <p>Many delivery-system interventions are fundamentally about change in social systems (both planned and unplanned). This systems perspective raises a number of methodological challenges for studying the effects of delivery-system change, particularly for answering questions related to whether the change will work under different conditions and how the change is integrated (or not) into the operating context of the delivery system.</p> <p>Methods</p> <p>The purpose of this paper is to describe the methodological and measurement challenges posed by five key issues in delivery-system research: (1) modeling intervention context; (2) measuring readiness for change; (3) assessing intervention fidelity and sustainability; (4) assessing complex, multicomponent interventions; and (5) incorporating time in delivery-system models; and to discuss recommendations for addressing these issues. For each issue, we provide recommendations for how research may be designed and implemented to overcome these challenges.</p> <p>Results and conclusions</p> <p>We suggest that a more refined understanding of the mechanisms underlying delivery-system interventions (treatment theory) and the ways in which outcomes for different classes of individuals change over time are fundamental starting points for capturing the heterogeneity in samples of individuals exposed to delivery-system interventions.
To support the research recommendations outlined in this paper and to advance understanding of the "why" and "how" questions of delivery-system change and their effects, funding agencies should consider supporting studies with larger organizational sample sizes; longer duration; and nontraditional, mixed-methods designs.</p> <p>A version of this paper was prepared under contract with the Agency for Healthcare Research and Quality (AHRQ), US Department of Health and Human Services for presentation and discussion at a meeting on "The Challenge and Promise of Delivery System Research," held in Sterling, VA, on February 16-17, 2011. The opinions in the paper are those of the author and do not represent the views or recommendations of AHRQ or the US Department of Health and Human Services.<sup>1</sup></p>