How do researchers conceptualize and plan for the sustainability of their NIH R01 implementation projects?
Abstract
Background: Inadequate sustainability of the implementation of evidence-based interventions has led to calls for research on how sustainability can be optimized. To advance our understanding of intervention sustainability, we explored how implementation researchers conceptualized and planned for the sustainability of their implemented interventions in studies funded by the United States (US) National Institutes of Health (NIH).
Methods: We used a sequential, mixed-methods approach to explore how researchers conceptualized and planned for the sustainability of health interventions, using (1) a document review of all active and completed US NIH R01 Grants and Equivalents reviewed within the Dissemination and Implementation Research in Health (DIRH) Study Section between 2004 and 2016 and (2) a qualitative content analysis of semi-structured interviews with NIH R01 DIRH grant recipients.
Results: We found 277 R01 profiles within the DIRH study section listed on the US NIH RePORTER website, of which 84 unique projects were eligible for screening. Of these 84 projects, 76 (90.5%) had primary implementation outcomes. Of the 76 implementation project profiles, 51 (67.1%) made references to sustainability, and none referred to sustainability planning. In both profiles and interviews, researchers conceptualized sustainability primarily as the continued delivery of interventions, programs, or implementation strategies. Few researchers referenced frameworks with sustainability constructs, and those who did offered limited information on how the frameworks were operationalized. Researchers described broad categories of approaches and strategies to promote sustainability, as well as key factors that may influence whether researchers plan for sustainability, such as personal beliefs, self-efficacy, perception of their role, and the challenges of the grant funding system.
Conclusions: We explored how US NIH R01 DIRH grant recipients conceptualized and planned for the sustainability of their interventions. Our results identified the need to test, consolidate, and provide guidance on how to operationalize sustainability frameworks, and to develop strategies for how funders and researchers can advance sustainability research.
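The counts reported in the Results (how many of the 76 implementation project profiles referenced sustainability) come from a document review of grant-profile text. As a minimal, hypothetical illustration only, the sketch below shows how such a keyword screen over downloaded profile abstracts might be scripted; the profile records, project numbers, and search terms are invented for illustration and are not the authors' coding scheme.

```python
import re

# Hypothetical keyword screen: flag grant profiles whose abstracts mention
# sustainability-related terms. Records and terms are illustrative only.
SUSTAINABILITY_TERMS = re.compile(r"\bsustain(?:ability|ed|ment|ing)?\b", re.IGNORECASE)

profiles = [
    {"project_number": "R01-EXAMPLE-001",
     "abstract": "We will evaluate sustained delivery of the intervention after the trial ends."},
    {"project_number": "R01-EXAMPLE-002",
     "abstract": "This trial tests an implementation strategy in 20 primary care clinics."},
]

def mentions_sustainability(profile: dict) -> bool:
    """Return True if the profile abstract references sustainability."""
    return bool(SUSTAINABILITY_TERMS.search(profile["abstract"]))

flagged = [p["project_number"] for p in profiles if mentions_sustainability(p)]
print(f"{len(flagged)}/{len(profiles)} profiles reference sustainability: {flagged}")
```

In a real document review, a screen like this would only be a first pass; the authors' classification relied on qualitative coding of the profiles rather than keyword matching alone.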
Evaluation of the “Foundations in Knowledge Translation” training initiative: preparing end users to practice KT
Abstract
Background: Current knowledge translation (KT) training initiatives are primarily focused on preparing researchers to conduct KT research rather than on teaching KT practice to end users. Furthermore, training initiatives that focus on KT practice have not been rigorously evaluated, and existing evaluations have assessed only short-term outcomes and participant satisfaction. Thus, there is a need for longitudinal training evaluations that assess the sustainability of training outcomes and the contextual factors that may influence those outcomes.
Methods: We evaluated the KT training initiative "Foundations in KT" using a mixed-methods longitudinal design. "Foundations in KT" provided training in KT practice and included three tailored in-person workshops, coaching, and an online platform for training materials and knowledge exchange. Two cohorts were included in the study (62 participants: 46 "Foundations in KT" participants from 16 project teams and 16 decision-maker partners). Participants completed self-report questionnaires, focus groups, and interviews at baseline and at 6, 12, 18, and 24 months after the first workshop.
Results: At the participant level, survey results indicated that participants' self-efficacy in evidence-based practice (F(1, 8.9) = 23.7, p = 0.001, n = 45), KT activities (F(1, 23.9) = 43.2, p < 0.001, n = 45), and use of evidence to inform practice (F(1, 11.0) = 6.0, p = 0.03, n = 45) increased over time. Interviews and focus groups illustrated that participants' understanding of and confidence in using KT increased from baseline to 24 months after the workshop, and suggested that the training initiative helped participants achieve their KT project objectives, plan their projects, and solve problems over time. Regarding contextual factors, teams with high self-reported organizational capacity and commitment to implement at the start of their project had buy-in from upper management, which resulted in secured funding and resources for their project. In terms of training initiative outcomes, participants applied the KT knowledge and skills they learned to other projects and shared their knowledge informally with coworkers. Sustained spread of KT practice was observed in five teams at 24 months.
Conclusions: We completed a longitudinal evaluation of a KT training initiative. Positive participant outcomes were sustained until 24 months after the initial workshop. Given the emphasis on implementing evidence and the need to train implementers, these findings are promising for future KT training.
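The F statistics above come from a repeated-measures analysis of self-report scores collected at baseline and 6, 12, 18, and 24 months. As a rough illustration only, the sketch below fits a random-intercept linear mixed model to simulated longitudinal scores; the data, variable names, and model specification are assumptions made for the example and are not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate longitudinal self-efficacy scores for 20 hypothetical participants
# measured at five time points (months after the first workshop).
rng = np.random.default_rng(0)
rows = []
for pid in range(1, 21):
    baseline = rng.normal(3.0, 0.4)          # participant-specific starting level
    for m in (0, 6, 12, 18, 24):
        rows.append({
            "participant_id": pid,
            "months": m,
            "self_efficacy": baseline + 0.03 * m + rng.normal(0, 0.2),
        })
data = pd.DataFrame(rows)

# Random-intercept mixed model: does self-efficacy change over time,
# accounting for repeated measures within participants?
model = smf.mixedlm("self_efficacy ~ months", data, groups=data["participant_id"])
result = model.fit()
print(result.summary())
```

A positive, statistically significant coefficient on "months" in such a model would correspond to the kind of increase over time reported in the abstract.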
Enhancing the uptake of systematic reviews of effects: what is the best format for health care managers and policy-makers? A mixed-methods study
Abstract
Background: Systematic reviews are infrequently used by health care managers (HCMs) and policy-makers (PMs) in decision-making. HCMs and PMs co-developed and tested novel systematic review of effects formats to increase their use.
Methods: A three-phased approach was used to evaluate the determinants of uptake of systematic reviews of effects and the usability of an innovative and a traditional systematic review of effects format. In phase 1, a survey and interviews were conducted with HCMs and PMs in four Canadian provinces to determine perceptions of a traditional systematic review format. In phase 2, systematic review format prototypes were created by HCMs and PMs via Conceptboard©. In phase 3, the prototypes underwent usability testing by HCMs and PMs.
Results: Two hundred two participants (80 HCMs, 122 PMs) completed the phase 1 survey. Respondents reported that inadequate format (Mdn = 4; IQR = 4; range = 1–7) and content (Mdn = 4; IQR = 3; range = 1–7) influenced their use of systematic reviews. Most respondents (76%; n = 136/180) reported that they would be more likely to use systematic reviews if the format were modified. Findings from 11 interviews (5 HCMs, 6 PMs) revealed that participants preferred systematic reviews of effects that were easy to access and read and that provided more information on intervention effectiveness and less information on review methodology. The mean System Usability Scale (SUS) score for the traditional format was 55.7 (standard deviation [SD] 17.2); a SUS score below 68 indicates below-average usability. In phase 2, 14 HCMs and 20 PMs co-created two prototypes, one for HCMs and one for PMs. HCMs preferred a traditional order of information (i.e., methods, study flow diagram, forest plots), whereas PMs preferred an alternative order (i.e., background and key messages on one page; methods and limitations on another). In phase 3, the prototypes underwent usability testing with 5 HCMs and 7 PMs; 11 of the 12 participants had co-created the prototypes (mean SUS score 86 [SD 9.3]).
Conclusions: HCMs and PMs co-created prototypes for systematic review of effects formats based on their needs. The prototypes will be compared to a traditional format in a randomized trial.
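The usability findings above are reported as System Usability Scale (SUS) scores against the conventional benchmark of 68. For readers unfamiliar with the scale, the sketch below shows the standard SUS scoring arithmetic (ten items rated 1 to 5; odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5). The example responses are hypothetical, not study data.

```python
from typing import Sequence

def sus_score(responses: Sequence[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 item responses.

    Standard SUS scoring: odd-numbered (positively worded) items contribute
    (response - 1), even-numbered (negatively worded) items contribute
    (5 - response); the summed contributions are multiplied by 2.5,
    giving a score on a 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Hypothetical respondent rating one of the review formats.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5; scores below 68 are below-average usability
```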