
    Going beyond the individual: How state-level characteristics relate to HPV vaccine rates in the United States

    Abstract Background The human papillomavirus (HPV) vaccine is an underutilized cancer control practice in the United States. Although individual contextual factors are known to impact HPV vaccine coverage rates, the impact of macro-level elements is still unclear. The aim of this analysis was to use HPV vaccination rates to examine the underuse of an evidence-based cancer control intervention and to identify broader-level correlates influencing completion rates. Methods A comprehensive database was developed using individual-level data from the National Immunization Survey (NIS)-Teen (2016) and state-level data collected from publicly available sources to analyze HPV vaccine completion. Multi-level logistic models were fit to identify significant correlates. Level-1 (individual) and level-2 (state) correlates were fitted to a random intercept model. Deviance and AIC were used to assess model fit, and sampling weights were applied. Results The analysis included 20,495 adolescents from 50 U.S. states and the District of Columbia. Teen age, gender, race/ethnicity, and maternal education were significant individual predictors of HPV completion rates. Significant state-level predictors included sex education policy, religiosity, and HPV vaccine mandate. States with the lowest HPV coverage rates were found to be conservative and highly religious. Little variation in vaccine exemptions and enacted sex and abstinence education policies was observed between states with high and low HPV vaccine coverage, suggesting various contextual and situational factors impact HPV vaccine completion rates. Conclusions Given that gender, religiosity, political ideology, and education policies are predictors of HPV vaccine completion, the interaction and underlying mechanisms of these factors can be used to address the underutilization of the HPV vaccine.
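
    A minimal sketch of a random-intercept (multilevel) logistic model of the kind described in this abstract, not the authors' code. The DataFrame name and column names are hypothetical, and the paper's sampling weights and deviance/AIC comparisons are omitted.

```python
# Sketch only: hypothetical data file and column names (completed, age, sex,
# race_eth, mom_educ, state); survey weights and AIC comparisons omitted.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

nis_teen = pd.read_csv("nis_teen_2016.csv")  # hypothetical extract of NIS-Teen 2016

model = BinomialBayesMixedGLM.from_formula(
    "completed ~ age + C(sex) + C(race_eth) + C(mom_educ)",  # level-1 (individual) predictors
    vc_formulas={"state": "0 + C(state)"},                   # level-2 random intercept by state
    data=nis_teen,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())
```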

    Rugged landscapes: Complexity and implementation science

    BACKGROUND: Mis-implementation, defined as the failure to successfully implement and continue evidence-based programs, is widespread in public health practice. Yet the causes of this phenomenon are poorly understood. METHODS: We develop an agent-based computational model to explore how complexity hinders effective implementation. The model is adapted from the evolutionary biology literature and incorporates three distinct complexities faced in public health practice: dimensionality, ruggedness, and context-specificity. Agents in the model attempt to solve problems using one of three approaches: Plan-Do-Study-Act (PDSA), evidence-based interventions (EBIs), and evidence-based decision-making (EBDM). RESULTS: The model demonstrates that the most effective approach to implementation and quality improvement depends on the underlying nature of the problem. Rugged problems are best approached with a combination of PDSA and EBI. Context-specific problems are best approached with EBDM. CONCLUSIONS: The model's results emphasize the importance of adapting one's approach to the characteristics of the problem at hand. Evidence-based decision-making (EBDM), which combines evidence from multiple independent sources with on-the-ground local knowledge, is a particularly potent strategy for implementation and quality improvement.
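
    Ruggedness of this kind is often modeled in evolutionary biology with NK-style fitness landscapes. Below is a hypothetical, simplified sketch (not the authors' model) of such a landscape with a PDSA-like local-search agent; N, K, the step count, and seeds are illustrative assumptions. As K grows, the landscape gains local optima and incremental trial-and-error alone gets stuck more easily.

```python
# Illustrative NK-style landscape: N practice dimensions, K interactions per
# dimension (ruggedness). Not the authors' code.
import itertools
import random

def make_nk_landscape(N, K, seed=0):
    rng = random.Random(seed)
    # Each dimension interacts with K randomly chosen other dimensions, and each
    # local configuration of those bits gets a random fitness contribution.
    neighbors = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(solution):
        total = 0.0
        for i in range(N):
            key = (solution[i],) + tuple(solution[j] for j in neighbors[i])
            total += tables[i][key]
        return total / N

    return fitness

def pdsa_local_search(fitness, N, steps=500, seed=1):
    # PDSA-like incremental improvement: change one practice at a time and keep
    # the change only if measured performance improves.
    rng = random.Random(seed)
    solution = [rng.randint(0, 1) for _ in range(N)]
    best = fitness(solution)
    for _ in range(steps):
        i = rng.randrange(N)
        candidate = solution[:]
        candidate[i] = 1 - candidate[i]
        score = fitness(candidate)
        if score > best:
            solution, best = candidate, score
    return best

if __name__ == "__main__":
    for K in (0, 4, 8):  # higher K means more local optima to get stuck on
        f = make_nk_landscape(N=12, K=K)
        print(f"K={K}: PDSA-style local search reaches fitness {pdsa_local_search(f, 12):.3f}")
```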

    Sustainability of evidence-based healthcare: Research agenda, methodological advances, and infrastructure support

    BACKGROUND: Little is known about how well or under what conditions health innovations are sustained and their gains maintained once they are put into practice. Implementation science typically focuses on uptake by early adopters of one healthcare innovation at a time. The later-stage challenges of scaling up and sustaining evidence-supported interventions receive too little attention. This project identifies the challenges associated with sustainability research and generates recommendations for accelerating and strengthening this work. METHODS: A multi-method, multi-stage approach was used: (1) identifying and recruiting experts in sustainability as participants, (2) conducting research on sustainability using concept mapping, (3) action planning during an intensive working conference of sustainability experts to expand on the quantitative concept mapping results, and (4) consolidating results into a set of recommendations for research, methodological advances, and infrastructure building to advance understanding of sustainability. Participants comprised researchers, funders, and leaders in health, mental health, and public health with a shared interest in the sustainability of evidence-based health care. RESULTS: Prompted to identify important issues for sustainability research, participants generated 91 distinct statements, for which a concept mapping process produced 11 conceptually distinct clusters. During the conference, participants built upon the concept mapping clusters to generate recommendations for sustainability research. The recommendations fell into three domains: (1) pursue high-priority research questions as a unified agenda on sustainability; (2) advance methods for sustainability research; and (3) advance infrastructure to support sustainability research. CONCLUSIONS: Implementation science needs to pursue later-stage translation research questions required for population impact. Priorities include conceptual consistency and operational clarity for measuring sustainability, developing evidence about the value of sustaining interventions over time, identifying correlates of sustainability along with strategies for sustaining evidence-supported interventions, advancing the theoretical base and research designs for sustainability research, and advancing the workforce capacity, research culture, and funding mechanisms for this important work.
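
    The concept mapping used in steps (2) and (3), and in later studies in this list, has a quantitative core: each participant's sort of the brainstormed statements is aggregated into a statement-by-statement similarity matrix, which is then scaled and clustered into conceptual groups. The sketch below uses hypothetical statements and sorts (not the study's data) and shows only the aggregation and a simple hierarchical clustering step; the multidimensional scaling usually performed before clustering is omitted.

```python
# Sketch of the concept-mapping aggregation step; statements and sorts are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

statements = [
    "define sustainability consistently",
    "measure fidelity over time",
    "secure long-term funding",
    "build partnerships that outlast grants",
]

# Each participant's sort: piles of statement indices judged to belong together.
sorts = [
    [[0, 1], [2, 3]],
    [[0], [1], [2, 3]],
    [[0, 1], [2], [3]],
]

# Aggregate the sorts into a co-occurrence (similarity) matrix.
n = len(statements)
co_occurrence = np.zeros((n, n))
for sort in sorts:
    for pile in sort:
        for i in pile:
            for j in pile:
                co_occurrence[i, j] += 1

# Convert similarity to distance and cluster hierarchically.
distance = len(sorts) - co_occurrence
np.fill_diagonal(distance, 0)
condensed = distance[np.triu_indices(n, k=1)]  # condensed form expected by linkage()
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
for cluster, statement in sorted(zip(labels, statements)):
    print(cluster, statement)
```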

    Dissemination and implementation science training needs: Insights from practitioners and researchers

    INTRODUCTION: Dissemination and implementation research training has great potential to improve the impact and reach of health-related research; however, research training needs from the end-user perspective are unknown. This paper identifies and prioritizes dissemination and implementation research training needs. METHODS: A diverse sample of researchers, practitioners, and policymakers was invited to participate in Concept Mapping in 2014–2015. Phase 1 (Brainstorming) gathered participants' responses to the prompt: To improve the impact of research evidence in practice and policy settings, a skill in which researchers need more training is… The resulting statement list was edited and included in subsequent phases. Phase 2 (Sorting) asked participants to sort each statement into conceptual piles. In Phase 3 (Rating), participants rated the difficulty and importance of incorporating each statement into a training curriculum. A multidisciplinary team synthesized and interpreted the results in 2015–2016. RESULTS: During Brainstorming, 60 researchers and 60 practitioners/policymakers contributed 274 unique statements. Twenty-nine researchers and 16 practitioners completed sorting and rating. Nine concept clusters were identified, including Communicating Research Findings, Improve Practice Partnerships, Make Research More Relevant, Strengthen Communication Skills, Develop Research Methods and Measures, Consider and Enhance Fit, Build Capacity for Research, and Understand Multilevel Context. Though researchers and practitioners had high agreement about importance (r = 0.93) and difficulty (r = 0.80), ratings differed for several clusters (e.g., Build Capacity for Research). CONCLUSIONS: Including researcher and practitioner perspectives in competency development for dissemination and implementation research identifies skills and capacities needed to conduct and communicate contextualized, meaningful, and relevant research.
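
    The agreement figures reported above (importance r = 0.93, difficulty r = 0.80) are Pearson correlations between the two groups' ratings. A minimal sketch of that comparison, using hypothetical cluster-level mean ratings rather than the study's data:

```python
# Hypothetical mean importance ratings for nine concept clusters, one value per cluster.
from scipy.stats import pearsonr

researcher_importance   = [4.5, 4.1, 3.8, 4.3, 3.9, 4.0, 3.6, 4.2, 3.7]
practitioner_importance = [4.4, 4.0, 3.9, 4.2, 3.8, 4.1, 3.3, 4.3, 3.6]

r, p = pearsonr(researcher_importance, practitioner_importance)
print(f"importance agreement across clusters: r = {r:.2f} (p = {p:.3f})")
```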

    Use and awareness of the Community Guide in state and local health department chronic disease programs

    INTRODUCTION: The Community Guide (Guide) is a user-friendly, systematic review system that provides information on evidence-based interventions (EBIs) in public health practice. Little is known about what predicts Guide awareness and use in state health departments (SHDs) and local health departments (LHDs). METHODS: We pooled data from 3 surveys (administered in 2016, 2017, and 2018) to employees in chronic disease programs at SHDs and LHDs. Participants (n = 1,039) represented all 50 states. The surveys asked about department practices and individual, organizational, and external factors related to decisions about EBIs. We used χ2 tests to examine associations between these factors and Guide awareness and use. RESULTS: Eighty-one percent (n = 498) of SHD and 54% (n = 198) of LHD respondents reported their agency uses the Guide. Additionally, 13% of SHD participants reported not being aware of the Guide. Significant relationships were found between reporting using the Guide and academic collaboration, population size, rated importance of forming partnerships, and accreditation. CONCLUSION: Awareness and use of the Guide in LHD and SHD chronic disease programs is widespread. Awareness of the Guide can be vital to implementation practice because it enhances implementation of EBIs. However, awareness of the Guide alone is likely not enough for health departments to implement EBIs. Changes at the organizational level, including sharing information about the Guide and providing training on how to best use it, may increase its awareness and use.
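
    As a concrete illustration of the kind of χ2 test of association described above, here is a minimal sketch with hypothetical counts (not the pooled survey data), relating accreditation status to reported Guide use:

```python
# Hypothetical 2x2 table. Rows: accredited vs. not accredited agencies;
# columns: reports using the Guide vs. does not.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[310, 95],
                  [188, 146]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```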

    Program adaptation by health departments

    INTRODUCTION: The dissemination of evidence-based interventions (i.e., programs, practices, and policies) is a core function of US state health departments (SHDs). However, interventions are originally designed and tested with a specific population and context. Hence, adapting an intervention to real-world circumstances and the population's needs can increase the likelihood that the implemented intervention achieves the expected health outcomes for the target population. This study identified how SHD employees decide to adapt public health programs and what influences decisions on how to adapt them. MATERIALS AND METHODS: SHD employees (…) RESULTS: Data, outcomes, and health department evaluations influenced decisions to adapt a program (pre-adaptation), and reasons to adapt a program included organizational and sociopolitical contextual factors. SHD middle-level managers, program managers and staff, and local agencies were involved in the decisions to adapt the programs. Finally, the goals for adapting a program included enhancing effectiveness/outcomes, reach, and satisfaction with the program; funding; and partner engagement. After SHD employees decided to adapt a program, data and evidence guided the changes. Program staff and evaluators were engaged in the adaptation process. Program managers consulted partners to gather ideas on how best to adapt a program based on partners' experiences implementing the program and obtaining community input. Lastly, program managers also received input on adapting content and context from coalition meetings and periodic technical assistance calls. DISCUSSION: The findings related to decisions to adapt public health programs provide practitioners with considerations for adapting them. Findings reaffirm the importance of promoting public health competencies in program evaluation and adaptation, as well as systematically documenting and evaluating the adaptation processes. In addition, the themes could be studied in future research as mechanisms, mediators, and moderators of implementation outcomes.

    Leading the way: Competencies of leadership to prevent mis-implementation of public health programs

    Public health agencies are increasingly concerned with ensuring that they are maximizing limited resources by delivering effective programs to enhance population-level health outcomes. Preventing mis-implementation (ending effective activities prematurely or continuing ineffective ones) is necessary to sustain public health efforts and the resources needed to improve health and well-being. The purpose of this paper is to identify the important qualities of leadership in preventing mis-implementation of public health programs. In 2019, 45 state health department chronic disease employees were interviewed via phone; interviews were audio-recorded and transcribed verbatim. Thematic analysis focused on items related to mis-implementation and the ways in which leadership was involved in continuing ineffective programs. Final themes were based on a Public Health Leadership Competency Framework. The following themes emerged from the interviews regarding the important leadership competencies to prevent mis-implementation: (1) leadership and communication; (2) collaborative leadership; (3) leadership to adapt programs; (4) leadership and organizational learning and development; and (5) political leadership. This first-of-its-kind study showed the close interrelationship between mis-implementation and leadership. Increased attention to public health leader competencies might help to reduce mis-implementation in public health practice and lead to more effective and efficient use of limited resources.

    "It's good to feel like you're doing something": A qualitative study examining state health department employees' views on why ineffective programs continue to be implemented in the USA

    BACKGROUND: Mis-implementation, the inappropriate continuation of programs or policies that are not evidence-based or the inappropriate termination of evidence-based programs and policies, can lead to the inefficient use of scarce resources in public health agencies and decrease the ability of these agencies to deliver effective programs and improve population health. Little is known about why mis-implementation occurs, which is needed to understand how to address it. This study sought to understand state health department practitioners' perspectives about what makes programs ineffective and the reasons why ineffective programs continue. METHODS: Eight state health departments (SHDs) were selected to participate in telephone-administered qualitative interviews about decision-making around ending or continuing programs. States were selected based on geographic representation and on their level of mis-implementation (low and high), categorized from our previous national survey. Forty-four SHD chronic disease staff participated in interviews, which were audio-recorded and transcribed verbatim. Transcripts were consensus coded, and themes were identified and summarized. This paper presents two sets of themes, related to (1) what makes a program ineffective and (2) why ineffective programs continue to be implemented, according to SHD staff. RESULTS: Participants considered programs ineffective if they were not evidence-based or did not fit the population well; could not be implemented well due to program constraints or a lack of staff time and resources; did not reach those who could most benefit from the program; or did not show the expected program outcomes through evaluation. Practitioners described several reasons why ineffective programs continued to be implemented, including concerns about damaging relationships with partner organizations, the presence of program champions, agency capacity, and funding restrictions. CONCLUSIONS: The continued implementation of ineffective programs occurs due to a number of interrelated organizational, relational, human resources, and economic factors. Efforts should focus on preventing mis-implementation, since it limits public health agencies' ability to conduct evidence-based public health, implement evidence-based programs effectively, and reduce the high burden of chronic diseases. The use of evidence-based decision-making in public health agencies and supporting the adaptation of programs to improve their fit may prevent mis-implementation. Future work should identify effective strategies to reduce mis-implementation, which can optimize public health practice and improve population health.

    Training scholars in dissemination and implementation research for cancer prevention and control: A mentored approach

    Abstract Background As the field of D&I (dissemination and implementation) science grows to meet the need for more effective and timely applications of research findings in routine practice, the demand for formalized training programs has increased concurrently. The Mentored Training for Dissemination and Implementation Research in Cancer (MT-DIRC) Program aims to build capacity in the cancer control D&I research workforce, especially among early-career researchers. This paper outlines the various components of the program and reports results of systematic evaluations to ascertain its effectiveness. Methods Essential features of the program include selection of early-career fellows (or more experienced investigators with a focus relevant to cancer control who are transitioning to a D&I research focus), a 5-day intensive training institute, ongoing peer and senior mentoring, mentored planning and work on a D&I research proposal or project, limited pilot funding, and training and ongoing improvement activities for mentors. The core faculty and staff members of the MT-DIRC program gathered baseline and ongoing evaluation data regarding D&I skill acquisition and mentoring competency through participant surveys and analyzed them through iterative collective reflection. Results A majority of fellows are female (79%) and assistant professors (55%); 59% are in allied health disciplines, and 48% focus on cancer prevention research. Forty-three D&I research competencies were assessed; all improved from baseline to 6 and 18 months. These effects were apparent across beginner, intermediate, and advanced initial D&I competency levels and across the competency domains. Mentoring competency was rated very highly by the fellows, higher than the mentors rated themselves. The importance of different mentoring activities, as rated by the fellows, was generally congruent with their satisfaction with those activities, with the exceptions of relatively greater satisfaction with the degree of emotional support and relatively lower initial satisfaction with skill building and opportunity. Conclusions These first years of MT-DIRC demonstrated the program's ability to attract, engage, and improve fellows' competencies and skills and to implement a multicomponent mentoring program that was well received. This account of the program can serve as a basis for potential replication and evolution of this model in training future D&I science researchers.

    Developing educational competencies for dissemination and implementation research training programs: An exploratory analysis using card sorts

    Abstract Background With demand increasing for dissemination and implementation (D&I) training programs in the USA and other countries, more structured, competency-based, and tested curricula are needed to guide training programs. There are many benefits to the use of competencies in practice-based education, such as the establishment of rigorous standards as well as providing additional metrics for development and growth. As the first aim of a D&I training grant, an exploratory study was conducted to establish a new set of D&I competencies to guide training in D&I research. Methods Based upon the existing D&I training literature, the leadership team compiled an initial list of competencies. The research team then engaged 16 additional colleagues in the area of D&I science to provide suggestions on the initial list. The competency list was then narrowed to 43 unique competencies following feedback elicited from these D&I researchers. Three hundred additional D&I researchers were then invited via email to complete a card sort in which the competencies were sorted into three categories of experience levels. Participants had first-hand experience with D&I or knowledge translation training programs. Participants reported their self-identified D&I expertise level as well as the country in which their home institution is located. A mean score was calculated for each competency based on these experience-level categorizations. From these mean scores, beginner-, intermediate-, and advanced-level tertiles were created for the competencies. Results The card sort request achieved a 41% response rate (n = 124). The list of 43 competencies was organized into four broad domains and sorted based on experience-level score. Eleven competencies were classified into the “Beginner” category, 27 into “Intermediate,” and 5 into “Advanced.” Conclusions Education and training developers can use this competency list to formalize future trainings in D&I research, create more evidence-informed curricula, and enable overall capacity building and accompanying metrics in the field of D&I training and research.
    http://deepblue.lib.umich.edu/bitstream/2027.42/113065/1/13012_2015_Article_304.pd
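
    To make the scoring step above concrete, here is a minimal sketch (hypothetical competencies and categorizations, not the study's data) of computing a mean experience-level score per competency and splitting the competencies into beginner/intermediate/advanced tertiles:

```python
# Hypothetical card-sort records: one experience-level categorization per
# participant per competency.
import pandas as pd

LEVEL_SCORE = {"beginner": 1, "intermediate": 2, "advanced": 3}

card_sorts = pd.DataFrame({
    "competency": [
        "define D&I terminology", "define D&I terminology",
        "select an implementation framework", "select an implementation framework",
        "design a hybrid effectiveness trial", "design a hybrid effectiveness trial",
    ],
    "level": ["beginner", "beginner", "beginner", "intermediate", "intermediate", "advanced"],
})

card_sorts["score"] = card_sorts["level"].map(LEVEL_SCORE)
mean_scores = card_sorts.groupby("competency")["score"].mean()

# Split competencies into three tertile groups by mean score.
tertiles = pd.qcut(mean_scores, q=3, labels=["Beginner", "Intermediate", "Advanced"])
print(pd.DataFrame({"mean_score": mean_scores, "tertile": tertiles}))
```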