Efficacious, effective, and embedded interventions: Implementation research in infectious disease control
Background: Research in infectious disease control is heavily skewed towards high-end technology: the development of new drugs, vaccines, and clinical interventions. Often ignored is the evidence needed to inform the best strategies for embedding interventions into health systems and amongst populations. In this paper we undertake an analysis of the challenges in developing research for the sustainable implementation of disease control interventions. Results: We highlight the fundamental differences between the research paradigms associated with the development of technologies and interventions for disease control on the one hand, and the research paradigms required for enhancing the sustainable uptake of those very same interventions within communities on the other. We provide a definition of implementation research in an attempt to underscore its critical role, and we explore the multidisciplinary science needed to address the challenges in disease control. Conclusion: The greatest value for money in health research lies in the sustainable and effective implementation of already proven, efficacious solutions. The development of implementation research that can help provide solutions for achieving this is sorely needed.
Determinants of participation in a web-based health risk assessment and consequences for health promotion programs
Background: The health risk assessment (HRA) is a type of health promotion program frequently offered at the workplace. Insight into the underlying determinants of participation is needed to evaluate and implement these interventions. Objective: To analyze whether individual characteristics, including demographics, health behavior, self-rated health, and work-related factors, are associated with participation and nonparticipation in a Web-based HRA. Methods: Determinants of participation and nonparticipation were investigated in a cross-sectional study among individuals employed at five Dutch organizations. Multivariate logistic regression was performed to identify determinants of participation and nonparticipation in the HRA after controlling for organization and all other variables. Results: Of the 8431 employees who were invited, 31.9% (2686/8431) enrolled in the HRA. The online questionnaire was completed by 27.2% (1564/5745) of the nonparticipants. Determinants of participation were some periods of stress at home or work in the preceding year (OR 1.62, 95% CI 1.08-2.42), a decreasing number of weekdays on which at least 30 minutes were spent on moderate to vigorous physical activity (OR per day of physical activity 0.84, 95% CI 0.79-0.90), and increasing alcohol consumption. Determinants of nonparticipation were less-than-positive self-rated health (poor/very poor vs very good, OR 0.25, 95% CI 0.08-0.81) and tobacco use (at least weekly vs none, OR 0.65, 95% CI 0.46-0.90). Conclusions: This study showed that with regard to isolated health behaviors (insufficient physical activity, excess alcohol consumption, and stress), those who could benefit most from the HRA were more likely to participate. However, tobacco users and those who rated their health as less than positive were less likely to participate.
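For readers unfamiliar with how such figures are produced, the sketch below shows how odds ratios and 95% confidence intervals fall out of a multivariate logistic regression as exponentiated coefficients. The data, variable names, and effect sizes are entirely hypothetical and are not the study's dataset.

```python
# Illustrative only: hypothetical data, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "stress_past_year": rng.integers(0, 2, n),        # 1 = some periods of stress at home or work
    "days_physical_activity": rng.integers(0, 8, n),  # weekdays with >= 30 min of activity
    "weekly_tobacco_use": rng.integers(0, 2, n),
})
# Simulate a participation outcome loosely following the reported directions of effect.
logit = 0.4 * df.stress_past_year - 0.15 * df.days_physical_activity - 0.4 * df.weekly_tobacco_use - 0.5
df["participated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["stress_past_year", "days_physical_activity", "weekly_tobacco_use"]])
fit = sm.Logit(df["participated"], X).fit(disp=False)

# Odds ratios and 95% confidence intervals are the exponentiated coefficients and bounds.
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```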
Process evaluation for complex interventions in primary care: understanding trials using the normalization process model
Background: The Normalization Process Model is a conceptual tool intended to assist in understanding the factors that affect implementation processes in clinical trials and other evaluations of complex interventions. It focuses on the ways that the implementation of complex interventions is shaped by problems of workability and integration. Method: In this paper the model is applied to two different complex trials: (i) the delivery of problem-solving therapies for psychosocial distress, and (ii) the delivery of nurse-led clinics for heart failure treatment in primary care. Results: Application of the model shows how process evaluations need to focus on more than the immediate contexts in which trial outcomes are generated. Problems relating to intervention workability and integration also need to be understood. The model may be used effectively to explain the implementation process in trials of complex interventions. Conclusion: The model invites evaluators to attend equally to how a complex intervention interacts with existing patterns of service organization, professional practice, and professional-patient interaction. The justification for this may be found in the abundance of reports of clinical effectiveness for interventions that have little hope of being implemented in real healthcare settings.
Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science
Background: Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework for Implementation Research (CFIR), which offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts. Methods: We used a snowball sampling approach to identify published theories, which were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts. Results: The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Eight constructs were identified related to the intervention (e.g., evidence strength and quality), four related to the outer setting (e.g., patient needs and resources), 12 related to the inner setting (e.g., culture, leadership engagement), five related to individual characteristics, and eight related to process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct. Conclusion: The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings.
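The five domains lend themselves to a simple typology that evaluators can use to tag qualitative notes. The sketch below is one hypothetical encoding; the construct lists are abbreviated to the examples given in the abstract, not the full CFIR codebook, and the example note is invented.

```python
# Hypothetical, abbreviated encoding of the CFIR typology for tagging formative-evaluation notes.
from dataclasses import dataclass
from typing import Optional

# Domain -> example constructs mentioned in the abstract (not the full CFIR construct list).
CFIR_DOMAINS = {
    "intervention characteristics": ["evidence strength and quality"],
    "outer setting": ["patient needs and resources"],
    "inner setting": ["culture", "leadership engagement"],
    "characteristics of individuals": [],
    "process": ["plan", "evaluate and reflect"],
}

@dataclass
class CodedNote:
    """One formative-evaluation note tagged with a CFIR domain and, optionally, a construct."""
    text: str
    domain: str
    construct: Optional[str] = None

    def __post_init__(self) -> None:
        if self.domain not in CFIR_DOMAINS:
            raise ValueError(f"Unknown CFIR domain: {self.domain!r}")

note = CodedNote(
    text="Clinic leaders championed the new workflow at weekly huddles.",
    domain="inner setting",
    construct="leadership engagement",
)
print(f"{note.domain} -> {note.construct}")
```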
Development and pilot of an internationally standardized measure of cardiovascular risk management in European primary care
Background: Primary care can play an important role in providing cardiovascular risk management for patients with established cardiovascular disease (CVD), patients with a known high risk of developing CVD, and potentially for individuals with a low risk of developing CVD but who have unhealthy lifestyles. To describe and compare cardiovascular risk management, internationally valid quality indicators and standardized measures are needed. As part of a large project in nine European countries (EPA-Cardio), we have developed and tested a set of standardized measures linked to previously developed quality indicators. Methods: A structured stepwise procedure was followed to develop the measures. First, the research team allocated 106 validated quality indicators to one of the three target populations (established CVD, at high risk, at low risk) and to different data-collection methods (data abstraction from the medical records, a patient survey, or an interview with the lead practice GP/a practice survey). Second, we selected a number of other validated measures to enrich the assessment. A pilot study was performed to test feasibility. Finally, we revised the measures based on the findings. Results: The EPA-Cardio measures consisted of abstraction forms for the medical-record data of established Coronary Heart Disease (CHD) patients and high-risk groups, a patient questionnaire for each of the three groups, an interview questionnaire for the lead GP, and a questionnaire for practice teams. The measures were feasible and accepted by general practices in the different countries. Conclusions: An internationally standardized measure of cardiovascular risk management, linked to validated quality indicators and tested for feasibility in general practice, is now available. Careful development and pilot testing of the measures are crucial in international studies of the quality of healthcare.
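The allocation step described in the methods amounts to mapping each indicator to one target population and one data-collection method. A minimal sketch of one way this could be represented follows; the indicator codes and descriptions are invented for illustration and are not the EPA-Cardio indicator set.

```python
# Hypothetical sketch of the allocation step: each quality indicator is assigned to one
# target population and one data-collection method; identifiers and descriptions are invented.
from dataclasses import dataclass

TARGET_POPULATIONS = ("established CVD", "high risk", "low risk")
DATA_SOURCES = ("medical record abstraction", "patient survey", "GP interview / practice survey")

@dataclass(frozen=True)
class QualityIndicator:
    code: str          # hypothetical identifier
    description: str
    population: str
    source: str

    def __post_init__(self) -> None:
        # Guard against allocations outside the three populations / three data sources.
        assert self.population in TARGET_POPULATIONS and self.source in DATA_SOURCES

indicators = [
    QualityIndicator("QI-001", "Blood pressure recorded in the last 12 months",
                     "established CVD", "medical record abstraction"),
    QualityIndicator("QI-002", "Lifestyle advice offered", "low risk", "patient survey"),
]

# Group the indicators into the abstraction forms and questionnaires described above.
instruments: dict[tuple[str, str], list[str]] = {}
for qi in indicators:
    instruments.setdefault((qi.population, qi.source), []).append(qi.code)
print(instruments)
```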
Observational measure of implementation progress in community-based settings: the Stages of Implementation Completion (SIC)
Background: An increasingly large body of research is focused on designing and testing strategies to improve knowledge about how to embed evidence-based programs (EBPs) into community settings. Development of strategies for overcoming barriers and increasing the effectiveness and pace of implementation is a high priority. Yet there are few research tools that measure the implementation process itself. The Stages of Implementation Completion (SIC) is an observation-based measure that is used to track the time to achievement of key implementation milestones for an EBP being implemented in 51 counties across 53 sites (two counties have two sites) in two states in the United States. Methods: The SIC was developed in the context of a randomized trial comparing the effectiveness of two implementation strategies: community development teams (experimental condition) and individualized implementation (control condition). Fifty-one counties were randomized to the experimental or control condition for implementation of Multidimensional Treatment Foster Care (MTFC), an alternative to group/residential care placement for children and adolescents. Progress through eight implementation stages was tracked by noting the dates of completion of specific activities in each stage. Activities were tailored to the strategies for implementing the specific EBP. Results: Preliminary data showed that several counties ceased progress during pre-implementation and that there was a high degree of variability among sites in the duration scores per stage and in the proportion of activities that were completed in each stage. Progress through activities and stages for three example counties is shown. Conclusions: By assessing the attainment time of each stage and the proportion of activities completed, the SIC measure can be used to track and compare the effectiveness of various implementation strategies. Data from the SIC will provide sites with relevant information on the time and resources needed to implement MTFC during various phases of implementation. With some modifications, the SIC could be appropriate for use in evaluating implementation strategies in head-to-head randomized implementation trials and as a monitoring tool for rolling out other EBPs.
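The two quantities the SIC derives per stage, duration and the proportion of activities completed, are straightforward to compute once activity completion dates are recorded. The sketch below uses invented stage names, activities, and dates rather than actual SIC data or its official stage labels.

```python
# Hypothetical SIC-style bookkeeping: stage duration and proportion of activities completed.
from datetime import date

# Each stage maps activity names to a completion date, or None if the activity was never completed.
site_record = {
    "Stage 1 (engagement)": {"initial contact": date(2023, 1, 10), "agreement to consider": date(2023, 2, 1)},
    "Stage 2 (feasibility)": {"stakeholder meeting": date(2023, 3, 5), "fiscal review": None},
}

def stage_summary(activities):
    """Return (duration in days across completed activities, proportion of activities completed)."""
    completed = [d for d in activities.values() if d is not None]
    proportion = len(completed) / len(activities)
    duration = (max(completed) - min(completed)).days if completed else None
    return duration, proportion

for stage, activities in site_record.items():
    duration, proportion = stage_summary(activities)
    print(f"{stage}: duration={duration} days, completed={proportion:.0%}")
```

Comparing these per-stage summaries across sites, as the trial does across conditions, is what makes variability in pace and drop-off during pre-implementation visible.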
Designing high-quality implementation research: development, application, feasibility and preliminary evaluation of the implementation science research development (ImpRes) tool and guide
Background: Designing implementation research can be a complex and daunting task, especially for applied health researchers who have not received specialist training in implementation science. We developed the Implementation Science Research Development (ImpRes) tool and supplementary guide to address this challenge and provide researchers with a systematic approach to designing implementation research. Methods: A multi-method and multi-stage approach was employed. An international, multidisciplinary expert panel engaged in an iterative brainstorming and consensus-building process to generate core domains of the ImpRes tool, representing core implementation science principles and concepts that researchers should consider when designing implementation research. Simultaneously, an iterative process of reviewing the literature and expert input informed the development and content of the tool. Once consensus had been reached, specialist expert input was sought on involving and engaging patients/service users and on economic evaluation. ImpRes was then applied to 15 implementation and improvement science projects across the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) South London, a research organisation in London, UK. Researchers who applied the ImpRes tool completed an 11-item questionnaire evaluating its structure, content and usefulness. Results: Consensus was reached on ten implementation science domains to be considered when designing implementation research. These include implementation theories, frameworks and models; determinants of implementation; implementation strategies; implementation outcomes; and unintended consequences. Researchers who used the ImpRes tool found it useful for identifying project areas where implementation science is lacking (median 5/5, IQR 4–5) and for improving the quality of implementation research (median 4/5, IQR 4–5), and agreed that it contained the key components that should be considered when designing implementation research (median 4/5, IQR 4–4). Qualitative feedback from researchers who applied the ImpRes tool indicated that a supplementary guide was needed to facilitate use of the tool. Conclusions: We have developed a feasible and acceptable tool, and supplementary guide, to facilitate consideration and incorporation of core principles and concepts of implementation science in applied health implementation research. Future research is needed to establish whether application of the tool and guide has an effect on the quality of implementation research.
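The questionnaire summaries reported above (e.g., median 5/5, IQR 4–5) are ordinary median and interquartile-range statistics over Likert-scale responses. A minimal sketch with made-up responses, not the study's data:

```python
# Hypothetical 1-5 Likert responses to one ImpRes evaluation item; values are invented.
import numpy as np

responses = np.array([5, 4, 5, 5, 3, 4, 5, 5, 4, 5, 5])

median = np.median(responses)
q1, q3 = np.percentile(responses, [25, 75])  # interquartile range bounds
print(f"median {median:.0f}/5, IQR {q1:.0f}-{q3:.0f}")
```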
Asthma self-assessment in a Medicaid population
Background: Self-assessment of symptoms by patients with chronic conditions is an important element of disease management. A recent study in a commercially insured population found that patients who received automated telephone calls for asthma self-assessment felt they benefitted from the calls. Few studies have evaluated the effectiveness of disease self-assessment in Medicaid populations. The goals of this study were to: (1) assess the feasibility of asthma self-assessment in a population predominantly insured by Medicaid, (2) study whether adding a gift card incentive increased completion of the self-assessment survey, and (3) evaluate how the self-assessment affected processes and outcomes of care. Methods: We studied adults and children aged 4 years and older who were insured by a Medicaid-focused managed care organization (MCO) in a pre- and post-intervention study. During the pre-incentive period, patients whose computerized utilization data met specific criteria for problematic asthma control were mailed the Asthma Control Test (ACT), a self-assessment survey, and asked to return it to the MCO. During the intervention period, patients were offered a $20 gift card for returning the completed ACT to the MCO. To evaluate clinical outcomes, we used computerized claims data to assess the number of hospitalizations and emergency department visits in the 3 months after receiving the ACT. To evaluate whether the self-management intervention improved processes of care, we conducted telephone interviews with patients who did or did not return the ACT by mail. Results: During the pre-incentive period, 1183 patients were identified as having problems with asthma control; 25 (2.0%) of these returned the ACT to the MCO. In contrast, during the incentive period, 1612 patients were identified as having problems with asthma control and 87 (5.4%) of these returned the ACT to the MCO (p < 0.0001). Of all 95 ACTs that were returned, 87% had a score of 19 or less, which suggested poor asthma control. During the 3 months after they received the ACT, patients who completed it had similar numbers of outpatient visits, emergency department visits, and hospitalizations for asthma as patients who did not complete the ACT. We completed interviews with 95 patients, including 28 who had completed the ACT and 67 who had not. Based on an ACT administered at the time of the interview, patients who had previously returned the ACT to the MCO had asthma control similar to those who had not (mean scores of 14.2 vs. 14.6, p = 0.70). Patients had similar rates of contacting their providers within the past 2 months whether they had completed the mailed ACT or not (71% vs. 76%, p = 0.57). Conclusion: Mailing asthma self-assessment surveys to patients with poorly controlled asthma was not associated with better asthma-associated outcomes or processes of care in the Medicaid population studied. Adding a gift card incentive did not meaningfully increase response rates. Asthma disease management programs for Medicaid populations will most likely need to involve alternative strategies for engaging patients and their providers in managing their conditions.
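The pre-incentive versus incentive response rates (25 of 1183 vs 87 of 1612) can be compared with a standard two-proportion test. The abstract does not state which test produced the reported p-value, so the chi-square test below is only one reasonable choice, applied to the published counts for illustration.

```python
# Illustrative re-analysis sketch using the counts reported in the abstract.
from scipy.stats import chi2_contingency

# Rows: pre-incentive vs incentive period; columns: returned ACT vs did not return.
table = [[25, 1183 - 25],
         [87, 1612 - 87]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square={chi2:.1f}, p={p:.2e}")
```

A small p-value here indicates the rate difference is unlikely to be chance, which is compatible with the abstract's conclusion that the absolute increase was nonetheless too small to be practically meaningful.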
Measuring persistence of implementation: QUERI Series
As more quality improvement programs are implemented to achieve gains in performance, the need to evaluate their lasting effects has become increasingly evident. However, such long-term follow-up evaluations are scarce in healthcare implementation science, being largely relegated to the "need for further research" section of most project write-ups. This article explores the variety of conceptualizations of implementation sustainability, as well as the behavioral and organizational factors that influence the maintenance of gains. It highlights the finer points of design considerations and draws on our own experiences with measuring sustainability, framed within the rich theoretical and empirical contributions of others. In addition, recommendations are made for designing sustainability analyses.