
    The World Starts With Me: using intervention mapping for the systematic adaptation and transfer of school-based sexuality education from Uganda to Indonesia

    Evidence-based health promotion programmes, including HIV/AIDS prevention and sexuality education programmes, are often transferred to other cultures, priority groups and implementation settings. A key challenge in this process is identifying and retaining the core elements that underpin the programme’s effectiveness while making changes that enhance acceptance in the new context and for the new priority group. This paper describes the use of a systematic approach to programme adaptation, using a case study as an example. Intervention Mapping, a protocol for the development of evidence-based behaviour change interventions, was used to adapt the comprehensive school-based sexuality education programme ‘The World Starts With Me’. The programme was developed for a priority population in Uganda and adapted for Indonesian secondary school students. The approach helped to systematically address the complexity and challenges of programme adaptation and to find a balance between preserving essential programme elements (i.e. logic models) that may be crucial to the programme’s effectiveness, including key objectives and theoretical behaviour change methods, and adapting the programme to be acceptable to the new priority group and the programme implementers.

    Implementation fidelity of a nurse-led falls prevention program in acute hospitals during the 6-PACK trial

    Background: When tested in a randomized controlled trial (RCT) of 31,411 patients, the nurse-led 6-PACK falls prevention program did not reduce falls. Poor implementation fidelity (i.e., program not implemented as intended) may explain this result. Despite repeated calls for the examination of implementation fidelity as an essential component of evaluating interventions designed to improve the delivery of care, it has been neglected in prior falls prevention studies. This study examined implementation fidelity of the 6-PACK program during a large multi-site RCT. Methods: Based on the 6-PACK implementation framework and intervention description, implementation fidelity was examined by quantifying adherence to program components and organizational support. Adherence indicators were: 1) falls-risk tool completion; and for patients classified as high-risk, provision of 2) a ‘Falls alert’ sign; and 3) at least one additional 6-PACK intervention. Organizational support indicators were: 1) provision of resources (executive sponsorship, site clinical leaders and equipment); 2) implementation activities (modification of patient care plans; training; implementation tailoring; audits, reminders and feedback; and provision of data); and 3) program acceptability. Data were collected from daily bedside observation, medical records, resource utilization diaries and nurse surveys. Results: All seven intervention components were delivered on the 12 intervention wards. Program adherence data were collected from 103,398 observations and medical record audits. The falls-risk tool was completed each day for 75% of patients. Of the 38% of patients classified as high-risk, 79% had a ‘Falls alert’ sign and 63% were provided with at least one additional 6-PACK intervention, as recommended. All hospitals provided the recommended resources and undertook the nine outlined program implementation activities. Most of the nurses surveyed considered program components important for falls prevention. Conclusions: While implementation fidelity was variable across wards, overall it was found to be acceptable during the RCT. Implementation failure is unlikely to be a key factor for the observed lack of program effectiveness in the 6-PACK trial. Trial registration: The 6-PACK cluster RCT is registered with the Australian New Zealand Clinical Trials Registry, number ACTRN12611000332921 (29 March 2011).

    Constructing “Packages” of Evidence-Based Programs to Prevent Youth Violence: Processes and Illustrative Examples From the CDC’s Youth Violence Prevention Centers

    This paper describes the strategic efforts of six National Centers of Excellence in Youth Violence Prevention (YVPC), funded by the U.S. Centers for Disease Control and Prevention, to work in partnership with local communities to create comprehensive evidence-based program packages to prevent youth violence. Key components of a comprehensive evidence-based approach are defined, and examples are provided from a variety of community settings (rural and urban) across the nation that illustrate attempts to respond to the unique needs of the communities while maintaining a focus on evidence-based programming and practices. At each YVPC site, the process of selecting prevention and intervention programs addressed the following factors: (1) community capacity, (2) researcher and community roles in selecting programs, (3) use of data in decision-making related to program selection, and (4) reach, resources, and dosage. We describe systemic barriers to these efforts, lessons learned, and opportunities for policy and practice. Although adopting an evidence-based comprehensive approach requires significant upfront resources and investment, it offers great potential for preventing youth violence and promoting the successful development of children, families, and communities.

    Promoting learning from null or negative results in prevention science trials

    There can be a tendency for investigators to disregard or explain away null or negative results in prevention science trials. Examples include not publicizing findings, conducting spurious subgroup analyses, or attributing the outcome post hoc to real or perceived weaknesses in trial design or intervention implementation. This is unhelpful for several reasons, not least that it skews the evidence base, contributes to research "waste", undermines respect for science, and stifles creativity in intervention development. In this paper, we identify possible policy and practice responses when interventions have null (ineffective) or negative (harmful) results, and argue that these are influenced by: the intervention itself (e.g., stage of gestation, perceived importance); trial design, conduct, and results (e.g., pattern of null/negative effects, internal and external validity); context (e.g., wider evidence base, state of policy); and individual perspectives and interests (e.g., stake in the intervention). We advance several strategies to promote more informative null or negative effect trials and enable learning from such results, focusing on changes to culture, process, intervention design, trial design, and environment. Funding: National Institute for Health Research (NIHR).

    Addressing Core Challenges for the Next Generation of Type 2 Translation Research and Systems: The Translation Science to Population Impact (TSci Impact) Framework
