
    Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science

    Abstract
    Background: Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework for Implementation Research (CFIR), which offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts.
    Methods: We used a snowball sampling approach to identify published theories, which were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts.
    Results: The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Eight constructs were identified for the intervention (e.g., evidence strength and quality), four for the outer setting (e.g., patient needs and resources), 12 for the inner setting (e.g., culture, leadership engagement), five for individual characteristics, and eight for the process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct.
    Conclusion: The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings.
    Full text:
    http://deepblue.lib.umich.edu/bitstream/2027.42/78272/1/1748-5908-4-50.xml
    http://deepblue.lib.umich.edu/bitstream/2027.42/78272/2/1748-5908-4-50-S1.PDF
    http://deepblue.lib.umich.edu/bitstream/2027.42/78272/3/1748-5908-4-50-S3.PDF
    http://deepblue.lib.umich.edu/bitstream/2027.42/78272/4/1748-5908-4-50-S4.PDF
    http://deepblue.lib.umich.edu/bitstream/2027.42/78272/5/1748-5908-4-50.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/78272/6/1748-5908-4-50-S2.PDF
    Peer reviewed
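    The domain structure summarized above lends itself to a simple coding scheme for formative-evaluation notes. The sketch below is purely illustrative: the construct lists are only the examples named in the abstract (not the framework's full set), and the note data and the tally_notes_by_domain helper are hypothetical.

```python
# Illustrative sketch only: tallying formative-evaluation notes against the
# five CFIR domains. The example constructs are the ones named in the
# abstract; the published framework defines many more per domain.
from collections import Counter

CFIR_DOMAINS = {
    "intervention characteristics": ["evidence strength and quality"],
    "outer setting": ["patient needs and resources"],
    "inner setting": ["culture", "leadership engagement"],
    "characteristics of individuals": [],  # five constructs in the framework
    "process": ["plan", "evaluate and reflect"],
}

def tally_notes_by_domain(coded_notes):
    """Count coded notes per CFIR domain.

    coded_notes: iterable of (note_text, domain) pairs assigned by a coder.
    Unknown domains are ignored rather than raising, since coding schemes
    are often refined iteratively.
    """
    counts = Counter(domain for _, domain in coded_notes if domain in CFIR_DOMAINS)
    return {domain: counts.get(domain, 0) for domain in CFIR_DOMAINS}

if __name__ == "__main__":
    notes = [
        ("Clinic leadership visibly championed the rollout", "inner setting"),
        ("Patients requested evening appointment slots", "outer setting"),
    ]
    print(tally_notes_by_domain(notes))
```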

    A Conceptual Framework for Studying the Sources of Variation in Program Effects

    Evaluations of public programs in many fields reveal that (1) different types of programs (or different versions of the same program) vary in their effectiveness, (2) a program that is effective for one group of people might not be effective for other groups of people, and (3) a program that is effective in one set of circumstances may not be effective in other circumstances. This paper presents a conceptual framework for research on such variation in program effects and the sources of this variation. The framework is intended to help researchers -- both those who focus mainly on studying program implementation and those who focus mainly on estimating program effects -- see how their respective pieces fit together in a way that helps to identify the factors that explain variation in program effects, and thereby to support more systematic data collection on those factors. The ultimate goal of the framework is to enable researchers to offer better guidance to policymakers and program operators on the conditions and practices that are associated with larger and more positive effects.
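    As a hedged illustration of the kind of variation the framework targets, the sketch below fits a simple regression in which a hypothetical program effect is allowed to differ across sites by interacting the treatment indicator with a site characteristic. All variable names (site_support, treated, outcome) and the simulated data are invented for the example; they are not taken from the paper.

```python
# Illustrative only: letting a program effect vary with a site characteristic
# via a treatment-by-moderator interaction. Data and column names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
site_support = rng.uniform(0, 1, n)   # hypothetical implementation-support score
treated = rng.integers(0, 2, n)       # 1 = program group, 0 = comparison group
# Simulated truth: the program effect grows with site support (0.2 + 1.0 * support).
outcome = 0.5 + treated * (0.2 + 1.0 * site_support) + rng.normal(0, 1, n)

df = pd.DataFrame({"outcome": outcome, "treated": treated, "site_support": site_support})

# The treated:site_support coefficient captures how the estimated program
# effect changes as the site characteristic changes.
model = smf.ols("outcome ~ treated * site_support", data=df).fit()
print(model.params)
```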

    The organizational implications of medical imaging in the context of Malaysian hospitals

    This research investigated the implementation and use of medical imaging in the context of Malaysian hospitals. In this report, medical imaging refers to PACS, RIS/HIS and imaging modalities linked through a computer network. The study examined how the internal context of a hospital and its external context together influenced the implementation of medical imaging, and how this in turn shaped organizational roles and relationships within the hospital itself. It further investigated how the implementation of the technology in one hospital affected its implementation in another hospital. The research used systems theory as the theoretical framework for the study. Methodologically, the study used a case-based approach and multiple methods to obtain data. The case studies included two hospital-based radiology departments in Malaysia. The outcomes of the research suggest that the implementation of medical imaging in community hospitals is shaped by the external context, particularly the role played by the Ministry of Health. Furthermore, influences from both the internal and external contexts have a substantial impact on the process of implementing medical imaging and the extent of the benefits that the organization can gain. In the context of roles and social relationships, the findings revealed that the routine use of medical imaging has substantially affected radiographers’ roles and the social relationships between non-clinical personnel and clinicians. This study found no change in the relationship between radiographers and radiologists. Finally, the approaches to implementation taken in the hospitals studied were found to influence those taken by other hospitals. Overall, this study makes three important contributions. Firstly, it extends Barley’s (1986, 1990) research by explicitly demonstrating that the organization’s internal and external contexts together shape the implementation and use of technology, that the processes of implementing and using technology impact upon roles, relationships and networks, and that a role-based approach alone is inadequate to examine the outcomes of deploying an advanced technology. Secondly, this study contends that scalability of technology in the context of developing countries is not necessarily linear. Finally, this study offers practical contributions that can benefit healthcare organizations in Malaysia.

    IT service management: towards a contingency theory of performance measurement

    Information Technology Service Management (ITSM) focuses on IT service creation, design, delivery and maintenance. Measurement is one of the basic underlying elements of service science, and this paper contributes to service science by focussing on the selection of performance metrics for ITSM. Contingency theory is used to provide a theoretical foundation for the study. Content analysis of interviews with ITSM managers at six organisations revealed that the selection of metrics is influenced by a discrete set of factors. Three categories of factors were identified: external environment, parent organisation and IS organisation. For individual cases, selection of metrics was contingent on factors such as organisation culture, management philosophy and perspectives, legislation, industry sector, and customers, although a common set of four factors influenced selection of metrics across all organisations. A strong link was identified between the use of a corporate performance framework and clearly articulated ITSM metrics.

    Achieving change in primary care—causes of the evidence to practice gap: systematic reviews of reviews

    Acknowledgements: The Evidence to Practice Project (SPCR FR4 project number: 122) is funded by the National Institute for Health Research (NIHR) School for Primary Care Research (SPCR). KD is part-funded by the National Institute for Health Research (NIHR) Collaborations for Leadership in Applied Research and Care West Midlands and by a Knowledge Mobilisation Research Fellowship (KMRF-2014-03-002) from the NIHR. This paper presents independent research funded by the National Institute for Health Research (NIHR). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
    Funding: This study is funded by the National Institute for Health Research (NIHR) School for Primary Care Research (SPCR).
    Peer reviewed. Publisher PDF.

    Mixed-method study of a conceptual model of evidence-based intervention sustainment across multiple public-sector service settings.

    Background: This study examines the sustainment of an evidence-based intervention (EBI) implemented in 11 United States service systems across two states and delivered in 87 counties. The aims are to 1) determine the impact of state and county policies and contracting on EBI provision and sustainment; 2) investigate the role of public, private, and academic relationships and collaboration in long-term EBI sustainment; 3) assess organizational and provider factors that affect EBI reach/penetration, fidelity, and organizational sustainment climate; and 4) integrate findings through a collaborative process involving the investigative team, consultants, and system and community-based organization (CBO) stakeholders in order to further develop and refine a conceptual model of sustainment to guide future research and provide a resource for service systems to prepare for sustainment as the ultimate goal of the implementation process.
    Methods: A mixed-method prospective and retrospective design will be used. Semi-structured individual and group interviews will be used to collect information regarding influences on EBI sustainment, including policies, attitudes, and practices; organizational factors and external policies affecting model implementation; involvement of or collaboration with other stakeholders; and outer- and inner-contextual supports that facilitate ongoing EBI sustainment. Document review (e.g., legislation, executive orders, regulations, monitoring data, annual reports, agendas and meeting minutes) will be used to examine the roles of state, county, and local policies in EBI sustainment. Quantitative measures will be collected via administrative data and web surveys to assess EBI reach/penetration, staff turnover, EBI model fidelity, organizational culture and climate, work attitudes, implementation leadership, sustainment climate, attitudes toward EBIs, program sustainment, and level of institutionalization. Hierarchical linear modeling will be used for quantitative analyses. Qualitative analyses will be tailored to each of the qualitative methods (e.g., document review, interviews). Qualitative and quantitative approaches will be integrated through an inclusive process that values stakeholder perspectives.
    Discussion: The study of sustainment is critical to capitalizing on and benefiting from the time and fiscal investments in EBI implementation. Sustainment is also critical to realizing the broad public health impact of EBI implementation. The present study takes a comprehensive mixed-method approach to understanding sustainment and refining a conceptual model of sustainment.
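    The protocol names hierarchical linear modeling for the quantitative analyses. As a minimal sketch of what a two-level model of that kind might look like (provider organizations nested within counties), the example below uses statsmodels' MixedLM with invented data; the column names (fidelity, sustainment_climate, staff_turnover, county) are placeholders, not the study's measures.

```python
# Illustrative sketch of a two-level hierarchical linear model in the spirit
# of the protocol's planned analyses. Data are tiny and invented, so the fit
# is for demonstration only and may emit convergence warnings.
import pandas as pd
import statsmodels.formula.api as smf

# One row per provider organization, grouped by county (the nesting level).
data = pd.DataFrame({
    "fidelity":            [3.8, 4.1, 2.9, 3.5, 4.4, 3.2, 3.9, 3.0, 3.6],
    "sustainment_climate": [3.0, 3.6, 2.4, 3.1, 4.0, 2.7, 3.4, 2.5, 3.2],
    "staff_turnover":      [0.10, 0.05, 0.30, 0.15, 0.02, 0.25, 0.08, 0.28, 0.12],
    "county":              ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

# Random intercept for county; fixed effects for organization-level predictors.
model = smf.mixedlm(
    "fidelity ~ sustainment_climate + staff_turnover",
    data=data,
    groups=data["county"],
)
result = model.fit()
print(result.summary())
```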

    Early Learning Innovation Fund Evaluation Final Report

    This is a formative evaluation of the Hewlett Foundation's Early Learning Innovation Fund, which began in 2011 as part of the Quality Education in Developing Countries (QEDC) initiative. The Fund has four overarching objectives, which are to: promote promising approaches to improve children's learning; strengthen the capacity of organizations implementing those approaches; strengthen those organizations' networks and ownership; and grow 20 percent of implementing organizations into significant players in the education sector. The Fund's original design was to create a "pipeline" of innovative approaches to improve learning outcomes, with the assumption that donors and partners would adopt the most successful ones. A defining feature of the Fund was that it delivered assistance through two intermediary support organizations (ISOs), rather than providing funds directly to implementing organizations. Through an open solicitation process, the Hewlett Foundation selected Firelight Foundation and TrustAfrica to manage the Fund. Firelight Foundation, based in California, was founded in 1999 with a mission to channel resources to community-based organizations (CBOs) working to improve the lives of vulnerable children and families in Africa. It supports 12 implementing organizations in Tanzania for the Fund. TrustAfrica, based in Dakar, Senegal, is a convener that seeks to strengthen African-led initiatives addressing some of the continent's most difficult challenges. The Fund was its first experience working specifically with early learning and childhood development organizations. Under the Fund, it supported 16 such organizations: one in Mali and five each in Senegal, Uganda and Kenya. At the end of 2014, the Hewlett Foundation commissioned Management Systems International (MSI) to conduct a mid-term evaluation assessing the implementation of the Fund and exploring the extent to which it achieved intended outcomes and any factors that had limited or enabled its achievements. It analyzed the support that the ISOs provided to their implementing organizations, with specific focus on monitoring and evaluation (M&E). The evaluation included an audit of the implementing organizations' M&E systems and a review of the feasibility of compiling data collected to support an impact evaluation. Finally, the Foundation and the ISOs hoped that this evaluation would reveal the most promising innovations and inform planning for Phase II of the Fund. The evaluation findings sought to inform the Hewlett Foundation and other donors interested in supporting intermediary grant-makers, early learning innovations and the expansion of innovations. TrustAfrica and Firelight Foundation provided input to the evaluation's scope of work. Mid-term evaluation reports for each ISO provided findings about their management of the Fund's Phase I and recommendations for Phase II. This final evaluation report will inform donors, ISOs and other implementing organizations about the best approaches to support promising early learning innovations and their expansion. The full report outlines findings common across both ISOs' experience and includes recommendations in four key areas: adequate time; appropriate capacity building; advocacy and scaling up; and evaluating and documenting innovations. Overall, both Firelight Foundation and TrustAfrica supported a number of effective innovations working through committed and largely competent implementing organizations. The program's open-ended design avoided being prescriptive, but based on the lessons learned in this evaluation and the broader literature, the Hewlett Foundation and other donors could have offered more guidance to ISOs to avoid the need to continually relearn some lessons. For example, over the evaluation period it became increasingly evident that the current context demands more focused advance planning to measure impact on beneficiaries and other stakeholders, and a more concrete approach to promoting and resourcing potential scale-up. The main findings from the evaluation and the recommendations are summarized here.

    Organic Action Plans. Development, implementation and evaluation. A resource manual for the organic food and farming sector

    In 2004, the European Action Plan for Organic Food and Farming was launched. Many European countries have also developed national Organic Action Plans to promote and support organic agriculture. As part of the EU-funded ORGAP project (“European Action Plan of Organic Food and Farming - Development of criteria and procedures for the evaluation of the EU Action Plan for Organic Agriculture”), a toolbox to evaluate and monitor the implementation of national and European Action Plans has been developed. In order to communicate the results of this project as widely as possible, a practical manual for initiating and evaluating Organic Action Plans has been produced. This manual has been created to inspire the people, organisations and institutions involved, or with an interest, in the organic food and farming sector to engage in the initiation, review, revision and renewal of regional, national and European Organic Action Plans. The objectives of the manual are to provide:
    • a tool for stakeholder involvement in future Action Plan development and implementation processes at EU, national and regional level
    • a guide to the use of the Organic Action Plan Evaluation Toolbox (ORGAPET) developed through the project
    The manual summarises the key lessons learnt from more than 10 years’ experience of development, implementation and evaluation of Organic Action Plans throughout Europe. The Organic Action Plan Evaluation Toolbox (ORGAPET), which includes comprehensive information to support the Organic Action Plan development and evaluation process, is included with the manual as a CD-ROM and is also accessible online at www.orgap.org/orgapet. The ORGAP website www.orgap.org provides further information on the project and on the European and national Organic Action Plans.
    Published by: Research Institute of Organic Agriculture (FiBL), Frick, Switzerland; IFOAM EU Group, Brussels
    Table of contents:
    Foreword
    1 Introduction
    1.1 About this manual
    1.2 Organic farming – origins, definition & principles
    1.3 Development of organic food & farming in Europe
    1.3.1 Organic food and farming regulation in Europe
    1.3.2 Policy support for organic food and farming in Europe
    2 Organic Action Plans – what are they about?
    2.1 Why Organic Action Plans?
    2.2 European Organic Action Plan
    2.3 Overview of national and regional Organic Action Plans
    3 Planning and implementing Organic Action Plans
    3.1 Policy development
    3.2 Defining organic sector development needs and potential
    3.3 Defining policy goals and objectives
    3.4 Involving stakeholders
    3.4.1 The case for stakeholder involvement
    3.4.2 Identifying relevant stakeholders
    3.4.3 Participatory approaches for stakeholder involvement
    3.5 Decision making: selecting, integrating and prioritising relevant measures
    3.5.1 Deciding on policy instruments and action points
    3.5.2 Priorities for action – allocating resources
    3.6 Implementing Organic Action Plans
    3.7 Including monitoring and evaluation of Organic Action Plans from the outset
    3.8 Managing communication
    3.9 Development of Action Plans in countries that joined the EU in 2004 and later
    4 Evaluating Organic Action Plans
    4.1 Principles of evaluation
    4.2 Conducting an evaluation
    4.3 Evaluating Action Plan design and implementation
    4.3.1 Evaluating programme design and implementation processes
    4.3.2 Evaluating programme coherence
    4.3.3 Evaluating stakeholder involvement
    4.4 Evaluating Action Plan effects
    4.4.1 Developing and using indicators for evaluation
    4.5 Overall evaluation of Organic Action Plans – judging success
    4.6 Evaluating Action Plans in countries that joined the EU in 2004 and later
    5 Organic Action Plans – the Golden Rules
    5.1 Key elements of Organic Action Plan development
    5.2 The Golden Rules for Organic Action Plans
    References
    Annex: Detailed synopsis of ORGAPET
