Achieving change in primary care—causes of the evidence to practice gap: systematic reviews of reviews
Acknowledgements: The Evidence to Practice Project (SPCR FR4 project number: 122) is funded by the National Institute for Health Research (NIHR) School for Primary Care Research (SPCR). KD is part-funded by the NIHR Collaboration for Leadership in Applied Health Research and Care West Midlands and by a Knowledge Mobilisation Research Fellowship (KMRF-2014-03-002) from the NIHR. This paper presents independent research funded by the NIHR. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. Funding: This study is funded by the NIHR School for Primary Care Research (SPCR). Peer reviewed.
Randomised controlled trials of complex interventions and large-scale transformation of services
Complex interventions and large-scale transformations of services are necessary to meet the health-care challenges of the 21st century. However, the evaluation of these types of interventions is challenging and requires methodological development.
Innovations such as cluster randomised controlled trials, stepped-wedge designs, and non-randomised evaluations provide options to meet the needs of decision-makers. Adoption of theory and logic models can help clarify causal assumptions, and process evaluation can assist in understanding delivery in context. Issues of implementation must also be considered throughout intervention design and evaluation to ensure that results can be scaled for population benefit. Relevance requires evaluations conducted under real-world conditions, which in turn requires a pragmatic attitude to design. The increasing complexity of interventions and evaluations threatens the ability of researchers to meet the needs of decision-makers for rapid results. Improvements in efficiency are thus crucial, with electronic health records offering significant potential.
Staff Nurse Ratings of Implementation Self-Efficacy for EBP (ISE4EBP) and Organizational EBP Readiness
There is limited research about nurses' confidence in implementing evidence into clinical practice. The purpose of this study was to further test, refine and strengthen the Implementation Self-Efficacy for EBP (ISE4EBP) scale and to gain knowledge about staff nurses' perspectives on their confidence in EBP implementation in relation to the work environment, as measured by the Context Assessment Index (CAI). We proposed that nurses with higher confidence in implementing evidence into practice would report higher levels of implementation of evidence-based practices (EBP). Bandura's theory of self-efficacy, which postulates that task-specific self-efficacy predicts performance, guided the study. In a sample of 75 registered nurses, the overall average score for the ISE4EBP scale was 63%, indicating moderate confidence in implementation strategies. This study furthered the construct validity of the ISE4EBP scale by demonstrating associations between the ISE4EBP scores and the CAI.
The utilisation of health research in policy-making: Concepts, examples, and methods of assessment
Chapter 1: Introduction and Background
• The importance of utilising health research in policy-making, and therefore the need to understand the mechanisms involved, is increasingly recognised. Recent reports calling for more resources to improve health in developing countries, and global pressures for accountability, draw greater attention to research-informed policy-making.
• For at least twenty years there has been recognition of the multiple meanings or models of research utilisation in policy-making. It has similarly been long recognised that a range of factors is involved in the interactions between health research and policy-makers.
• The emerging focus on Health Research Systems (HRS) has identified additional mechanisms through which greater utilisation of research could be achieved. Assessment of the role of health research in policy-making is best undertaken as part of a wider study that also includes the utilisation of health research by industry, medical practitioners, and the public.
Chapter 2: The Nature of Policy-Making, Types of Research and Utilisation Models
• Policy-making broadly interpreted includes national health policies made by government ministers and officials, policies made by local health service managers, and clinical guidelines from professional bodies. In this report, however, the main focus is on public policy-making rather than that conducted by professional bodies. The utilisation of health research in policy-making should eventually lead to desired outcomes, including health gains. Research can make a contribution in at least three phases of the policy-making process: agenda setting; policy formulation; and implementation. Descriptions of these processes, however, can over-estimate the degree of rationality in policy-making. Therefore, the analysis is informed by a review of the full range of policy-making models. These include rational and incrementalist models.
• Various categories of research are likely to be used differently in health policy-making. Applied research might be more readily useable by a policy system than basic research, but health policy-makers tend to relate more willingly to natural sciences than social sciences. When research is based on the priorities of potential users, and/or is research of proven quality, this increases the possibility that it will be translated into policies. There also appears to be a greater chance of research being used in clinical policies about delivering care to patients, than in national policies on the structures of the health service.
• Models of research utilisation in policy-making start with a link to rational or instrumental views of policy-making, and include descriptions of how commissioned research can help to find solutions to problems. Other models relate to an incrementalist view in which policy-making involves a series of small steps over a long period; research findings might gradually cause a shift in perceptions about an issue in a process of ‘enlightenment’. Interactive models of research utilisation stress the way in which policy-makers and researchers might develop links over a long period. Research can also be used symbolically to support decisions already taken.
Chapter 3: Examples from Previous Studies
• A study of health policy-making in two southern African countries illustrates how policy-making processes can be analysed. It addresses agenda setting, policy formulation and implementation. The methods used included documentary analysis and key informant interviews.
• Many previous studies of research utilisation can provide lessons for future assessments. Two broad approaches can be identified. Some studies start with pieces, or programmes, of research and examine their impact. Others consider policy on a particular topic and assess the role of research in the policy-making. There are advantages and drawbacks in each approach, and overlaps between them.
• To facilitate comparison, studies of research utilisation are best organised around a conceptual framework. Despite that, the influence of contextual factors in different settings makes it difficult to generalise.
• The two methods used most frequently, and usually together, come from the qualitative tradition: documentary analysis and in-depth interviews. Questionnaires, bibliometric analysis, insider knowledge and historical approaches have all been applied. A few recent studies have attempted to score or scale the level of utilisation.
• The examples suggest there is a greater level of utilisation and final outcomes in terms of health, health equity, and social and economic gain than is often assumed, whilst still showing much underutilisation. There is considerable variation in the degree of utilisation, both within and between studies.
Chapter 4: Key Issues in the Analysis of Research Utilisation in Policy-Making
• Increasing attention is focusing on the concept of interfaces between researchers and the users of research. This incorporates the idea that there are likely to be different values and interests between the two communities.
• In relation to utilisation, the prioritisation debate revolves around two key aspects: whether priorities are being set that will produce research that policy-makers and others will want to use, and whether priorities are being set that will engage the interests and commitment of the research community.
• Interactions across the interface between policy-makers and researchers are important in transferring research to policy-makers. This fits especially well with the interactive model of utilisation. Actions by individual researchers can be useful in generating interaction, but it is desirable to consider the role of the HRS in encouraging or facilitating interactions, networks and mechanisms at a system-wide level. The HRS could provide funding and organisational support for various items including: long-term research centres; research brokerage/translator mechanisms; the creation of official committees of policy-makers and researchers; and mechanisms for review and synthesis of research findings.
• There is increased recognition of the significance of policy-makers in their role as the receptors of research. In relation to the perspective of policy-makers there is a spectrum of key questions. These range from whether relevant research is available and effectively being brought to their attention, to whether they are able to absorb it and willing to use it. The HRS has a responsibility, especially in the early parts of the spectrum, but the wider health system also has a responsibility to create appropriate institutional mechanisms and ensure there are staff willing and able to incorporate relevant research.
• More attention should be given to the role of incentives, both for researchers to produce utilisable research, and for policy-makers, at the system or individual level, to use it. The assessment of utilisation becomes a key issue if rewards are to focus on relevance as well as research excellence.
• An appropriate model for assessing research utilisation in policy-making combines analysis of two issues: the role of receptors and the importance of actions at the interfaces. An emphasis on the role of the receptor is necessary because ultimately it is up to the policy-maker to make the decisions. Any assessment of the success of the HRS in relation to utilisation must accept that the wider political context is beyond the control of the HRS, but consider the activities of the HRS, within its given context, to enhance the utilisation of research by increasing the permeability of the interfaces.
Chapter 5: Assessment of Research Utilisation in Health Policy-Making
• The reasons for assessing the utilisation of research in policy-making include: advocacy, accountability, and increased understanding. For the World Health Organization there could be a role in conducting such assessments with the aim of providing evidence of the effective use of research resources. This could support advocacy for greater resources to be made available for health research. It is important that the purposes of any assessment are taken into account in planning the methods to be used.
• Previous studies demonstrated the difficulties of making generalisations about specific factors associated with high levels of utilisation. To address this in any cross-national WHO initiative involving a series of studies in a range of countries, it would be desirable to structure all the studies around a conceptual framework (such as the interfaces and receptor framework considered here) and base the studies in each country on common themes. These could include policies for the adoption of multi-drug therapy for treating leprosy, and for the equitable access to health services.
• Analysis of documents and semi-structured interviews would be appropriate methods in each study assessing the role of research in policy-making on a specific policy theme. Questionnaires could also have a role. These approaches would provide triangulation of methods and data-sources and should also provide material to help identify the relative importance, in relation to the level of utilisation recorded, of the HRS mechanisms described in the previous analysis. The types and sources of research used, and reasons for their use, should also be recorded and attempts made to correlate them with the previous priority setting approaches. It is expected that each study will produce its own narrative or story of what caused utilisation in the particular context, but the data gathered could also be applied to descriptive scales of the level of research utilisation. The four scales could cover the consistency of policy with research findings, and the degree of influence of research on agenda setting, policy formulation, and implementation.
• The findings from the assessments in each participating country should be collated. For each policy theme or topic the analysis would compare two sets of data: the scales for level of research utilisation in each country, and the contextualised lists of the HRS activities and other mechanisms and networks thought to be important. Although the account here has focused on research impact on policy-making, the evaluations would be stronger as part of a wider analysis covering research utilisation and interactions with practitioners, industry and the public.
• Given appropriate and targeted topic and country selection, this approach is likely to meet the purpose of using structured methods to provide examples of effective research utilisation. The approach should contribute towards enhanced understanding of the issues and could provide the basis of an assessment tool which, if used widely in countries, could lead to greater utilisation of health research.
Funding: Research Policy and Co-operation (RPC) Department of the World Health Organization, Geneva; UK Department of Health's Policy Research Programme; Alliance for Health Policy and Systems Research from the governments of Norway and Sweden; World Bank and International Development Research Council of Canada.
Evaluation of the NHS R&D Implementation Methods Programme
Chapter 1: Background and introduction
• Concern with research implementation was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994 an Advisory Group was established to identify research priorities in this field. The Implementation Methods Programme (IMP) flowed from this and its Commissioning Group funded 36 projects. Funding for the IMP was capped before the second round of commissioning. The Commissioning Group was disbanded and eventually responsibility for the programme passed to the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO) which, when most projects had finished, asked the Health Economics Research Group (HERG) to conduct an evaluation. This was intended to cover: the quality of outputs; lessons to be learnt about the communication strategy and the commissioning process; and the benefits or payback from the projects. As agreed, the evaluation also addresses the questions of whether there should be a synthesis of the findings from the IMP and any further assessment of payback.
Chapter 2: Methods
• We adopted a wide range of quantitative and qualitative methods in the evaluation. They included: documentary analysis; interviews with key actors; questionnaires to the funded lead researchers; questionnaires to potential users; and desk analysis.
Chapter 3: The outputs from the programme
• As in previous assessments of research programmes, we first examined the outputs in terms of knowledge production and various items related to capacity to conduct further research. Although there was a high response rate to the questionnaire to lead researchers (30/36), missing responses mean that the data given below are incomplete. In the case of publications, however, we also made some use of data previously gathered by the programme office.
• We attempted to identify publications that were in some way a specific product of IMP funding. About half (59) of the publications from the IMP projects are articles in peer reviewed journals. The journal used most frequently for publication, the BMJ, is also the one with the highest journal impact factor score of those publishing articles specifically from the programme. The recent publication dates of many articles reduce the value of citation analysis. Nevertheless, one article, Coulter et al., 1999, has already been cited on 53 occasions. Important publications, including No Magic Bullets (Oxman et al., 1995), are also associated with preliminary work undertaken for the IMP to assist priority setting.
• Fifteen projects, with grants of over £1.3 million, have been awarded to IMP researchers by other funders for follow-on studies connected in some way to the IMP. We also collected details about some non-IMP researchers who are building on the IMP projects.
• Research training provided in at least nine of the funded IMP projects is associated with higher/research degrees, including three MDs and four PhDs, that have been awarded or are being completed.
Chapter 4: Disseminating and using the research findings
• Limited thought had been given by the Implementation Methods Programme to dissemination strategies, but many of the individual researchers were active here. In response to the questionnaires, lead researchers reported making 92 presentations to academic audiences and 104 to practitioner/service groups. Some lead researchers showed that their effective dissemination led to utilisation of the findings.
• The Commissioning Group gave some thought to the likely use that could be made of individual research projects, but there was limited systematic analysis of how the findings as a whole would be taken forward. Achieving impact is difficult in this complex field and less than a third of lead researchers claimed to have done so, but about half thought impact could be expected. Based mainly on reports from lead researchers, we give a brief account of how the findings from six projects are being utilised.
• We sent electronic questionnaires to groups of potential users of selected projects, but this produced a very low response rate. Our postal survey to Heads of Midwifery/researchers in perinatal care produced a higher response rate of 44%. Amongst those who did respond, there is quite a high level of knowledge about some of the programme's projects and some level of existing and potential utilisation. We suggest, however, that in some cases there are difficulties in identifying how far the respondent's focus is on the findings from the original research projects, and how far it is on the impact of the IMP study, which is about ways of influencing the uptake of such findings. Comments from several respondents showed strong support for the cutting-edge nature of some of the research. Others, however, also indicated why findings might not be utilised by some practitioners. Several respondents advocated greater dissemination of the IMP.
Chapter 5: Comparing applications with outputs
• We attempted to compare the scores given to project applications with those given to projects based on their outputs. This exercise faced various problems. The final reports from all completed projects had in theory already been reviewed and given scores for their quality and relevance. In practice, not all final reports received scores. We added a refinement by giving further scores that incorporated the additional information we gathered about both publications and any uptake of the research findings.
• Various limitations meant that we conducted this analysis on just 19 of the 36 projects. Nevertheless, the wide range of scores given to the outputs from projects indicates that some were much more successful than others. Our rather limited evidence suggests that there is some correlation between the scores for applications and those for outputs but it is small, which could be related to the difficulties encountered during commissioning.
Chapter 6: Lessons learnt about the commissioning process
• Those who established the IMP were aware that it was a different type of research field from those previously addressed within the NHS R&D Programme, but one regarded nonetheless as important. Within the NHS R&D Programme at that time a standard clinical RCT approach was strongly favoured. There was also, as ever, a need for quick results.
• In developing an understanding of implementation the Advisory Group (AG) conducted cutting edge analysis, consulted widely and drew on a wide range of disciplines. Our interviewees generally took the view that the AG worked well in setting priorities and went as far as it could at the time, especially given the time constraints.
• Based on our field work and analysis we identified a series of lessons that might
inform future exercises. More attention was required to ensure that all relevant background disciplines were adequately taken into account in setting priorities and commissioning research. Some of these processes needed to be given more time than was available. Consultation needed to be organised in a sufficiently selective way to be of maximum benefit in such a complex area. A time-limited programme was not the most appropriate way to cover a field such as this.
• In relation to the commissioning of the projects, we identified issues about the composition of commissioning groups and how people from different backgrounds (researchers, practitioners, managers and patient representatives) should best be involved.
• In this new field the Commissioning Group (CG) had to work closely with applicants to develop some of the research applications. This raised issues about how, and when, this process should be handled.
• Despite its own rationale, and for a variety of reasons, including the disbanding of the CG, the Implementation Methods Programme never developed an implementation or communication strategy for its own findings.
• The general conclusion of those who had been involved with the IMP was that it worked as well as it could at the time, and that various important projects were commissioned. But it was only a start.
Chapter 7: Should a synthesis of the findings from the programme and further
payback analysis be undertaken?
• From interviews and questionnaires we identified widespread, but not total, agreement that there should be some type of synthesis of the findings from the IMP. There is more debate about the form such a synthesis should take. There is some support for a more limited type of stock taking, but also wider backing for the inclusion of many different elements. These could include: a conceptual map of the field of research implementation; an exploration of how the findings from the IMP fit into the context of research implementation today; and an assessment of how far work is still needed in those areas where no projects were funded. One possible suggestion that might incorporate much of this thinking is for the establishment of a group or commission of leading researchers in the field. Their investigation could incorporate all these elements and attempt to show how the issues could be advanced.
• We suggest that further work on assessing the payback from the IMP is probably not worthwhile unless it is undertaken as part of a wide-ranging synthesis.
Chapter 8: Conclusions, lessons and recommendations
• We conclude that the IMP was seen by many of those involved as a new and exciting field. Looking back, they were generally positive about what was started through the IMP. It commissioned a series of projects that produced some important, rigorous, and cutting edge research, at least some of which is making an impact. But this is a complex area in which traditional clinical research, health services research and the social sciences all have a role to play. A unique set of difficulties, as well as opportunities, was faced by those responsible for taking the programme forward. The intellectual challenges of constructing a programme to cover such a vast area with diverse and sometimes conflicting conceptual and methodological perspectives were compounded by practical problems. These included the capping of the programme's funding and the premature winding up of the Commissioning Group. As a result, this complex programme, which arguably needed better support than its more clinically orientated predecessors, did not receive it at some stages. Those involved in the programme had a considerable task, the difficulties of which were not completely appreciated at the time. They are clearer in retrospect and feed into the lessons and recommendations presented here, but it is recognised that a programme such as the SDO is already adopting some of the steps.
• In relation to research commissioning and communication strategies for research programmes in general, we suggest it could be helpful if protocols were drawn up to cover certain potential difficulties. These include the remit and role of the various stakeholders represented on commissioning groups and the extent to which commissioning groups should be expected to support applicants with their proposals. Perhaps the key general lesson from this evaluation is the need for research programmes to have a proper communication strategy. This should target dissemination at relevant audiences and stress the desirability for contact to be made with potential users as early as possible in the process of devising a project.
• Our other recommendations are more specifically relevant when the SDO Programme is considering an area such as implementation methods research. It
would be desirable for more time to be made available for preparatory work than was allowed for the IMP and also scope provided for the programme to be able to re-visit issues and learn from early results. It is difficult to incorporate all the analysis that is required if a programme is operating in a time-limited way.
• Our conclusion that research implementation is a crucial area for the NHS R&D Programme leads to the recommendation that more R&D activity is needed in this field in order to assist delivery of some key NHS agenda items. As a preliminary step, there is certainly scope for a type of stock taking of the findings from the IMP. On balance there seems also to be an argument for conducting a synthesis of work in the implementation field that goes beyond a mere collation of findings from the specific projects funded. If undertaken, it should fundamentally examine the current NHS needs for research on implementation and how they could be addressed in the light of the findings from the IMP and elsewhere.
• Finally, we recommend that more attention should be given to the timing of evaluations such as this and that a phased approach should be adopted. Furthermore, researchers should be informed at the outset of their project about the likely requirements that might be placed upon them in terms of responding to requests for information by those conducting an evaluation.
National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO)
Lessons from the evaluation of the UK's NHS R&D Implementation Methods Programme
Background: Concern about the effective use of research was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994, an advisory group was established to identify research priorities in research implementation. The Implementation Methods Programme (IMP) flowed from this, and its commissioning group funded 36 projects. In 2000 responsibility for the programme passed to the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D, which asked the Health Economics Research Group (HERG), Brunel University, to conduct an evaluation in 2002. By then most projects had been completed. This evaluation was intended to cover: the quality of outputs, lessons to be learnt about the communication strategy and the commissioning process, and the benefits from the projects.
Methods: We adopted a wide range of quantitative and qualitative methods. They included: documentary analysis, interviews with key actors, questionnaires to the funded lead researchers, questionnaires to potential users, and desk analysis.
Results: Quantitative assessment of outputs and dissemination revealed that the IMP funded useful research projects, some of which had considerable impact against the various categories in the HERG payback model, such as publications, further research, research training, impact on health policy, and clinical practice.
Qualitative findings from interviews with advisory and commissioning group members indicated that when the IMP was established, implementation research was a relatively unexplored field. This was reflected in the understanding brought to their roles by members of the advisory and commissioning groups, in the way priorities for research were chosen and developed, and in how the research projects were commissioned. The ideological and methodological debates associated with these decisions have continued among those working in this field. The need for an effective communication strategy for the programme as a whole was particularly important. However, such a strategy was never developed, making it difficult to establish the general influence of the IMP as a programme.
Conclusion: Our findings about the impact of the work funded, and the difficulties faced by those developing the IMP, have implications for the development of strategic programmes of research in general, as well as for the development of more effective research in this field.
Mixed-method study of a conceptual model of evidence-based intervention sustainment across multiple public-sector service settings.
Background: This study examines sustainment of an EBI implemented in 11 United States service systems across two states, and delivered in 87 counties. The aims are to: 1) determine the impact of state and county policies and contracting on EBI provision and sustainment; 2) investigate the role of public, private, and academic relationships and collaboration in long-term EBI sustainment; 3) assess organizational and provider factors that affect EBI reach/penetration, fidelity, and organizational sustainment climate; and 4) integrate findings through a collaborative process involving the investigative team, consultants, and system and community-based organization (CBO) stakeholders in order to further develop and refine a conceptual model of sustainment to guide future research and provide a resource for service systems to prepare for sustainment as the ultimate goal of the implementation process.
Methods: A mixed-method prospective and retrospective design will be used. Semi-structured individual and group interviews will be used to collect information regarding influences on EBI sustainment including policies, attitudes, and practices; organizational factors and external policies affecting model implementation; involvement of or collaboration with other stakeholders; and outer- and inner-contextual supports that facilitate ongoing EBI sustainment. Document review (e.g., legislation, executive orders, regulations, monitoring data, annual reports, agendas and meeting minutes) will be used to examine the roles of state, county, and local policies in EBI sustainment. Quantitative measures will be collected via administrative data and web surveys to assess EBI reach/penetration, staff turnover, EBI model fidelity, organizational culture and climate, work attitudes, implementation leadership, sustainment climate, attitudes toward EBIs, program sustainment, and level of institutionalization. Hierarchical linear modeling will be used for quantitative analyses.
Qualitative analyses will be tailored to each of the qualitative methods (e.g., document review, interviews). Qualitative and quantitative approaches will be integrated through an inclusive process that values stakeholder perspectives.
Discussion: The study of sustainment is critical to capitalizing on and benefiting from the time and fiscal investments in EBI implementation. Sustainment is also critical to realizing the broad public health impact of EBI implementation. The present study takes a comprehensive mixed-method approach to understanding sustainment and refining a conceptual model of sustainment.
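The hierarchical linear modeling proposed above handles the nesting of providers within counties. A minimal sketch of such a two-level model, using a random intercept per county, is shown below; the variable names (fidelity, sustainment) and the synthetic data are illustrative assumptions, not drawn from the study:

```python
# Sketch of a two-level hierarchical linear model (random intercept per county),
# the kind of analysis HLM protocols describe. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_counties, n_per = 20, 30
county = np.repeat(np.arange(n_counties), n_per)
county_effect = rng.normal(0, 0.5, n_counties)[county]   # level-2 variation
fidelity = rng.normal(0, 1, n_counties * n_per)           # level-1 predictor
sustainment = 2.0 + 0.6 * fidelity + county_effect \
    + rng.normal(0, 1, n_counties * n_per)                # outcome

df = pd.DataFrame({"county": county, "fidelity": fidelity,
                   "sustainment": sustainment})

# Fixed effect for fidelity, random intercept grouped by county
model = smf.mixedlm("sustainment ~ fidelity", df, groups=df["county"])
result = model.fit()
print(result.params["fidelity"])  # estimated fixed effect for fidelity
```

The random intercept absorbs county-level variation so the fidelity effect is not confounded with between-county differences; in a full analysis, county- and system-level predictors would be added as level-2 covariates.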
Improving inpatient postnatal services: midwives' views and perspectives of engagement in a quality improvement initiative
Background: despite major policy initiatives in the United Kingdom to enhance women's experiences of maternity care, improving in-patient postnatal care remains a low priority, although it is an aspect of care consistently rated as poor by women. As part of a systems and process approach to improving care at one maternity unit in the South of England, the views and perspectives of midwives responsible for implementing change were sought.
Methods: a Continuous Quality Improvement (CQI) approach was adopted to support a systems and process change to in-patient care and care on transfer home in a large district general hospital with around 6000 births a year. The CQI approach included an initial assessment to identify where revisions to routine systems and processes were required, developing, implementing and evaluating revisions to the content and documentation of care in hospital and on transfer home, and training workshops for midwives and other maternity staff responsible for implementing changes. To assess midwifery views of the quality improvement process and their engagement with this, questionnaires were sent to those who had participated at the outset.
Results: questionnaires were received from 68 (46%) of the estimated 149 midwives eligible to complete the questionnaire. All midwives were aware of the revisions introduced, and two-thirds felt these were more appropriate to meet the women's physical and emotional health, information and support needs. Some midwives considered that the introduction of new maternal postnatal records increased their workload, mainly as a consequence of colleagues not completing documentation as required.
Conclusions: this was the first UK study to undertake a review of in-patient postnatal services. Involvement of midwives at the outset was essential to the success of the initiative. Midwives play a lead role in the planning and organisation of in-patient postnatal care, and it was important to obtain their feedback on whether revisions were pragmatic and achieved anticipated improvements in care quality. Their initial involvement ensured priority areas for change were identified and implemented. Their subsequent feedback highlighted further important areas to address as part of CQI to ensure best quality care continues to be implemented. Our findings could support other maternity service organisations to optimise in-patient postnatal services.