Global IDEA
Five members of the Global IDEA Scientific Advisory Committee respond to Dr. Moore and colleagues: Global IDEA Scientific Advisory Committee. Health and economic benefits of an accelerated program of research to combat global infectious diseases
Building health research systems to achieve better health
Health research systems can link knowledge generation with practical concerns to improve health
and health equity. Interest in health research, and in how health research systems should best be
organised, is moving up the agenda of bodies such as the World Health Organization. Pioneering
health research systems, for example those in Canada and the UK, show that progress is possible.
However, radical steps are required to achieve this. Such steps should be based on evidence not
anecdotes.
Health Research Policy and Systems (HARPS) provides a vehicle for the publication of research, and
informed opinion, on a range of topics related to the organisation of health research systems and
the enormous benefits that can be achieved. Following the Mexico ministerial summit on health
research, WHO has been identifying ways in which it could itself improve the use of research
evidence. The results from this activity are soon to be published as a series of articles in HARPS.
This editorial provides an account of some of these recent key developments in health research
systems but places them in the context of a distinguished tradition of debate about the role of
science in society. It also identifies some of the main issues on which 'research on health research'
has already been conducted and published, in some cases in HARPS. Finding and retaining adequate
financial and human resources to conduct health research is a major problem, especially in low- and
middle-income countries where the need is often greatest. Research ethics and agenda-setting that
responds to the demands of the public are issues of growing concern. Innovative and collaborative
ways are being found to organise the conduct and utilisation of research so as to inform policy, and
improve health and health equity. This is crucial, not least to achieve the health-related Millennium
Development Goals. But much more progress is needed. The editorial ends by listing a wide range
of topics related to the above priorities on which we hope to feature further articles in HARPS and
thus contribute to an informed debate on how best to achieve such progress.
Evaluation of the NHS R & D implementation methods programme
Chapter 1: Background and introduction
• Concern with research implementation was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994 an Advisory Group was established to identify research priorities in this field. The Implementation Methods Programme (IMP) flowed from this and its Commissioning Group funded 36 projects. Funding for the IMP was capped before the second round of commissioning. The Commissioning Group was disbanded and eventually responsibility for the programme passed to the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO) which, when most projects had finished, asked the Health Economics Research Group (HERG) to conduct an evaluation. This was intended to cover: the quality of outputs; lessons to be learnt about the communication strategy and the commissioning process; and the benefits or payback from the projects. As agreed, the evaluation also addresses the questions of whether there should be a synthesis of the findings from the IMP and any further assessment of payback.
Chapter 2: Methods
• We adopted a wide range of quantitative and qualitative methods in the evaluation. They included: documentary analysis; interviews with key actors; questionnaires to the funded lead researchers; questionnaires to potential users; and desk analysis.
Chapter 3: The outputs from the programme
• As in previous assessments of research programmes, we first examined the outputs in terms of knowledge production and various items related to capacity to conduct further research. Although there was a high response rate to the questionnaire to lead researchers (30/36), missing responses mean that the data given below are incomplete. In the case of publications, however, we also made some use of data previously gathered by the programme office.
• We attempted to identify publications that were in some way a specific product of IMP funding. About half (59) of the publications from the IMP projects are articles in peer reviewed journals. The journal used most frequently for publication, the BMJ, is also the one with the highest journal impact factor score of those publishing articles specifically from the programme. The recent publication dates of many articles reduce the value of citation analysis. Nevertheless, one article, Coulter et al, 1999, has already been cited on 53 occasions. Important publications, including No Magic Bullets (Oxman et al, 1995), are also associated with preliminary work undertaken for the IMP to assist priority setting.
• Fifteen projects, with grants of over £1.3 million, have been awarded to IMP researchers by other funders for follow-on studies connected in some way to the IMP. We also collected details about some non-IMP researchers who are building on the IMP projects.
• Research training provided in at least nine of the funded IMP projects is associated with higher/research degrees, including three MDs and four PhDs, that have been awarded or are being completed.
Chapter 4: Disseminating and using the research findings
• Limited thought had been given by the Implementation Methods Programme to dissemination strategies, but many of the individual researchers were active here. In response to the questionnaires, lead researchers reported making 92 presentations to academic audiences and 104 to practitioner/service groups. Some lead researchers showed that their effective dissemination led to utilisation of the findings.
• The Commissioning Group gave some thought to the likely use that could be made of individual research projects, but there was limited systematic analysis of how the findings as a whole would be taken forward. Achieving impact is difficult in this complex field and less than a third of lead researchers claimed to have done so, but about half thought impact could be expected. Based mainly on reports from lead researchers, we give a brief account of how the findings from six projects are being utilised.
• We sent electronic questionnaires to groups of potential users of selected projects but this produced a very low response rate. Our postal survey to Heads of Midwifery/researchers in perinatal care produced a higher response of 44%. Amongst those who did respond, there is quite a high level of knowledge about some of the programme’s projects and some level of existing and potential utilisation. We suggest, however, that in some cases there are difficulties in identifying how far the respondent’s focus is on the findings from the original research projects, and how far it is on the IMP study itself, which is about ways of influencing the uptake of such findings. Comments from several respondents showed strong support for the cutting edge nature of some of the research. Others, however, also indicated why findings might not be utilised by some practitioners. Several respondents advocated greater dissemination of the IMP.
Chapter 5: Comparing applications with outputs
• We attempted to compare the scores given to project applications with those given to projects based on their outputs. This exercise faced various problems. The final reports from all completed projects had in theory already been reviewed and given scores for their quality and relevance. In practice, not all final reports received scores. We added a refinement by giving further scores that incorporated the additional information we gathered about both publications and any uptake of the research findings.
• Various limitations meant that we conducted this analysis on just 19 of the 36 projects. Nevertheless, the wide range of scores given to the outputs from projects indicates that some were much more successful than others. Our rather limited evidence suggests that there is some correlation between the scores for applications and those for outputs but it is small, which could be related to the difficulties encountered during commissioning.
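The comparison described in this chapter — ranking projects by the scores given to their applications and by the scores given to their outputs, then asking how well the two rankings agree — can be illustrated with a rank correlation. The sketch below is not part of the evaluation itself: the project scores are invented, and the function is a minimal pure-Python Spearman's rho.

```python
# Sketch: rank correlation between application scores and output scores.
# The score values here are invented for illustration only.

def rank(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for six projects (application score, output score).
application = [7, 5, 9, 6, 8, 4]
output = [6, 4, 7, 8, 5, 3]
rho = spearman(application, output)  # ≈ 0.6 for these invented scores
```

A rho near 1 would mean application scores predicted output scores well; the evaluation's finding of a small correlation corresponds to a value well below that.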
Chapter 6: Lessons learnt about the commissioning process
• Those who established the IMP were aware that it was a different type of research field from those previously addressed within the NHS R&D Programme, but one regarded nonetheless as important. Within the NHS R&D Programme at that time a standard clinical RCT approach was strongly favoured. There was also, as ever, a need for quick results.
• In developing an understanding of implementation the Advisory Group (AG) conducted cutting edge analysis, consulted widely and drew on a wide range of disciplines. Our interviewees generally took the view that the AG worked well in setting priorities and went as far as it could at the time, especially given the time constraints.
• Based on our field work and analysis we identified a series of lessons that might
inform future exercises. More attention was required to ensure that all relevant background disciplines were adequately taken into account in setting priorities and commissioning research. Some of these processes needed to be given more time than was available. Consultation needed to be organised in a sufficiently selective way to be of maximum benefit in such a complex area. A time-limited programme was not the most appropriate way to cover a field such as this.
• In relation to the commissioning of the projects, we identified issues about the composition of commissioning groups and how people from different backgrounds (researchers, practitioners, managers and patient representatives) should best be involved.
• In this new field the Commissioning Group (CG) had to work closely with applicants to develop some of the research applications. This raised issues about how, and when, this process should be handled.
• Despite its own rationale, and for a variety of reasons, including the disbanding of the CG, the Implementation Methods Programme never developed an implementation or communication strategy for its own findings.
• The general conclusion of those who had been involved with the IMP was that it worked as well as it could at the time, and that various important projects were commissioned. But it was only a start.
Chapter 7: Should a synthesis of the findings from the programme and further payback analysis be undertaken?
• From interviews and questionnaires we identified widespread, but not total, agreement that there should be some type of synthesis of the findings from the IMP. There is more debate about the form such a synthesis should take. There is some support for a more limited type of stock taking, but also wider backing for the inclusion of many different elements. These could include: a conceptual map of the field of research implementation; an exploration of how the findings from the IMP fit into the context of research implementation today; and an assessment of how far work is still needed in those areas where no projects were funded. One possible suggestion that might incorporate much of this thinking is for the establishment of a group or commission of leading researchers in the field. Their investigation could incorporate all these elements and attempt to show how the issues could be advanced.
• We suggest that further work on assessing the payback from the IMP is probably not worthwhile unless it is undertaken as part of a wide-ranging synthesis.
Chapter 8: Conclusions, lessons and recommendations
• We conclude that the IMP was seen by many of those involved as a new and exciting field. Looking back, they were generally positive about what was started through the IMP. It commissioned a series of projects that produced some important, rigorous, and cutting edge research, at least some of which is making an impact. But this is a complex area in which traditional clinical research, health services research and the social sciences all have a role to play. A unique set of difficulties, as well as opportunities, was faced by those responsible for taking the programme forward. The intellectual challenges of constructing a programme to cover such a vast area with diverse and sometimes conflicting conceptual and methodological perspectives, were compounded by practical problems. These included the capping of the programme’s funding and the premature winding up of the Commissioning Group. As a result, this complex programme, which arguably needed better support than its more clinically orientated predecessors, did not receive it at some stages. Those involved in the programme had a considerable task – the difficulties of which were not completely appreciated at the time. They are clearer in retrospect and feed into the lessons and recommendations presented here, but it is recognised that a programme such as the SDO is already adopting some of the steps.
• In relation to research commissioning and communication strategies for research programmes in general, we suggest it could be helpful if protocols were drawn up to cover certain potential difficulties. These include the remit and role of the various stakeholders represented on commissioning groups and the extent to which commissioning groups should be expected to support applicants with their proposals. Perhaps the key general lesson from this evaluation is the need for research programmes to have a proper communication strategy. This should target dissemination at relevant audiences and stress the desirability of making contact with potential users as early as possible in the process of devising a project.
• Our other recommendations are more specifically relevant when the SDO Programme is considering an area such as implementation methods research. It
would be desirable for more time to be made available for preparatory work than was allowed for the IMP and also scope provided for the programme to be able to re-visit issues and learn from early results. It is difficult to incorporate all the analysis that is required if a programme is operating in a time-limited way.
• Our conclusion that research implementation is a crucial area for the NHS R&D Programme leads to the recommendation that more R&D activity is needed in this field in order to assist delivery of some key NHS agenda items. As a preliminary step, there is certainly scope for a type of stock taking of the findings from the IMP. On balance there seems also to be an argument for conducting a synthesis of work in the implementation field that goes beyond a mere collation of findings from the specific projects funded. If undertaken, it should fundamentally examine the current NHS needs for research on implementation and how they could be addressed in the light of the findings from the IMP and elsewhere.
• Finally, we recommend that more attention should be given to the timing of evaluations such as this and that a phased approach should be adopted. Furthermore, researchers should be informed at the outset of their project about the likely requirements that might be placed upon them in terms of responding to requests for information by those conducting an evaluation.
National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO)
Assessing the payback from health R & D: From ad hoc studies to regular monitoring
Chapter 1 : Introduction
• The increasing demands for the benefits of payback from publicly funded R&D to be assessed are based partly on the need to justify or account for expenditure on R&D, and partly on the desire for information to assist resource allocation and the better management of R&D funds. The former consideration is particularly strong in relation to the R&D expenditure that comes out of the wider NHS budget.
• In this report a range of categories of payback will be identified along with a variety of methods for assessing them.
• The aim of the report is to make recommendations as to how the outcomes from health research might best be monitored on a regular basis. The specific context of the report is the NHS R&D Programme but many of the issues will be relevant for a wide range of funders of health R&D.
• The introduction not only sets out a plan of the report but also suggests that readers familiar with the general arguments and existing literature may choose to jump to Chapter 6.
Chapter 2 : Review of Existing Approaches to Assessing the Payback from Research
• Existing work describes various approaches to valuing research. Some are ex ante and attempt to predict the outcomes of research being considered, others are ex post or retrospective.
• The five categories of benefit or payback from health R&D that have been identified involve contributions: to knowledge; to research capacity and future research; to improved information for decision making; to the efficiency, efficacy and equity of health care services; and to the nation’s economic performance. These are shown in Table 1 of the report.
• The process by which R&D generates final outcomes can be modelled as a sequence. This includes primary outputs such as publications; secondary outputs in the form of policy or administrative decisions; and final outcomes which comprise the health and economic benefits. Feedback loops are also introduced and mitigate the limitations of a linear approach.
• Qualitative and quantitative approaches can be used but there are immense problems with time lags and attributing outcomes, and sometimes even outputs, to specific items of research funding.
• Four common methods of measuring payback can be used. Expert review, by peers or, sometimes, users is the traditional way of assessing the quality of research. Bibliometric techniques can involve not only counting publications but also using datasets such as the Science Citation Index and Wellcome’s Research Outputs Database (ROD). The various methods of economic analysis of payback are difficult to undertake given the costs and problems of acquiring relevant information and estimating benefits. Social science methods include case studies, which can provide useful information but are resource intensive, and questionnaires to researchers and potential research users.
Chapter 3 : Characteristics of a Routine Monitoring System
• In moving from ad hoc research studies of payback towards more regular monitoring, it is noted that, whereas there has always been a tradition of evaluating research, in the public services in general there is now a greater emphasis on audit, performance measurement and indicators. A review of these various systems suggests we should be looking to develop a system of outcomes monitoring that incorporates performance indicators (PIs) and measurement, rather than an audit system that tries to monitor activities against predetermined targets.
• Standard characteristics of performance measurement systems do not necessarily apply to research where, for example, there are non-standard outputs. Difficulties have arisen in the USA in attempting to apply the Government Performance and Results Act to research funding agencies. It is shown that because the findings of basic research, in particular, enter a knowledge pool in which people and ideas interact, it is difficult to use a performance-indicator approach to track eventual outcomes. However, for some types of health research it has proved more feasible to trace the flow between research outputs and outcomes.
• An outcomes monitoring system could be useful if it met the following criteria: relevant to, and as comprehensive as possible in covering, the funder’s objectives; relevant to the funder’s decision-making processes; encourages accurate compliance; minimises unintended consequences; and has acceptable costs.
Chapter 4 : Differences Between Research Types
• The range of differences between types of research can be relevant for the design of a routine monitoring system. The OECD distinguishes between basic research, applied research and experimental development. Most DH/NHS research is applied. There might be more of a tradition of publication of findings in applied research in health than in other fields. Nevertheless, the publication and incentives patterns operating in basic research mean that it would be inappropriate to use bibliometric indicators in a simple way across all fields even in health research.
• Despite having some differences from health research in publication patterns and in the detailed categories of payback, the broad approach proposed in Chapter 6 could be applied to social care research.
• Research that is commissioned, especially by the government, has some of the minimum conditions built into it that are associated with outcomes being generated, in particular because the funder has identified that a contribution in this area will be valuable.
Chapter 5 : What Units of Research?
• The term programme has various meanings including being used to describe a collection of projects on a common theme and to describe a block of funding for a research unit.
• Three main streams or modes of funding can be identified: projects, which are administratively grouped into programmes including a responsive programme; institutions/centres/units; and individual researchers. These three streams are displayed in Figure 1. It is probable that the regular data-gathering for a monitoring system would operate at the basic level of each stream or mode.
• Previous work demonstrates that the full range of benefits can sometimes be applied at the level of projects, either in the responsive mode or in programmes, through the use of questionnaires to researchers. Expert and user review and user surveys have also been applied.
• Institutions and centres increasingly have experience not only of traditional periodic expert review but also of producing annual reports, although there are debates about what dimensions to include in such reviews and reports.
• Individuals in receipt of research development awards have completed questionnaires during and after the awards. These concentrate on the development of research capacity but can go wider.
Chapter 6 : A Possible Comprehensive Outcomes Monitoring System
• The proposed system is intended for DH/NHS to monitor the outcomes from its R&D in order to justify the R&D expenditure and assist with managing the portfolio. More detailed information is required for the latter purpose.
• We propose that a multidimensional approach be adopted to cover all the dimensions of payback and that information be gathered from three sets of sources; Table 3 shows which methods would cover which output/outcome categories.
• Firstly, possibly annually, a questionnaire (possibly electronic) covering most payback categories should gather data from the basic level of each funding stream, i.e. from lead researchers of projects, from research institutions/centres, and from individual award holders.
• Secondly, supplementary information should be gathered from external databases (including the citation indices and Wellcome’s ROD).
• Thirdly, a range of approaches, i.e. user surveys, reviews by experts and peers, case studies including economic evaluations, and analysis of sources used in policy documents such as NICE guidelines, would be undertaken on a sample basis. They would provide not only supplementary information but, as with the external databases, would also verify the data collected directly from researchers.
• These proposals can be evaluated against the criteria set out in Chapter 3:
• The system is relevant to DH’s objectives of generating payback in a range of categories.
• Various problems have to be overcome before the system could be fully decision relevant. First, it might be necessary to ask researchers to apportion the contribution made to specific outputs from various funding streams. Second, to be decision relevant the information would have to be analysed and presented in a manner consistent with funders’ decision making processes. This would involve a) showing how, for each outcome and output, for example publications, data from one project or stream could be compared with those from another and b) demonstrating how different outputs and outcomes could be aggregated.
• The questions of accuracy of data, minimisation of unintended consequences and the acceptability of the net costs are also addressed.
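As a rough illustration of the multidimensional record-keeping proposed in this chapter, the sketch below stores counts of payback items per category for each project and aggregates them across a funding stream. The category labels are shortened versions of the five categories identified in Chapter 2, and the project entries are invented; this is a data-structure sketch, not the report's actual system.

```python
# Sketch: per-project payback records aggregated across a funding stream.
# Category labels abbreviate the report's five payback categories;
# all project data below are invented.

from dataclasses import dataclass, field

CATEGORIES = [
    "knowledge",
    "research capacity and future research",
    "informing policy and decision making",
    "health sector benefits",
    "economic benefits",
]

@dataclass
class ProjectPayback:
    project_id: str
    counts: dict = field(default_factory=dict)  # category -> number of recorded items

def aggregate(projects):
    """Total recorded payback items per category across a stream."""
    totals = {c: 0 for c in CATEGORIES}
    for p in projects:
        for category, n in p.counts.items():
            totals[category] += n
    return totals

# A hypothetical stream of two projects returning questionnaire data.
stream = [
    ProjectPayback("P01", {"knowledge": 3, "informing policy and decision making": 1}),
    ProjectPayback("P02", {"knowledge": 2, "research capacity and future research": 1}),
]
totals = aggregate(stream)  # totals["knowledge"] == 5
```

The same record shape could in principle be filled from any of the three data sources the chapter lists (researcher questionnaires, external databases, sampled reviews), which is what makes cross-checking between them possible.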
Chapter 7 : Research and Monitoring
• Whilst this report is primarily concerned with moving from ad hoc studies towards a routine monitoring system there are issues that need further research.
• Before embarking on full implementation the feasibility needs to be tested of items such as on-line recording of data and asking researchers to attribute proportions of research outputs to separate funding agencies.
• Once the system is implemented the value of some items can be better assessed, for example the additional value provided by self reporting of publications beyond that gained from relying on external databases.
• The data provided by the system would provide opportunities for further payback research on, for example, links between publications and other categories of payback.
• Some items such as network analysis could potentially be added to the monitoring system after further examination of them.
• Finally the benefit from the monitoring system itself should be assessed.
Department of Health; Wellcome Trust
The journals of importance to UK clinicians: A questionnaire survey of surgeons
Background: Peer-reviewed journals are seen as a major vehicle in the transmission of research
findings to clinicians. Perspectives on the importance of individual journals vary and the use of
impact factors to assess research is criticised. Other surveys of clinicians suggest a few key journals
within a specialty, and sub-specialties, are widely read. Journals with high impact factors are not
always widely read or perceived as important. In order to determine whether UK surgeons
consider peer-reviewed journals to be important information sources and which journals they read
and consider important to inform their clinical practice, we conducted a postal questionnaire
survey and then compared the findings with those from a survey of US surgeons.
Methods: A questionnaire survey sent to 2,660 UK surgeons asked which information sources
they considered to be important and which peer-reviewed journals they read, and perceived as
important, to inform their clinical practice. Comparisons were made with numbers of UK NHS-funded
surgery publications, journal impact factors and other similar surveys.
Results: Peer-reviewed journals were considered to be the second most important information
source for UK surgeons. The modal number of journals read was four, with academics reading more
than non-academics. Two journals, the BMJ and the Annals of the Royal College of Surgeons of England,
are prominent across all sub-specialties and others within sub-specialties. The British Journal of
Surgery plays a key role within three sub-specialties. UK journals are generally preferred and
readership patterns are influenced by membership journals. Some of the journals viewed by
surgeons as being most important, for example the Annals of the Royal College of Surgeons of England,
do not have high impact factors.
Conclusion: Combining the findings from this study with comparable studies highlights the
importance of national journals and of membership journals. Our study also illustrates the
complexity of the link between the impact factors of journals and the importance of the journals
to clinicians. This analysis potentially provides an additional basis on which to assess the role of
different journals, and the published output from research.
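The journal impact factors discussed in this survey follow a simple definition: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch of that calculation, with invented counts:

```python
# Sketch: the standard two-year journal impact factor.
# All counts below are invented for illustration only.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in year Y to its articles from
# the previous two years, which together contained 400 citable items.
jif = impact_factor(1200, 400)  # -> 3.0
```

The survey's point is visible in the formula: the impact factor counts only citations, so a membership journal that is widely read and clinically influential can still score low if it is cited infrequently.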
Proposed methods for reviewing the outcomes of health research: the impact of funding by the UK's Arthritis Research Campaign
Background: External and internal factors are increasingly encouraging research funding bodies
to demonstrate the outcomes of their research. Traditional methods of assessing research are still
important, but can be merged into broader multi-dimensional categorisations of research benefits.
The onus has hitherto been on public sector funding bodies, but in the UK the role of medical
charities in funding research is particularly important and the Arthritis Research Campaign, the
leading medical charity in its field in the UK, commissioned a study to identify the outcomes from
research that it funds. This article describes the methods to be used.
Methods: A case study approach will enable narratives to be told, illuminating how research
funded in the early 1990s was (or was not) translated into practice. Each study will be organised
using a common structure, which, with careful selection of cases, should enable cross-case analysis
to illustrate the strengths of different modes and categories of research. Three main
interdependent methods will be used: documentary and literature review; semi-structured
interviews; and bibliometric analysis. The evaluative framework for organising the studies was
previously used for assessing the benefits from health services research. Here, it has been
specifically amended for a medical charity that funds a wide range of research and is concerned to
develop the careers of researchers. It was further refined in three pilot studies. The framework has
two main elements. First, a multi-dimensional categorisation of benefits going from the knowledge
produced in peer reviewed journal articles through to the health and potential economic gain. The
second element is a logic model, which, with various stages, should provide a way of organising the
studies. The stock of knowledge is important: much research, especially basic, will feed into it and
influence further research rather than directly lead to health gains. The cross-case analysis will look
for factors associated with outcomes.
Conclusions: The pilots confirmed the applicability of the methods for a full study which should
assist the Arthritis Research Campaign to demonstrate the outcomes from its funding, and provide
it with evidence to inform its own policies.
Research impact evaluation, a wider context: Findings from a research impact pilot
In the face of increasing pressure to demonstrate the socio-economic impact of funded research, whether it is funded directly by research councils or indirectly by governmental research block grants, institutions have to tackle the complexity of understanding, tracking, collecting, and analysing the impact of all their research activities. This paper attempts to encapsulate the wider context of research impact by delineating a broad definition of what might be classified as impact. It also suggests a number of different dimensions that can help in the development of a systematic research impact assessment
framework. The paper then proceeds to indicate how boundaries and criteria around the definition of impact and these dimensions can be used to refine the impact assessment framework in order to focus on the objectives of the assessor. A pilot project, run at Brunel University, was used to test the validity of the approach and
possible consequences. A tool specifically developed for the pilot, the Brunel Research
Impact Device for Evaluation (BRIDE), is used for the analysis of research impact collected during the pilot. The paper reports on the findings of the analysis produced by BRIDE and shows how a number of areas might be greatly affected by the boundaries set on the definition and dimensions of research impact. The pilot project shows that useful information on impacts can be generated and it also provides a way to identify areas of work from each unit of assessment for which it would be worth developing narrative case studies. The pilot project has illustrated that it is feasible to make progress in terms of assessing impact, but that there are many difficulties to be addressed before impact assessment can be incorporated into a system of assessing the impact from the university sector as a whole. The paper concludes with an institutional perspective on the value of the approach and highlights possible applications. It also confirms the intention to expand the pilot and introduce new lines of investigation.
Project Retrosight. Understanding the returns from cardiovascular and stroke research: Methodology Report
Copyright © 2011 RAND Europe. All rights reserved. This project explores the impacts arising from cardiovascular and stroke research funded 15-20 years ago and attempts to draw out aspects of the research, researcher or environment that are associated with high or low impact. The project is a case-study-based review of 29 cardiovascular and stroke research grants funded in Australia, Canada and the UK between 1989 and 1993. The case studies focused on the individual grants but considered the development of the investigators and ideas involved in the research projects from initiation to the present day. Grants were selected through a stratified random selection approach that aimed to include both high- and low-impact grants. The key messages are as follows: 1) The cases reveal that a large and diverse range of impacts arose from the 29 grants studied. 2) There are variations between the impacts derived from basic biomedical and clinical research. 3) There is no correlation between knowledge production and wider impacts. 4) The majority of economic impacts identified come from a minority of projects. 5) We identified factors that appear to be associated with high and low impact. This report presents the key observations of the study and an overview of the methods involved. It has been written for funders of biomedical and health research and health services, health researchers, and policy makers in those fields. It will also be of interest to those involved in research and impact evaluation. This study was initiated with internal funding from RAND Europe and HERG, with continuing funding from the UK National Institute for Health Research, the Canadian Institutes of Health Research, the Heart and Stroke Foundation of Canada and the National Heart Foundation of Australia. The UK Stroke Association and the British Heart Foundation provided support in kind through access to their archives.
Project Retrosight. Understanding the returns from cardiovascular and stroke research: Case Studies
Benefits from clinicians and healthcare organisations engaging in research
In Editor’s Choice, Godlee supports and re-emphasises the positive points about National Institute for Health Research (NIHR) clinical research networks made in Gulland’s article.1 2 We welcome this support for research networks and for the part they can play in a more fully integrated research and healthcare system. Research engagement by clinicians and healthcare organisations is widely held to improve health services performance. However, we found the issue to be complex in our review conducted for the NIHR Health Services and Delivery Research (HS&DR) Programme in 2012-13.3 Thirty-three papers were included in the analysis, and 28 were positive about improved performance, although only seven identified improved outcomes rather than improved processes. Diverse mechanisms contributed to these improvements. In a subsequent article we consider more recent evidence,4 including the finding that UK NHS trusts active in research have lower risk-adjusted mortality for acute admissions.5 Increased attention to this issue covers not only clinician participation but also organisational developments in the NIHR and NHS, such as the Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) and Academic Health Science Networks (AHSNs).6 7 These seek to promote better integration of research and healthcare systems by strengthening research networks, developing research capacity, and ensuring that healthcare organisations (both providers and commissioners) see research as an integral component of their overall structure. Such initiatives need to be linked to further empirical analysis that considers not only the research engagement of all relevant actors but also the organisational determinants of the impact of such engagement on practice.
- …