The metric tide: report of the independent review of the role of metrics in research assessment and management
This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration.
This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture.
The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.
Examining the Impact of a Consensus Approach to Content Alignment Studies
Although both content alignment and standard-setting procedures rely on content-expert panel judgements, only the latter employs discussion among panel members. This study employed a modified form of the Webb methodology to examine content alignment for twelve tests administered as part of the Massachusetts Comprehensive Assessment System (MCAS). This modification required panel members to discuss items for which there was no consensus regarding the item’s depth of knowledge or targeted standard. After the discussion, panel members were allowed to change their original ratings. The number of changes that occurred was analyzed in relation to the number of items discussed and the size of the panel. Moreover, we evaluated the impact these changes had on the overall judgments of alignment as reported by Webb’s Web Alignment Tool (WAT). Findings suggest that discussion among panel members between rating rounds increased agreement among panel members’ ratings but had minimal effects on the overall judgments of content alignment for 11 of the 12 tests evaluated.
Peer review and citation data in predicting university rankings: a large-scale analysis
Most Performance-based Research Funding Systems (PRFS) draw on peer review and bibliometric indicators, two different methodologies which are sometimes combined. A common argument against the use of indicators in such research evaluation exercises is their low correlation at the article level with peer review judgments. In this study, we analyse 191,000 papers from 154 higher education institutes which were peer reviewed in a national research evaluation exercise. We combine these data with 6.95 million citations to the original papers. We show that when citation-based indicators are applied at the institutional or departmental level, rather than at the level of individual papers, surprisingly large correlations with peer review judgments can be observed, up to r = 0.802 (n = 37, p < 0.001) for some disciplines. In our evaluation of ranking prediction performance based on citation data, we show we can reduce the mean rank prediction error by 25% compared to previous work. This suggests that citation-based indicators are sufficiently aligned with peer review results at the institutional level to be used to lessen the overall burden of peer review on national evaluation exercises, leading to considerable cost savings.
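The aggregation effect described in this abstract can be illustrated with a toy simulation (this is a hypothetical sketch, not the study's data: institution counts, noise levels, and the shared-quality model are all assumptions). When peer scores and citation counts each equal a shared institutional signal plus large independent article-level noise, the article-level correlation is weak, but averaging within institutions cancels most of the noise and the institutional-level correlation becomes strong:

```python
import random
import statistics

random.seed(0)

n_inst, per_inst = 40, 200  # hypothetical: 40 institutions, 200 articles each
inst_quality = [random.gauss(0, 1) for _ in range(n_inst)]

# Article-level peer scores and citation counts share the institutional
# signal but carry large independent per-article noise (sd = 2).
peer, cites = [], []
for q in inst_quality:
    for _ in range(per_inst):
        peer.append(q + random.gauss(0, 2))
        cites.append(q + random.gauss(0, 2))

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Article level: shared variance 1 vs total variance 5, so r is near 0.2.
r_article = pearson(peer, cites)

# Institutional level: averaging 200 articles shrinks the noise variance
# by a factor of 200, so the correlation approaches 1.
peer_inst = [statistics.fmean(peer[i * per_inst:(i + 1) * per_inst])
             for i in range(n_inst)]
cites_inst = [statistics.fmean(cites[i * per_inst:(i + 1) * per_inst])
              for i in range(n_inst)]
r_inst = pearson(peer_inst, cites_inst)

print(f"article-level r = {r_article:.2f}, institutional r = {r_inst:.2f}")
```

The same mechanism, noise averaging out under aggregation, is one reason institution-level indicator/peer-review correlations can be much larger than article-level ones without implying that indicators predict the quality of any individual paper.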
Assessing evaluation procedures for individual researchers: the case of the Italian National Scientific Qualification
The Italian National Scientific Qualification (ASN) was introduced as a prerequisite for applying for tenured associate or full professor positions at state-recognized universities. The ASN is meant to attest that an individual has reached a suitable level of scientific maturity to apply for professorship positions. A five-member panel, appointed for each scientific discipline, is in charge of evaluating applicants by means of quantitative indicators of impact and productivity, and through an assessment of their research profile. Many concerns were raised about the appropriateness of the evaluation criteria, and in particular about the use of bibliometrics for the evaluation of individual researchers. Additional concerns related to the perceived poor quality of the final evaluation reports. In this paper we assess the ASN in terms of the appropriateness of the applied methodology and the quality of the feedback provided to the applicants. We argue that the ASN is not fully compliant with best practices for the use of bibliometric indicators in the evaluation of individual researchers; moreover, the quality of the final reports varies considerably across the panels, suggesting that measures should be put in place to prevent sloppy practices in future ASN rounds.
In which fields do higher impact journals publish higher quality articles?
The Journal Impact Factor and other indicators that assess the average citation rate of articles in a journal are consulted by many academics and research evaluators, despite initiatives against overreliance on them. Nevertheless, there is limited evidence about the extent to which journal impact indicators in any field relate to human judgements about the journals or their articles. In response, we compared the average citation rates of journals against expert judgements of their articles in all fields of science. We used preliminary quality scores for 96,031 articles published 2014-18 from the UK Research Excellence Framework (REF) 2021. We show that whilst there is a positive correlation between expert judgements of article quality and average journal impact in all fields of science, it is very weak in many fields and is never strong. The strength of the correlation varies from 0.11 to 0.43 for the 27 broad fields of Scopus. The highest correlation for the 94 Scopus narrow fields with at least 750 articles was only 0.54, for Infectious Diseases, and there was only one negative correlation, for the mixed category Computer Science (all). The results suggest that the average citation impact of a Scopus-indexed journal is never completely irrelevant to the quality of an article, even though it is never a strong indicator of article quality.
Peer Review in an Era of Evaluation
This open access volume explores peer review in the scientific community and academia. While peer review is as old as modern science itself, recent changes in the evaluation culture of higher education systems have increased the use of peer review, and its purposes, forms and functions have become more diversified. This book puts together a comprehensive set of conceptual and empirical contributions on various peer review practices with relevance for the scientific community and higher education institutions worldwide. The book consists of three parts, in which the editors and contributors examine the history, problems and developments of peer review, as well as the specificities of various peer review practices. In doing so, this book gives an overview of and examines peer review, and asks how it can move forward. This is an open access book.