
    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    Throwing Out the Baby with the Bathwater: The Undesirable Effects of National Research Assessment Exercises on Research

    The evaluation of the quality of research at a national level has become increasingly common. The UK has been at the forefront of this trend, having undertaken many assessments since 1986, the latest being the “Research Excellence Framework” in 2014. The argument of this paper is that, whatever the intended results in terms of evaluating and improving research, there have been many, presumably unintended, results that are highly undesirable for research and the university community more generally. We situate our analysis using Bourdieu’s theory of cultural reproduction and then focus on the peculiarities of the 2008 RAE and the 2014 REF, the rules of which allowed for, and indeed encouraged, significant game-playing on the part of striving universities. We conclude with practical recommendations to maintain the general intention of research assessment without the undesirable side-effects.

    How journal rankings can suppress interdisciplinary research. A comparison between Innovation Studies and Business & Management

    This study provides quantitative evidence on how the use of journal rankings can disadvantage interdisciplinary research in research evaluations. Using publication and citation data, it compares the degree of interdisciplinarity and the research performance of a number of Innovation Studies units with that of leading Business & Management schools in the UK. On the basis of various mappings and metrics, this study shows that: (i) Innovation Studies units are consistently more interdisciplinary in their research than Business & Management schools; (ii) the top journals in the Association of Business Schools' rankings span a less diverse set of disciplines than lower-ranked journals; (iii) this results in a more favourable assessment of the performance of Business & Management schools, which are more disciplinary-focused. This citation-based analysis challenges the journal ranking-based assessment. In short, the investigation illustrates how ostensibly 'excellence-based' journal rankings exhibit a systematic bias in favour of mono-disciplinary research. The paper concludes with a discussion of implications of these phenomena, in particular how the bias is likely to negatively affect the evaluation and associated financial resourcing of interdisciplinary research organisations, and may result in researchers becoming more compliant with disciplinary authority over time.

    Research excellence framework: second consultation on the assessment and funding of research


    Webometric analysis of departments of librarianship and information science: a follow-up study

    This paper reports an analysis of the websites of UK departments of library and information science. Inlink counts of these websites revealed no statistically significant correlation with the quality of the research carried out by these departments, as quantified using departmental grades in the 2001 Research Assessment Exercise and citations in Google Scholar to publications submitted for that Exercise. Reasons for this lack of correlation include: difficulties in disambiguating departmental websites from larger institutional structures; the relatively small amount of research-related material in departmental websites; and limitations in the ways that current Web search engines process linkages to URLs. It is concluded that departmental-level webometric analyses do not at present provide an appropriate technique for evaluating academic research quality, and, more generally, that standards are needed for the formatting of URLs if inlinks are to become firmly established as a tool for website analysis.

    Citation gaming induced by bibliometric evaluation: a country-level comparative analysis

    It is several years since national research evaluation systems around the globe started making use of quantitative indicators to measure the performance of researchers. Nevertheless, the effects of these systems on the behavior of the evaluated researchers are still largely unknown. We attempt to shed light on this topic by investigating how Italian researchers reacted to the introduction in 2011 of national regulations in which key passages of professional careers are governed by bibliometric indicators. A new inwardness measure, able to gauge the degree of scientific self-referentiality of a country, is defined as the proportion of citations coming from the country itself compared to the total number of citations gathered by the country. Compared to the trends of the other G10 countries in the period 2000-2016, Italy's inwardness shows a net increase after the introduction of the new evaluation rules. Indeed, globally and also for a large majority of the research fields, Italy became the European country with the highest inwardness. Possible explanations are proposed and discussed, concluding that the observed trends are strongly suggestive of a generalized strategic use of citations, both in the form of author self-citations and of citation clubs. We argue that the Italian case offers crucial insights on the constitutive effects of evaluation systems. As such, it could become a paradigmatic case in the debate about the use of indicators in science-policy contexts.
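    The inwardness measure defined in this abstract is a simple ratio, which can be sketched as follows. This is an illustrative toy example only: the function name and the citation counts are invented for the sketch and are not taken from the study.

```python
def inwardness(domestic_citations: int, total_citations: int) -> float:
    """Share of a country's citations that originate from the country itself."""
    if total_citations <= 0:
        raise ValueError("total_citations must be positive")
    return domestic_citations / total_citations

# Hypothetical example: 3,200 of a country's 10,000 citations are domestic.
print(inwardness(3200, 10000))  # 0.32
```

    Tracked per field and per year, a rising value of this ratio relative to comparator countries is what the abstract describes as increasing inwardness.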

    Systematic analysis of agreement between metrics and peer review in the UK REF

    When performing a national research assessment, some countries rely on citation metrics whereas others, such as the UK, primarily use peer review. In the influential Metric Tide report, a low agreement between metrics and peer review in the UK Research Excellence Framework (REF) was found. However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on agreement between metrics and peer review. We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review. In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF. The interpretation of correlations may be problematic and as an alternative we therefore use measures of agreement that are based on the absolute or relative differences between metrics and peer review. To get an idea of the uncertainty in peer review, we rely on a model to bootstrap peer review outcomes. We conclude that particularly in Physics, Clinical Medicine, and Public Health, metrics agree quite well with peer review and may offer an alternative to peer review.
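    An institution-level agreement measure based on absolute differences, combined with a bootstrap to gauge uncertainty, can be sketched as follows. This is an illustrative toy example only: the scores, the number of institutions, and the resampling scheme are invented for the sketch and are not taken from the REF analysis in the paper.

```python
import random

def mean_absolute_difference(metric_scores, peer_scores):
    """Average absolute gap between metric-based and peer-review scores."""
    pairs = list(zip(metric_scores, peer_scores))
    return sum(abs(m - p) for m, p in pairs) / len(pairs)

# Hypothetical institution-level scores (not REF data).
metric = [3.1, 2.8, 3.5, 2.2, 3.0]
peer = [3.0, 2.9, 3.4, 2.5, 3.1]

observed = mean_absolute_difference(metric, peer)

# Resample institutions with replacement to estimate the uncertainty
# of the agreement measure.
random.seed(0)
replicates = []
for _ in range(1000):
    idx = [random.randrange(len(metric)) for _ in range(len(metric))]
    replicates.append(
        mean_absolute_difference([metric[i] for i in idx],
                                 [peer[i] for i in idx])
    )

replicates.sort()
low, high = replicates[25], replicates[974]  # rough 95% interval
print(f"observed={observed:.2f}, bootstrap 95% interval=({low:.2f}, {high:.2f})")
```

    A small mean absolute difference, with a bootstrap interval that stays small, would correspond to the kind of metrics-peer-review agreement the abstract reports for some fields.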

    Measuring What’s Valued Or Valuing What’s Measured? Knowledge Production and the Research Assessment Exercise

    Power is everywhere. But what is it and how does it infuse personal and institutional relationships in higher education? Power, Knowledge and the Academy: The Institutional is Political takes a close-up and critical look at both the elusive and blatant workings and consequences of power in a range of everyday sites in universities. Authors work with multi-layered conceptions of power to disturb the idea of the academy as a haven of detached reason and instead reveal the ways in which power shapes personal and institutional relationships, the production of knowledge and the construction of academic careers. Chapters focus on, among other areas, student-supervisor relationships, personal PhD journeys, power in research teams, networking, the Research Assessment Exercise in the UK, and the power to construct knowledge in literature reviews. This chapter does not address which mechanism of research assessment provides a more truthful account of the value of a set of ‘research outputs’. Instead, it focuses on the power of any such mechanism to reinforce particular values and to inscribe hierarchies regarding knowledge. Regardless of what replaces it, the UK's RAE will have been productive, not just reflective of academic values. Some of the negative consequences of the RAE for UK academic life are considered, focusing on the operation of power through processes of knowledge production.

    Higher education reform: getting the incentives right

    This study is a joint effort by the Netherlands Bureau for Economic Policy Analysis (CPB) and the Center for Higher Education Policy Studies. It analyses a number of 'best practices' where the design of financial incentives working on the system level of higher education is concerned. In Chapter 1, an overview of some of the characteristics of the Dutch higher education sector is presented. Chapter 2 is a refresher on the economics of higher education. Chapter 3 is about the Australian Higher Education Contribution Scheme (HECS). Chapter 4 is about tuition fees and admission policies in US universities. Chapter 5 looks at the funding of Danish universities through the so-called taximeter-model, which links funding to student performance. Chapter 6 deals with research funding in the UK university system, where research assessment exercises underlie the funding decisions. In Chapter 7 we study the impact of university-industry ties on academic research by examining the US policies on increasing knowledge transfer between universities and the private sector. Finally, Chapter 8 presents food for thought for Dutch policymakers: what lessons can be learned from our international comparison?
