
    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    Applied Evaluative Informetrics: Part 1

    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017. This book presents an introduction to the field of applied evaluative informetrics, and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance, and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, and focuses on the role of extra-informetric factors in the development of indicators, and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools.

    The pros and cons of the use of altmetrics in research assessment

    © 2020 The Authors. Published by Levi Library Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: http://doi.org/10.29024/sar.10
    Many indicators derived from the web have been proposed to supplement citation-based indicators in support of research assessments. These indicators, often called altmetrics, are available commercially from Altmetric.com and Elsevier’s Plum Analytics or can be collected directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an article is published, thus providing earlier impact evidence. Their disadvantages often include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact. Despite these limitations, altmetrics have been widely adopted by publishers, apparently to give authors, editors and readers insights into the level of interest in recently published articles. This article summarises evidence for and against extending the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can play a role in some other contexts. They can be informative when evaluating research units that rarely produce journal articles, when seeking to identify evidence of novel types of impact during institutional or other self-evaluations, and when selected by individuals or groups to support narrative-based non-academic claims. In addition, Mendeley reader counts are uniquely valuable as early (mainly) scholarly impact indicators to replace citations when gaming is not possible and early impact evidence is needed. Organisations using alternative indicators need to recruit or develop in-house expertise to ensure that they are not misused, however.

    Research assessment in the humanities: problems and challenges

    Research assessment is going to play a new role in the governance of universities and research institutions. Evaluation of results is evolving from a simple tool for resource allocation towards an instrument of policy design. In this respect, “measuring” implies a different approach to quantitative aspects, as well as the estimation of qualitative criteria that are difficult to define. Bibliometrics became popular, in spite of its limits, precisely because it offers a simple solution to complex problems. The theory behind it is not especially robust, but available results confirm the method as a reasonable trade-off between costs and benefits. There are, however, some fields of science where quantitative indicators are very difficult to apply owing to the lack of databases and data; in short, the existing information lacks credibility. The humanities and social sciences (HSS) need a coherent methodology for assessing research outputs, but current projects are not very convincing. Creating a shared ranking of journals by the value of their contents, whether at institutional, national or European level, is not enough: it reproduces the same biases as in the hard sciences, and it does not solve the problem of the varied types of outputs and their different, much longer timescales of creation and dissemination. The web (and web 2.0) represents a revolution in the communication of research results, especially in the HSS, and their evaluation has to take this change into account. Furthermore, the growth of open access initiatives (green and gold road) offers a large quantity of transparent, verifiable data structured according to international standards, which allows comparability beyond national borders and, above all, is independent of commercial agents. The pilot scheme carried out at the University of Milan for the Faculty of Humanities demonstrated that it is possible to build quantitative, on average more robust, indicators that can provide a proxy of research production and productivity even in the HSS.

    How journal rankings can suppress interdisciplinary research. A comparison between Innovation Studies and Business & Management

    This study provides quantitative evidence on how the use of journal rankings can disadvantage interdisciplinary research in research evaluations. Using publication and citation data, it compares the degree of interdisciplinarity and the research performance of a number of Innovation Studies units with that of leading Business & Management schools in the UK. On the basis of various mappings and metrics, this study shows that: (i) Innovation Studies units are consistently more interdisciplinary in their research than Business & Management schools; (ii) the top journals in the Association of Business Schools' rankings span a less diverse set of disciplines than lower-ranked journals; (iii) this results in a more favourable assessment of the performance of Business & Management schools, which are more disciplinary-focused. This citation-based analysis challenges the journal ranking-based assessment. In short, the investigation illustrates how ostensibly 'excellence-based' journal rankings exhibit a systematic bias in favour of mono-disciplinary research. The paper concludes with a discussion of the implications of these phenomena, in particular how the bias is likely to affect negatively the evaluation and associated financial resourcing of interdisciplinary research organisations, and may result in researchers becoming more compliant with disciplinary authority over time.
    Comment: 41 pages, 10 figures
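    For readers unfamiliar with how a "degree of interdisciplinarity" can be quantified from citation data, the sketch below implements the Rao-Stirling diversity index, a standard measure in this literature. It is offered as an illustration under stated assumptions, not necessarily as this paper's exact method; the disciplines, proportions and distances in it are invented.

        # Illustrative sketch only: the Rao-Stirling diversity index, a
        # common way to quantify interdisciplinarity from the disciplinary
        # mix of a unit's cited references. All inputs below are made up.

        def rao_stirling(proportions, distance):
            """Sum over ordered discipline pairs i != j of p_i * p_j * d_ij."""
            total = 0.0
            for i, p_i in proportions.items():
                for j, p_j in proportions.items():
                    if i != j:
                        total += p_i * p_j * distance[(i, j)]
            return total

        # Hypothetical unit citing three fields; distances lie in [0, 1],
        # where 1 means maximally dissimilar fields (e.g. 1 minus the cosine
        # similarity of the fields' citation profiles).
        p = {"economics": 0.5, "sociology": 0.3, "engineering": 0.2}
        d = {(a, b): 1.0 for a in p for b in p if a != b}
        d[("economics", "sociology")] = d[("sociology", "economics")] = 0.4

        print(f"Rao-Stirling diversity: {rao_stirling(p, d):.2f}")  # 0.44

    Higher values indicate a portfolio that draws on many, and more distant, disciplines; finding (i) above corresponds to Innovation Studies units scoring higher on measures of this kind.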

    Utilising content marketing metrics and social networks for academic visibility

    There are numerous assumptions on research evaluation in terms of the quality and relevance of academic contributions. Researchers are becoming increasingly acquainted with bibliometric indicators, including citation analysis, the impact factor, the h-index, webometrics and academic social networking sites. In this light, this chapter presents a review of these concepts and considers relevant theoretical underpinnings related to the content marketing of scholars. It critically evaluates previous papers on the subject of academic reputation and deliberates on individual researchers’ personal branding. It also explains how metrics are currently being used to rank the academic standing of journals as well as higher education institutions. In a nutshell, this chapter suggests that scholarly impact depends on a number of factors, including the accessibility of publications, peer review of academic work, and social networking among scholars.
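    Of the indicators listed above, the h-index is the most algorithmically crisp: it is the largest h such that a researcher has h papers with at least h citations each. A minimal sketch, using invented citation counts:

        # Minimal sketch of the h-index named above: the largest h such
        # that h papers have at least h citations each. Counts are invented.

        def h_index(citations):
            """Return the h-index for a list of per-paper citation counts."""
            ranked = sorted(citations, reverse=True)
            h = 0
            for rank, cites in enumerate(ranked, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3: a 4th paper would need >= 4 citations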