61,184 research outputs found
A Proposed Standard for the Scholarly Citation of Quantitative Data
An essential aspect of science is a community of scholars cooperating and competing in the pursuit of common goals. A critical component of this community is the common language of and the universal standards for scholarly citation, credit attribution, and the location and retrieval of articles and books. We propose a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative data sets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.
The metric tide: report of the independent review of the role of metrics in research assessment and management
This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration.
This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture.
The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the 'gaming' of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.
The pros and cons of the use of altmetrics in research assessment
© 2020 The Authors. Published by Levi Library Press. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher's website: http://doi.org/10.29024/sar.10

Many indicators derived from the web have been proposed to supplement citation-based indicators in support of research assessments. These indicators, often called altmetrics, are available commercially from Altmetric.com and Elsevier's Plum Analytics or can be collected directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an article is published, thus providing earlier impact evidence. Their disadvantages often include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact. Despite these limitations, altmetrics have been widely adopted by publishers, apparently to give authors, editors and readers insights into the level of interest in recently published articles. This article summarises evidence for and against extending the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can play a role in some other contexts. They can be informative when evaluating research units that rarely produce journal articles, when seeking to identify evidence of novel types of impact during institutional or other self-evaluations, and when selected by individuals or groups to support narrative-based non-academic claims. In addition, Mendeley reader counts are uniquely valuable as early (mainly) scholarly impact indicators to replace citations when gaming is not possible and early impact evidence is needed. Organisations using alternative indicators need to recruit or develop in-house expertise to ensure that they are not misused, however.
Theory and Practice of Data Citation
Citations are the cornerstone of knowledge propagation and the primary means
of assessing the quality of research, as well as directing investments in
science. Science is increasingly becoming "data-intensive", where large volumes
of data are collected and analyzed to discover complex patterns through
simulations and experiments, and most scientific reference works have been
replaced by online curated datasets. Yet, given a dataset, there is no
quantitative, consistent and established way of knowing how it has been used
over time, who contributed to its curation, what results have been yielded or
what value it has.
The development of a theory and practice of data citation is fundamental for
considering data as first-class research objects with the same relevance and
centrality of traditional scientific products. Many works in recent years have
discussed data citation from different viewpoints: illustrating why data
citation is needed, defining the principles and outlining recommendations for
data citation systems, and providing computational methods for addressing
specific issues of data citation.
The current panorama is many-faceted and an overall view that brings together
diverse aspects of this topic is still missing. Therefore, this paper aims to
describe the lay of the land for data citation, both from the theoretical (the
why and what) and the practical (the how) angle.Comment: 24 pages, 2 tables, pre-print accepted in Journal of the Association
for Information Science and Technology (JASIST), 201
Constructing experimental indicators for Open Access documents
The ongoing paradigm change in the scholarly publication system ('science is
turning to e-science') makes it necessary to construct alternative evaluation
criteria/metrics which appropriately take into account the unique
characteristics of electronic publications and other research output in digital
formats. Today, major parts of scholarly Open Access (OA) publications and the
self-archiving area are not well covered in the traditional citation and
indexing databases. The growing share and importance of freely accessible
research output demands new approaches/metrics for measuring and evaluating
these new types of scientific publications. In this paper we propose a
simple quantitative method which establishes indicators by measuring the
access/download pattern of OA documents and other web entities of a single web
server. The experimental indicators (search engine, backlink and direct access
indicator) are constructed based on standard local web usage data. This new
type of web-based indicator is developed to model the specific demand for
better study/evaluation of the accessibility, visibility and interlinking of
open accessible documents. We conclude that e-science will need new stable
e-indicators.

Comment: 9 pages, 3 figure
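The three experimental indicators above (search engine, backlink and direct access) are derived by classifying the referrer field of standard local web usage data. The following is a minimal illustrative sketch of that classification step, not the authors' implementation; the search-engine domain list, the host name `repository.example.org`, and the function names are invented for the example.

```python
from urllib.parse import urlparse

# Hypothetical domains and host; the real study would use its own lists.
SEARCH_ENGINES = {"google.com", "bing.com", "duckduckgo.com"}
OWN_HOST = "repository.example.org"  # the single web server under study

def classify(referrer: str) -> str:
    """Assign one request to an indicator based on its referrer."""
    if referrer in ("", "-"):
        return "direct"          # typed URL, bookmark, or stripped referrer
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if host in SEARCH_ENGINES:
        return "search_engine"   # arrival via a search-engine result page
    if host == OWN_HOST:
        return "internal"        # navigation within the same server
    return "backlink"            # inbound link from an external site

def indicators(referrers):
    """Tally the per-indicator counts over a list of referrer strings."""
    counts = {"search_engine": 0, "backlink": 0, "direct": 0, "internal": 0}
    for r in referrers:
        counts[classify(r)] += 1
    return counts
```

In practice the referrer strings would be parsed out of the server's access log; the sketch only shows how the three access classes could be separated.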
Theoretical studies of the historical development of the accounting discipline: a review and evidence
Many existing studies of the development of accounting thought have either been atheoretical or have adopted Kuhn's model of scientific growth. The limitations of this 35-year-old model are discussed. Four different general neo-Kuhnian models of scholarly knowledge development are reviewed and compared with reference to an analytical matrix. The models are found to be mutually consistent, with each focusing on a different aspect of development. A composite model is proposed. Based on a hand-crafted database, author co-citation analysis is used to map empirically the entire literature structure of the accounting discipline during two consecutive time periods, 1972–81 and 1982–90. The changing structure of the accounting literature is interpreted using the proposed composite model of scholarly knowledge development.
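The core computation behind author co-citation analysis is simple: two authors are co-cited once each time a single citing paper references both. A minimal sketch of that counting step is below; the author names and citing-paper data are invented for illustration and are not drawn from the study's hand-crafted database.

```python
from itertools import combinations
from collections import Counter

# Each set holds the authors cited by one (hypothetical) citing paper.
citing_papers = [
    {"Kuhn", "Lakatos", "Whitley"},
    {"Kuhn", "Lakatos"},
    {"Kuhn", "Whitley"},
]

def cocitation_counts(papers):
    """Count, for every author pair, how many papers cite both authors."""
    counts = Counter()
    for cited_authors in papers:
        # Sort so each pair has one canonical key regardless of order.
        for pair in combinations(sorted(cited_authors), 2):
            counts[pair] += 1
    return counts
```

The resulting pair counts form the co-citation matrix that mapping techniques such as multidimensional scaling or clustering then operate on.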
Applied Evaluative Informetrics: Part 1
This manuscript is a preprint version of Part 1 (General Introduction and
Synopsis) of the book Applied Evaluative Informetrics, to be published by
Springer in the summer of 2017. This book presents an introduction to the field
of applied evaluative informetrics, and is written for interested scholars and
students from all domains of science and scholarship. It sketches the field's
history, recent achievements, and its potential and limits. It explains the
notion of multi-dimensional research performance, and discusses the pros and
cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In
addition, it presents quantitative research assessment as an evaluation
science, and focuses on the role of extra-informetric factors in the
development of indicators, and on the policy context of their application. It
also discusses the way forward, both for users and for developers of
informetric tools.

Comment: The posted version is a preprint (author copy) of Part 1 (General
Introduction and Synopsis) of a book entitled Applied Evaluative
Bibliometrics, to be published by Springer in the summer of 201