The pros and cons of the use of altmetrics in research assessment
© 2020 The Authors. Published by Levi Library Press. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher’s website: http://doi.org/10.29024/sar.10
Many indicators derived from the web have been proposed to supplement citation-based
indicators in support of research assessments. These indicators, often called altmetrics, are
available commercially from Altmetric.com and Elsevier’s Plum Analytics or can be collected
directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they
may reflect important non-academic impacts and may appear before citations when an
article is published, thus providing earlier impact evidence. Their disadvantages often
include susceptibility to gaming, data sparsity, and difficulties translating the evidence into
specific types of impact. Despite these limitations, altmetrics have been widely adopted by
publishers, apparently to give authors, editors and readers insights into the level of interest
in recently published articles. This article summarises evidence for and against extending
the adoption of altmetrics to research evaluations. It argues that whilst systematically-gathered altmetrics are inappropriate for important formal research evaluations, they can
play a role in some other contexts. They can be informative when evaluating research units
that rarely produce journal articles, when seeking to identify evidence of novel types of
impact during institutional or other self-evaluations, and when selected by individuals or
groups to support narrative-based non-academic claims. In addition, Mendeley reader
counts are uniquely valuable as early (mainly) scholarly impact indicators to replace
citations when gaming is not possible and early impact evidence is needed. Organisations
using alternative indicators need to recruit or develop in-house expertise to ensure that they
are not misused, however.
Introduction: Future pathways for science policy and research assessment: metrics vs peer review, quality vs impact
Copyright © 2007 Beech Tree Publishing.
The idea for this special issue arose from observing contrary developments in the design of national research assessment schemes in the UK and Australia during 2006 and 2007. Alternative pathways were being forged, determined, on the one hand, by the perceived relative merits of 'metrics' (quantitative measures of research performance) and peer judgement and, on the other hand, by the value attached to scientific excellence ('quality') versus usefulness ('impact'). This special issue presents a broad range of provocative academic opinion on preferred future pathways for science policy and research assessment. It unpacks the apparent dichotomies of metrics vs peer review and quality vs impact, and considers the hazards of adopting research evaluation policies in isolation from wider developments in scientometrics (the science of research evaluation) and divorced from the practical experience of other nations (policy learning).
Does the public discuss other topics on climate change than researchers? A comparison of explorative networks based on author keywords and hashtags
Twitter accounts have already been used in many scientometric studies, but
the meaningfulness of the data for societal impact measurements in research
evaluation has been questioned. Earlier research focused on social media counts
and neglected the interactive nature of the data. We explore a new network
approach based on Twitter data in which we compare author keywords to hashtags
as indicators of topics. We analyze the topics of tweeted publications and
compare them with the topics of all publications (tweeted and not tweeted). Our
exploratory study is based on a comprehensive publication set of climate change
research. We are interested in whether Twitter data are able to reveal topics
of public discussions which can be separated from research-focused topics. We
find that the most tweeted topics regarding climate change research focus on
the consequences of climate change for humans. Twitter users are interested in
climate change publications which forecast effects of a changing climate on the
environment and in adaptation, mitigation and management issues, rather than in
the methodology of climate-change research and causes of climate change. Our
results indicate that publications using scientific jargon are less likely to
be tweeted than publications using more general keywords. Twitter networks seem
to be able to visualize public discussions about specific topics.
Comment: 31 pages, 1 table, and 7 figures
Reviewing, indicating, and counting books for modern research evaluation systems
In this chapter, we focus on the specialists who have helped to improve the
conditions for book assessments in research evaluation exercises, with
empirically based data and insights supporting their greater integration. Our
review highlights the research carried out by four types of expert communities,
referred to as the monitors, the subject classifiers, the indexers and the
indicator constructionists. Many challenges lie ahead for scholars affiliated
with these communities, particularly the latter three. By acknowledging their
unique, yet interrelated roles, we show where the greatest potential is for
both quantitative and qualitative indicator advancements in book-inclusive
evaluation systems.
Comment: Forthcoming in Glanzel, W., Moed, H.F., Schmoch, U., Thelwall, M.
(2018). Springer Handbook of Science and Technology Indicators. Springer. Some
corrections made in subsection 'Publisher prestige or quality'.
Hybridization of multi-objective deterministic particle swarm with derivative-free local searches
The paper presents a multi-objective derivative-free and deterministic global/local hybrid algorithm for the efficient and effective solution of simulation-based design optimization (SBDO) problems. The objective is to show how the hybridization of two multi-objective derivative-free global and local algorithms achieves better performance than the separate use of the two algorithms in solving specific SBDO problems for hull-form design. The proposed method belongs to the class of memetic algorithms, where the global exploration capability of multi-objective deterministic particle swarm optimization is enriched by exploiting the local search accuracy of a derivative-free multi-objective line-search method. To the authors' best knowledge, studies are still limited on memetic, multi-objective, deterministic, derivative-free, and evolutionary algorithms for an effective and efficient solution of SBDO for hull-form design. The proposed formulation manages global and local searches based on the hypervolume metric. The hybridization scheme uses two parameters to control the local search activation and the number of function calls used by the local algorithm. The most promising values of these parameters were identified using forty analytical tests representative of the SBDO problem of interest. The resulting hybrid algorithm was finally applied to two SBDO problems for hull-form design. For both analytical tests and SBDO problems, the hybrid method achieves better performance than its global and local counterparts.
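The abstract describes a general pattern: deterministic particle swarm exploration, a non-dominated archive, and a derivative-free local search whose activation is governed by the hypervolume metric. The paper's actual formulation, coefficients, and line-search method are not given here; the following is only an illustrative sketch of that pattern under assumed details: Schaffer's bi-objective test function stands in for the SBDO simulation, compass search stands in for the derivative-free line search, and all parameter values (coefficients, reference point, stall threshold) are hypothetical.

```python
def objectives(x):
    # Schaffer's bi-objective test problem: a cheap stand-in for the SBDO simulation
    return (x * x, (x - 2.0) ** 2)

def dominates(a, b):
    # Pareto dominance for minimisation: a is no worse everywhere, better somewhere
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def update_archive(archive, point):
    # keep the archive of (x, f) pairs mutually non-dominated
    if any(dominates(f, point[1]) for _, f in archive):
        return archive
    archive = [(x, f) for x, f in archive if not dominates(point[1], f)]
    archive.append(point)
    return archive

def hypervolume_2d(front, ref):
    # exact 2-D hypervolume of a non-dominated front w.r.t. reference point `ref`
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted({f for _, f in front}):
        if f1 < ref[0] and f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def pattern_search(x, weights, step=0.25, tol=1e-3, budget=40):
    # derivative-free compass search on a weighted-sum scalarisation (local step);
    # `budget` caps the function calls used by the local algorithm
    scal = lambda y: sum(w * fi for w, fi in zip(weights, objectives(y)))
    best, calls = scal(x), 0
    while step > tol and calls < budget:
        improved = False
        for d in (step, -step):
            calls += 1
            cand, val = x + d, scal(x + d)
            if val < best:
                x, best, improved = cand, val, True
                break
        if not improved:
            step *= 0.5
    return x

def memetic_mopso(n_particles=8, iters=30, lo=-4.0, hi=4.0):
    # deterministic set-up: particles on an even grid, zero initial velocity
    xs = [lo + (hi - lo) * (i + 0.5) / n_particles for i in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = [(x, objectives(x)) for x in xs]
    archive = []
    for p in pbest:
        archive = update_archive(archive, p)
    ref = (25.0, 25.0)                 # hypothetical hypervolume reference point
    prev_hv = hypervolume_2d(archive, ref)
    w, c1, c2 = 0.6, 1.0, 1.0          # deterministic PSO coefficients (no random factors)
    for _ in range(iters):
        # deterministic global guide: the archive point with median first objective
        guide = sorted(archive, key=lambda q: q[1][0])[len(archive) // 2][0]
        for i in range(n_particles):
            vs[i] = w * vs[i] + c1 * (pbest[i][0] - xs[i]) + c2 * (guide - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            f = objectives(xs[i])
            if dominates(f, pbest[i][1]):
                pbest[i] = (xs[i], f)
            archive = update_archive(archive, (xs[i], f))
        hv = hypervolume_2d(archive, ref)
        if hv - prev_hv < 1e-3:        # activation rule: global progress has stalled
            for weights in ((1.0, 0.0), (0.5, 0.5), (0.0, 1.0)):
                x0 = archive[len(archive) // 2][0]
                x1 = pattern_search(x0, weights)
                archive = update_archive(archive, (x1, objectives(x1)))
        prev_hv = hv
    return archive
```

The two hybridization parameters the abstract mentions correspond here to the stall threshold that activates the local search and the call budget passed to the local step; their values above are placeholders, not those tuned in the paper.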
How journal rankings can suppress interdisciplinary research. A comparison between Innovation Studies and Business & Management
This study provides quantitative evidence on how the use of journal rankings
can disadvantage interdisciplinary research in research evaluations. Using
publication and citation data, it compares the degree of interdisciplinarity
and the research performance of a number of Innovation Studies units with that
of leading Business & Management schools in the UK. On the basis of various
mappings and metrics, this study shows that: (i) Innovation Studies units are
consistently more interdisciplinary in their research than Business &
Management schools; (ii) the top journals in the Association of Business
Schools' rankings span a less diverse set of disciplines than lower-ranked
journals; (iii) this results in a more favourable assessment of the performance
of Business & Management schools, which are more disciplinary-focused. This
citation-based analysis challenges the journal ranking-based assessment. In
short, the investigation illustrates how ostensibly 'excellence-based' journal
rankings exhibit a systematic bias in favour of mono-disciplinary research. The
paper concludes with a discussion of implications of these phenomena, in
particular how the bias is likely to affect negatively the evaluation and
associated financial resourcing of interdisciplinary research organisations,
and may result in researchers becoming more compliant with disciplinary
authority over time.
Comment: 41 pages, 10 figures