A critical cluster analysis of 44 indicators of author-level performance
This paper explores the relationship between author-level bibliometric
indicators and the researchers they "measure", exemplified across five academic
seniorities and four disciplines. Using cluster methodology, the disciplinary
and seniority appropriateness of author-level indicators is examined.
Publication and citation data for 741 researchers across Astronomy,
Science, Environmental Science, Philosophy and Public Health were collected from Web of
Science (WoS). Forty-four indicators of individual performance were computed
using the data. A two-step cluster analysis using IBM SPSS version 22 was
performed, followed by a risk analysis and ordinal logistic regression to
explore cluster membership. Indicator scores were contextualized using the
individual researcher's curriculum vitae. Four different clusters based on
indicator scores ranked researchers as low, middle, high and extremely high
performers. The results show that different indicators were appropriate for
demarcating ranked performance in different disciplines: the h2 indicator in
Astronomy, sum pp top prop in Environmental Science, Q2 in Philosophy, and the
e-index in Public Health. The regression and odds analysis showed that
individual-level indicator scores depended primarily on the number of years since
the researcher's first publication registered in WoS, number of publications
and number of citations. Seniority classification was secondary; therefore, no
seniority-appropriate indicators were confidently identified. Cluster
methodology proved useful in identifying discipline-appropriate indicators,
provided the preliminary data preparation was thorough, but it needed to be
supplemented by other analyses to validate the results. A general disconnect
between the performance of the researcher on their curriculum vitae and the
performance of the researcher based on bibliometric indicators was observed.
Comment: 28 pages, 7 tables, 2 figures, 2 appendices
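The paper's two-step SPSS clustering is not reproduced here, but the general idea of grouping researchers into four performance tiers from an indicator score can be sketched in plain Python. This is a hedged illustration only: a toy one-dimensional k-means stands in for SPSS's two-step procedure (not the same algorithm), and the scores below are invented, not the paper's data.

```python
import random
import statistics

def kmeans_1d(values, k=4, iters=100, seed=1):
    """Toy 1-D Lloyd's k-means: a stand-in for the SPSS two-step
    cluster procedure used in the paper (not the same algorithm)."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        new_centers = [statistics.mean(g) if g else centers[i]
                       for i, g in enumerate(groups)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, groups

# Invented indicator scores for 12 researchers (illustrative only).
scores = [1, 2, 2, 3, 9, 10, 11, 24, 26, 27, 58, 61]
centers, groups = kmeans_1d(scores, k=4)

# Rank clusters by their center: low, middle, high, extremely high.
labels = ["low", "middle", "high", "extremely high"]
for label, (center, members) in zip(labels, sorted(zip(centers, groups))):
    print(f"{label}: center={center:.1f}, members={members}")
```

In the paper, cluster membership was then explored further with risk analysis and ordinal logistic regression; the sketch stops at the tiering step.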
A review of the characteristics of 108 author-level bibliometric indicators
An increasing demand for bibliometric assessment of individuals has led to a
growth of new bibliometric indicators as well as new variants or combinations
of established ones. The aim of this review is to contribute objective
facts about the usefulness of bibliometric indicators of the effects of
publication activity at the individual level. This paper reviews 108 indicators
that can potentially be used to measure performance on the individual author
level, and examines the complexity of their calculations in relation to what
they are supposed to reflect and ease of end-user application.
Comment: to be published in Scientometrics, 201
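As an illustration of the calculation complexity the review examines, the h-index, one of the simplest author-level indicators, needs only a sorted list of citation counts. A minimal sketch (most of the 108 reviewed indicators are more involved):

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([]))                # -> 0
```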
Just Pimping the CV? The Feasibility of Ready-to-use Bibliometric Indicators to Enrich Curriculum Vitae
This poster investigates whether ready-to-use bibliometric indicators can be used by individual scholars to enrich their curriculum vitae. Selected indicators were tested in four different fields and across five different academic seniorities. The results show that performance in bibliometric evaluation is highly individual and that using indicators as "benchmarks" is unwise. Further, the simple calculation of cites per publication per years-since-first-publication is a more informative indicator than the ready-to-use ones and can also be used to estimate whether it is at all worth the scholar's time to apply indicators to their CV.
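The poster's preferred quantity is computed directly from three numbers. A minimal sketch, where the function name and the guard against zero denominators are my own, not the poster's:

```python
def cites_per_pub_per_year(total_citations, publications, years_since_first_pub):
    """Cites per publication per years-since-first-publication:
    the simple indicator the poster found most informative."""
    if publications == 0 or years_since_first_pub == 0:
        return 0.0  # avoid division by zero for very new authors
    return total_citations / publications / years_since_first_pub

print(cites_per_pub_per_year(300, 20, 10))  # -> 1.5
```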
Applying the Leiden Manifesto principles in practice: commonalities and differences in interpretation
The Leiden Manifesto (LM) is changing how we think about and use metrics [1]. Bibliometric evaluation is explained as a combination of quantitative and qualitative methods, allowing the use of different metrics, disciplinary knowledge and research performance strategies. Both bibliometricians and consumers of bibliometrics are encouraged to communicate and use the LM principles to acknowledge what they know and do not know, what is measured and what is not measured, thus legitimizing the use of the metrics.
However, in our previous study, we observed that it is unclear how the LM principles should be interpreted [2, 3]. We suspect that subjective interpretations of the principles do not correlate. To investigate the reliability and validity of the LM, the present study conducts a systematic review of bibliometric reports that apply the LM principles. Reports are retrieved from the LM blog [4], Scopus, Web of Science and Google Scholar. Each principle and its interpretation is coded in NVivo, after which we explore the degree of agreement in the interpretations across the reports.
We find that for some principles, e.g. principle 1, the interpretations are well aligned. For other principles, e.g. principle 3, the interpretations differ but may be seen as complementary. We also observe that interpretations can overlap, and thus the redundancy of the principles, e.g. between principles 3 and 6, needs to be further investigated.
We conclude that, at least for some of the LM principles, reliability appears weak, as the range of interpretations is wide, though complementary. Furthermore, some interpretations are applied to multiple principles, which may point to weak validity.
Further research on the reliability and the validity of the LM will be essential to establish guidance in implementing the LM in practice.