
    The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation

    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking, and a number of limitations of the ranking are pointed out.
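    The distinction between full and fractional counting of collaborative publications (innovation 2 above) can be made concrete with a small sketch. The snippet below is illustrative only and uses made-up university names; the Leiden Ranking's actual fractionalisation rules (e.g. at the level of addresses or authors) are more involved.

```python
from collections import defaultdict

def count_publications(publications, fractional=True):
    """Count publications per university.

    With full counting, every collaborating university receives a
    weight of 1 for a joint paper; with fractional counting, the
    single credit is split equally over the collaborating universities.
    """
    totals = defaultdict(float)
    for universities in publications:
        unis = set(universities)
        weight = 1.0 / len(unis) if fractional else 1.0
        for uni in unis:
            totals[uni] += weight
    return dict(totals)

# Hypothetical record: one single-university paper and one three-way collaboration.
pubs = [["Leiden"], ["Leiden", "Delft", "Utrecht"]]
print(count_publications(pubs, fractional=True))   # Leiden ~1.33, Delft ~0.33, Utrecht ~0.33
print(count_publications(pubs, fractional=False))  # Leiden: 2, Delft: 1, Utrecht: 1
```

    Under fractional counting the credit handed out per paper always sums to one, which prevents heavily collaborative universities from being counted several times for the same publication.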

    The revised SNIP indicator of Elsevier's Scopus

    The modified SNIP indicator of Elsevier, as recently explained by Waltman et al. (2013) in this journal, solves some of the problems that Leydesdorff & Opthof (2010 and 2011) identified in the original SNIP indicator (Moed, 2010 and 2011). The use of an arithmetic average, however, remains unfortunate for scientometric distributions because these can be extremely skewed (Seglen, 1992 and 1997). The new indicator can hardly be reproduced independently when used for evaluation purposes, and in this sense remains opaque from the perspective of evaluated units and scholars. Comment: Letter to the Editor of the Journal of Informetrics (2013; in press).
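    The concern about arithmetic averages over skewed citation distributions can be illustrated with a toy example; the citation counts below are invented for illustration only.

```python
import statistics

# Hypothetical, heavily skewed citation counts within one journal:
# most papers attract few citations, one paper is very highly cited.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 250]

print(statistics.mean(citations))    # 26.8 -> dominated by the single outlier
print(statistics.median(citations))  # 2.0  -> closer to the typical paper
```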

    An Integrated Impact Indicator (I3): A New Definition of "Impact" with Policy Relevance

    Allocation of research funding, as well as promotion and tenure decisions, is increasingly made using indicators and impact factors drawn from citations to published work. A debate among scientometricians about the proper normalization of citation counts has been resolved with the creation of an Integrated Impact Indicator (I3) that solves a number of problems found in previously used indicators. The I3 applies non-parametric statistics using percentiles, allowing highly cited papers to be weighted more than less cited ones. It further allows unbundling of venues (i.e., journals or databases) at the article level. Measures at the article level can be re-aggregated in terms of units of evaluation. At the venue level, the I3 provides a properly weighted alternative to the journal impact factor. I3 has the added advantage of enabling and quantifying classifications such as the six percentile rank classes used by the National Science Board's Science & Engineering Indicators. Comment: Research Evaluation (in press).
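    How percentile rank classes can be aggregated into a single score may be easier to see with a short sketch. The class boundaries below follow the six classes of the Science & Engineering Indicators mentioned in the abstract, but the integer weights 1-6 are only an illustrative choice, not the authors' exact definition of I3.

```python
def weighted_percentile_score(percentiles):
    """Weighted sum over six percentile rank classes.

    `percentiles` are citation percentiles (0-100) of a unit's papers,
    computed against a reference set. Papers in higher classes
    contribute larger integer weights (1 for the bottom half, 6 for
    the top 1%).
    """
    upper_bounds = [50, 75, 90, 95, 99, 100]  # upper bound of each class
    total = 0
    for p in percentiles:
        for weight, upper in enumerate(upper_bounds, start=1):
            if p <= upper:
                total += weight
                break
    return total

# Hypothetical unit with papers at the 30th, 60th, 92nd and 99.5th percentiles.
print(weighted_percentile_score([30, 60, 92, 99.5]))  # 1 + 2 + 4 + 6 = 13
```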

    How to improve the prediction based on citation impact percentiles for years shortly after the publication date?

    The findings of Bornmann, Leydesdorff, and Wang (in press) revealed that taking journal impact into account improves the prediction of long-term citation impact. This paper further explores the possibility of improving citation impact measurements based on a short citation window by considering journal impact and other variables, such as the number of authors, the number of cited references, and the number of pages. The dataset contains 475,391 journal papers published in 1980 and indexed in Web of Science (WoS, Thomson Reuters), together with all annual citation counts (from 1980 to 2010) for these papers. As an indicator of citation impact, we used citation percentiles calculated with the approach of Hazen (1914). Our results show that citation impact measurement can indeed be improved: if factors generally influencing citation impact are included in the statistical analysis, the explained variance in long-term citation impact increases substantially. However, this increase is only visible for the years shortly after publication, not for later years. Comment: Accepted for publication in the Journal of Informetrics. arXiv admin note: text overlap with arXiv:1306.445
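    Hazen's (1914) rule assigns the paper with rank i (by ascending citation count, out of n papers) the percentile 100 * (i - 0.5) / n. The sketch below illustrates this rule on invented citation counts; the handling of tied citation counts (averaging their ranks) is an assumption of this sketch, and the paper's exact tie-breaking may differ.

```python
def hazen_percentiles(citation_counts):
    """Citation percentiles following Hazen's rule: 100 * (rank - 0.5) / n,
    with ranks based on ascending citation counts and ties receiving the
    average of their ranks (an assumption for this sketch)."""
    n = len(citation_counts)
    order = sorted(range(n), key=lambda i: citation_counts[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # find the run of papers with identical citation counts
        while j + 1 < n and citation_counts[order[j + 1]] == citation_counts[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return [100 * (r - 0.5) / n for r in ranks]

# Hypothetical citation counts for five papers.
print(hazen_percentiles([0, 3, 3, 10, 50]))  # [10.0, 40.0, 40.0, 70.0, 90.0]
```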