
    Field-normalized citation impact indicators and the choice of an appropriate counting method

    Bibliometric studies often rely on field-normalized citation impact indicators in order to make comparisons between scientific fields. We discuss the connection between field normalization and the choice of a counting method for handling publications with multiple co-authors. Our focus is on the choice between full counting and fractional counting. Based on an extensive theoretical and empirical analysis, we argue that properly field-normalized results cannot be obtained when full counting is used. Fractional counting does provide results that are properly field normalized. We therefore recommend the use of fractional counting in bibliometric studies that require field normalization, especially in studies at the level of countries and research organizations. We also compare different variants of fractional counting. In general, it seems best to use either the author-level or the address-level variant of fractional counting.
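
    To make the difference at stake concrete, the sketch below contrasts full counting with the address-level variant of fractional counting. The data and organization names are hypothetical, and the code illustrates the general idea rather than the authors' implementation.

```python
# Sketch: full vs. address-level fractional counting of publications per
# organization. Data and names are hypothetical, not the authors' code.
from collections import defaultdict

# Each paper lists the affiliation of every author position.
papers = [
    {"orgs": ["Leiden", "Leiden", "CWTS"]},  # 3 authors, 2 organizations
    {"orgs": ["CWTS"]},                      # single-author paper
]

def full_counting(papers):
    # Every organization appearing on a paper receives a full credit of 1,
    # so the credits for one paper can sum to more than 1.
    counts = defaultdict(float)
    for p in papers:
        for org in set(p["orgs"]):
            counts[org] += 1.0
    return dict(counts)

def fractional_counting(papers):
    # Each author position carries 1/n_authors credit, so every paper
    # distributes exactly one credit in total (additive by construction).
    counts = defaultdict(float)
    for p in papers:
        share = 1.0 / len(p["orgs"])
        for org in p["orgs"]:
            counts[org] += share
    return dict(counts)

print(full_counting(papers))        # Leiden: 1.0, CWTS: 2.0 -> 3 credits for 2 papers
print(fractional_counting(papers))  # Leiden ~0.67, CWTS ~1.33 -> credits sum to 2
```

    Note how full counting inflates the total: two papers yield three credits, which is precisely the non-additivity that interferes with field normalization.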

    A review of the literature on citation impact indicators

    Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
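
    The field-normalization topic reviewed here can be made concrete with a small sketch. The citation counts and field averages below are invented, and the mean-of-ratios formula shown is one common normalization approach (an MNCS-style indicator), not the only one the review covers.

```python
# Sketch of field normalization: divide each publication's citation count by
# the average citation count of publications in the same field (and year),
# then average the ratios. All figures are hypothetical.

publications = [
    {"citations": 12, "field": "physics"},
    {"citations": 3,  "field": "mathematics"},
]

# Hypothetical world-average citation rates per field for one publication year.
field_averages = {"physics": 10.0, "mathematics": 2.0}

normalized = [p["citations"] / field_averages[p["field"]] for p in publications]
mncs = sum(normalized) / len(normalized)
print(mncs)  # (1.2 + 1.5) / 2 = 1.35 -> 35% above the world average
```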

    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and a change in counting method will therefore often change the value of an indicator. It is consequently important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on the underlying logic of the arguments, I have arranged them in four groups. Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for their counting methods.

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application.
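
    As one concrete instance of the kind of indicator reviewed, the sketch below computes the best-known author-level indicator, the h-index; the citation lists are hypothetical.

```python
def h_index(citations):
    """h-index: the largest h such that the author has at least h
    publications with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: the 25-citation paper adds nothing beyond h=3
```

    Even this simplest case shows the calculation/interpretation trade-off the review examines: the h-index is trivial to compute but insensitive to highly cited outliers.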

    Investigating the interplay between fundamentals of national research systems: performance, investments and international collaborations

    We discuss, at the macro level of nations, the contribution of research funding and the rate of international collaboration to research performance, with important implications for the science of science policy. In particular, we cross-correlate suitable measures of these quantities with a scientometric-based assessment of scientific success, studying both the average performance of nations and their temporal dynamics in the space defined by these variables during the last decade. We find significant differences among nations in terms of efficiency in turning (financial) input into bibliometrically measurable output, and we confirm that growth of international collaboration correlates positively with scientific success, with significant benefits brought by EU integration policies. Various geo-cultural clusters of nations naturally emerge from our analysis. We critically discuss the factors that potentially determine the observed patterns.
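
    A crude sketch of the kind of cross-correlation described might look as follows; the national figures are entirely hypothetical, and statistics.correlation requires Python 3.10+.

```python
# Sketch: a simple efficiency measure (bibliometric output per unit of funding)
# and its relation to the international collaboration rate. Figures invented.
from statistics import correlation  # Python 3.10+

nations = {
    #     (funding, publications, intl. collaboration share)
    "A": (100.0, 900, 0.45),
    "B": (80.0,  500, 0.30),
    "C": (60.0,  600, 0.55),
    "D": (40.0,  250, 0.20),
}

efficiency = {n: pubs / funding for n, (funding, pubs, _) in nations.items()}
collab = {n: c for n, (_, _, c) in nations.items()}

r = correlation(list(efficiency.values()), list(collab.values()))
print(efficiency)
print(f"Pearson r(efficiency, collaboration) = {r:.2f}")
```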

    A critical cluster analysis of 44 indicators of author-level performance

    This paper explores the relationship between author-level bibliometric indicators and the researchers they "measure", exemplified across five academic seniorities and four disciplines. Using cluster methodology, the disciplinary and seniority appropriateness of author-level indicators is examined. Publication and citation data for 741 researchers across Astronomy, Environmental Science, Philosophy and Public Health were collected in Web of Science (WoS). Forty-four indicators of individual performance were computed using the data. A two-step cluster analysis using IBM SPSS version 22 was performed, followed by a risk analysis and an ordinal logistic regression to explore cluster membership. Indicator scores were contextualized using the individual researcher's curriculum vitae. Four different clusters based on indicator scores ranked researchers as low, middle, high and extremely high performers. The results show that different indicators were appropriate for demarcating ranked performance in different disciplines: the h2 indicator in Astronomy, sum pp top prop in Environmental Science, Q2 in Philosophy and the e-index in Public Health. The regression and odds analysis showed that individual-level indicator scores were primarily dependent on the number of years since the researcher's first publication registered in WoS, the number of publications and the number of citations. Seniority classification was secondary; therefore, no seniority-appropriate indicators were confidently identified. Cluster methodology proved useful in identifying discipline-appropriate indicators, provided the preliminary data preparation was thorough, but it needed to be supplemented by other analyses to validate the results. A general disconnect between researchers' performance on their curricula vitae and their performance based on bibliometric indicators was observed.
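
    The paper itself uses SPSS's two-step clustering; the sketch below uses scikit-learn's KMeans as a stand-in, with a small hypothetical indicator matrix, purely to show the shape of the clustering step (standardize indicator scores, then group researchers into performance clusters).

```python
# Sketch of the clustering step, with KMeans standing in for the SPSS
# two-step procedure used in the paper. Rows are researchers; columns are
# indicator scores (hypothetical values for 3 of the 44 indicators).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: publications, citations, h-index
X = np.array([
    [5,    40,   3],
    [8,    90,   5],
    [40,   900, 18],
    [35,   700, 16],
    [120, 5000, 35],
])

# Standardize first so citation counts do not dominate the distance metric.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(labels)  # cluster membership, i.e. ranked performance groups
```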

    Constructing bibliometric networks: A comparison between full and fractional counting

    The analysis of bibliometric networks, such as co-authorship, bibliographic coupling, and co-citation networks, has received a considerable amount of attention. Much less attention has been paid to the construction of these networks. We point out that different approaches can be taken to construct a bibliometric network. Normally the full counting approach is used, but we propose an alternative fractional counting approach. The basic idea of the fractional counting approach is that each action, such as co-authoring or citing a publication, should have equal weight, regardless of, for instance, the number of authors, citations, or references of a publication. We present two empirical analyses in which the full and fractional counting approaches yield very different results. These analyses deal with co-authorship networks of universities and bibliographic coupling networks of journals. Based on theoretical considerations and on the empirical analyses, we conclude that for many purposes the fractional counting approach is preferable to the full counting approach.
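
    The sketch below illustrates the equal-weight idea on a tiny co-authorship network. The author lists are hypothetical, and the fractional variant shown (scaling so each paper contributes a total link weight of 1) is a deliberate simplification, not necessarily the exact weighting scheme of the paper.

```python
# Sketch: full vs. fractional edge weights in a co-authorship network.
from collections import defaultdict
from itertools import combinations

# Hypothetical author lists; in the paper the nodes are e.g. universities.
papers = [["Ann", "Bo", "Cem"], ["Ann", "Bo"]]

def coauthor_network(papers, fractional=False):
    edges = defaultdict(float)
    for authors in papers:
        pairs = list(combinations(sorted(set(authors)), 2))
        if not pairs:  # single-author paper: no links to add
            continue
        # Full counting: every co-author pair gets weight 1. Fractional
        # (one simple variant): scale so the paper's links sum to 1.
        w = 1.0 / len(pairs) if fractional else 1.0
        for pair in pairs:
            edges[pair] += w
    return dict(edges)

print(coauthor_network(papers))
# full: ('Ann','Bo'): 2.0, ('Ann','Cem'): 1.0, ('Bo','Cem'): 1.0
print(coauthor_network(papers, fractional=True))
# fractional: ('Ann','Bo'): ~1.33, ('Ann','Cem'): ~0.33, ('Bo','Cem'): ~0.33
```

    Under full counting the three-author paper injects three times as much total link weight as the two-author paper; under the fractional variant every paper contributes equally, which is the point of the approach.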

    The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation

    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking, and a number of its limitations are pointed out.
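
    The "highly cited publications" indicator (innovation 1) can be sketched in a deliberately simplified form: a publication counts as highly cited if it is among the top 10% most cited in a reference set. The real indicator normalizes by field and year and handles ties carefully; the version and data below are hypothetical.

```python
# Sketch of a PP(top 10%)-style indicator: the share of a university's
# publications that fall in the top 10% most cited of a reference set.
# Simplified: no field/year normalization, crude percentile cut, toy data.
def pp_top10(univ_citations, reference_set):
    ranked = sorted(reference_set, reverse=True)
    threshold = ranked[max(0, len(ranked) // 10 - 1)]  # ~90th percentile
    top = sum(1 for c in univ_citations if c >= threshold)
    return top / len(univ_citations)

reference = list(range(100))         # 100 hypothetical papers, 0..99 citations
university = [95, 91, 40, 12, 3]     # one university's publications
print(pp_top10(university, reference))  # 0.4 -> 40% of its output is highly cited
```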