
    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and thus a change in counting method will often change the value of an indicator. Therefore, it is important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on their underlying logic, I have arranged the arguments into four groups: Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.

    Validation of counting methods in bibliometrics

    The discussion about counting methods in bibliometrics is often reduced to the choice between full and fractional counting. However, several studies document that this distinction is too simple. The aim of the present study is to give an overview of counting methods in the bibliometric literature and to provide insight into their properties and use. A mix of methods is used. In the preliminary results, a literature review covering 1970-2018 identified 29 original counting methods. Seventeen were introduced in the period 2010-2018. Twenty-one of the 29 counting methods are rank-dependent and fractionalized, meaning that the authors of a publication share 1 credit but do not receive equal shares, for example harmonic counting. The internal and external validation of the counting methods are assessed. Three criteria for well-constructed bibliometric indicators - adequacy, sensitivity, and homogeneity - are used to assess the internal validity. Regarding the external validation of the counting methods, it is investigated whether the intentions in the studies that introduced the 29 counting methods comply with the subsequent use of the counting methods. This study has the potential to give a solid foundation for the use of and discussion about counting methods. Comment: Preprint: author's manuscript submitted to the conference STI2020. Due to the coronavirus, STI2020 was postponed until September 2021. All submissions were returned to the authors before peer review.
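    To make the distinction in this abstract concrete, here is a minimal sketch (not taken from the study itself) of three of the counting methods it mentions. Harmonic counting is a rank-dependent, fractionalized method: the n authors share 1 credit, with the author at rank i receiving (1/i) divided by the sum 1/1 + 1/2 + ... + 1/n.

    ```python
    def full_credits(n_authors):
        """Full counting: every author receives the whole credit of 1."""
        return [1.0] * n_authors

    def fractional_credits(n_authors):
        """Fractional counting: the authors share 1 credit equally."""
        return [1 / n_authors] * n_authors

    def harmonic_credits(n_authors):
        """Harmonic counting (rank-dependent, fractionalized): the authors
        share 1 credit, but earlier-ranked authors receive larger shares."""
        denom = sum(1 / k for k in range(1, n_authors + 1))
        return [(1 / i) / denom for i in range(1, n_authors + 1)]
    ```

    For a three-author paper, harmonic counting gives shares of 6/11, 3/11, and 2/11; the shares still sum to 1, which is what "fractionalized" means here, while the unequal split is what "rank-dependent" means.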

    On Fractional Approach to Analysis of Linked Networks

    In this paper, we present the outer product decomposition of a product of compatible linked networks. It provides a foundation for the fractional approach in network analysis. We discuss the standard and Newman's normalization of networks. We propose some alternatives for fractional bibliographic coupling measures.

    Applied Evaluative Informetrics: Part 1

    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017. This book presents an introduction to the field of applied evaluative informetrics, and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance, and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, and focuses on the role of extra-informetric factors in the development of indicators, and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools. Comment: The posted version is a preprint (author copy) of Part 1 (General Introduction and Synopsis) of a book entitled Applied Evaluative Informetrics, to be published by Springer in the summer of 2017.

    An empirical review of the different variants of the Probabilistic Affinity Index as applied to scientific collaboration

    Responsible indicators are crucial for research assessment and monitoring. Transparency and accuracy of indicators are required to make research assessment fair and ensure reproducibility. However, sometimes it is difficult to conduct or replicate studies based on indicators due to the lack of transparency in conceptualization and operationalization. In this paper, we review the different variants of the Probabilistic Affinity Index (PAI), considering both the conceptual and empirical underpinnings. We begin with a review of the historical development of the indicator and the different alternatives proposed. To demonstrate the utility of the indicator, we apply PAI to identifying preferred partners in scientific collaboration. A streamlined procedure is provided to demonstrate the variations and appropriate calculations. We then compare the results of implementation for five specific countries involved in international scientific collaboration. Despite the different proposals on its calculation, we do not observe large differences between the PAI variants, particularly with respect to country size. As with any indicator, the selection of a particular variant is dependent on the research question. To facilitate appropriate use, we provide recommendations for the use of the indicator given specific contexts. Comment: 35 pages, 3 figures, 5 tables.
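    The abstract does not spell out the formula, but in one common formulation the Probabilistic Affinity Index compares the observed probability of collaboration between two countries with the probability expected if their collaborations were independent. The sketch below uses that formulation as an assumption; the variable names are illustrative, not taken from the paper.

    ```python
    def pai(c_ij, c_i, c_j, total):
        """Probabilistic Affinity Index, one common formulation (assumed here):
        the observed collaboration probability between partners i and j,
        divided by the probability expected under independence.
        c_ij:  collaborations between i and j
        c_i:   total collaborations of i
        c_j:   total collaborations of j
        total: grand total of collaborations in the dataset
        A value above 1 suggests i and j are preferred partners."""
        observed = c_ij / total
        expected = (c_i / total) * (c_j / total)
        return observed / expected
    ```

    Because both numerator and denominator are normalized by the same total, the index is less sensitive to raw country size than plain collaboration counts, which is consistent with the abstract's observation about country size.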

    Publication counting methods for a national research evaluation exercise

    This work was supported by the DIALOG Program (Grant name “Research into Excellence Patterns in Science and Art”) financed by the Ministry of Science and Higher Education in Poland. In this paper, we investigate the effects of using four methods of publication counting (complete, whole, fractional, square root fractional) and limiting the number of publications (at researcher and institution levels) on the results of a national research evaluation exercise across fields using Polish data. We use bibliographic information on 0.58 million publications from the 2013–2016 period. Our analysis reveals that the largest effects are in those fields within which a variety of publication and cooperation patterns can be observed (e.g. in Physical sciences or History and archeology). We argue that selecting the publication counting method for national evaluation purposes needs to take into account the current situation in the given country in terms of the excellence of research outcomes, level of internal, external and international collaboration, and publication patterns in the various fields of sciences. Our findings show that the social sciences and humanities are not significantly influenced by the different publication counting methods and limiting the number of publications included in the evaluation, as publication patterns in these fields are quite different from those observed in the so-called hard sciences. When discussing the goals of any national research evaluation system, we should be aware that the ways of achieving these goals are closely related to the publication counting method, which can serve as incentives for certain publication practices.
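    The four counting methods named in this abstract can be sketched at the institution level. The exact formulas used in the Polish evaluation exercise are not given in the abstract, so the definitions below, in particular reading "square root fractional" as the square root of the fractional share, are assumptions for illustration only.

    ```python
    import math

    def institution_credit(k, m, method):
        """Credit to an institution with k of a publication's m authors.
        The formulas are an illustrative assumption, not taken from the paper.
        """
        if method == "complete":          # one credit per affiliated author
            return float(k)
        if method == "whole":             # one credit if any author is affiliated
            return 1.0 if k > 0 else 0.0
        if method == "fractional":        # proportional share of a single credit
            return k / m
        if method == "sqrt_fractional":   # square root of the fractional share
            return math.sqrt(k / m)
        raise ValueError(f"unknown method: {method}")
    ```

    Under these readings, complete and whole counting are not additive across institutions (credits for one publication can sum to more than 1), while fractional counting is; this is one reason the choice of method can reshape field-level results, as the abstract reports.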