    Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance

    The Center for Science and Technology Studies (CWTS) at Leiden University advocates the use of specific normalizations for assessing research performance with reference to a world average. The Journal Citation Score (JCS) and the Field Citation Score (FCS) are averaged over the research group or individual researcher under study, and these averages are then used as denominators of the mean Citations per publication (CPP). This normalization is thus based on dividing two averages, a procedure that yields a legitimate indicator only when the underlying distributions are normal. Given the skewed distributions under study, one should instead divide observed by expected values for each publication first, and then average these ratios. We show the effects of the Leiden normalization for a recent evaluation for which we happened to have access to the underlying data.
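
    The distinction between dividing two averages and averaging per-publication ratios can be made concrete with a small numerical sketch. The figures below are hypothetical and serve only to show how the two procedures can diverge on skewed data: the ratio of averages stands in for the CPP divided by the mean expected citation score, while the average of ratios corresponds to the per-publication normalization advocated in the abstract.

    # Hypothetical citation counts and expected (journal/field) citation rates
    citations = [0, 1, 2, 3, 40]            # observed citations per publication (skewed)
    expected  = [1.0, 2.0, 2.0, 4.0, 10.0]  # expected citation rates (JCS/FCS-style values)

    # Ratio of averages: divide the mean observed value by the mean expected value
    ratio_of_averages = (sum(citations) / len(citations)) / (sum(expected) / len(expected))

    # Average of ratios: divide observed by expected per publication, then average
    average_of_ratios = sum(c / e for c, e in zip(citations, expected)) / len(citations)

    print(f"ratio of averages: {ratio_of_averages:.2f}")   # 2.42 for these numbers
    print(f"average of ratios: {average_of_ratios:.2f}")   # 1.25 for these numbers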

    Normalization at the field level: fractional counting of citations

    Van Raan et al. (2010; arXiv:1003.2113) have proposed a new indicator (MNCS) for field normalization. Since field normalization is also used in the Leiden Rankings of universities, in this rejoinder we extend our critique of journal normalization in Opthof & Leydesdorff (2010; arXiv:1002.2769) to field normalization. Fractional citation counting thoroughly solves the issue of normalizing for differences in citation behavior among fields. This approach can also be used to obtain a normalized impact factor.
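
    As a rough illustration of fractional citation counting, the sketch below weights each citation received by the inverse of the number of references in the citing paper, so that citations from reference-dense fields contribute less per citation than citations from fields with short reference lists. The reference counts are invented for the example.

    # Hypothetical numbers of cited references in four papers citing a target paper
    citing_reference_counts = [45, 30, 12, 8]

    integer_count = len(citing_reference_counts)                       # whole counting: 4
    fractional_count = sum(1.0 / n for n in citing_reference_counts)   # fractional counting

    print(f"integer count:    {integer_count}")
    print(f"fractional count: {fractional_count:.3f}")   # about 0.264 for these numbers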

    Does the specification of uncertainty hurt the progress of scientometrics?

    In "Caveats for using statistical significance tests in research assessments" (Journal of Informetrics, 7(1), 2013, 50-62; available at arXiv:1112.2516), Schneider (2013) focuses on Opthof & Leydesdorff (2010) as an example of the misuse of statistics in the social sciences. However, our conclusions are theoretical and do not depend on the use of one statistic or another. We agree with Schneider insofar as he proposes to develop further statistical instruments, such as effect sizes. Schneider (2013), however, argues on meta-theoretical grounds against the specification of uncertainty because, in his opinion, the presence of statistics would legitimate decision-making. We disagree: uncertainty can also be used to open a debate. Scientometric results in which error bars are suppressed for meta-theoretical reasons should not be trusted.
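
    One way to specify uncertainty without committing to a particular significance test is to attach bootstrap error bars to a reported mean. The sketch below is only an illustration with invented citation counts, not the procedure used in the exchange with Schneider.

    import random

    random.seed(0)
    citations = [0, 0, 1, 1, 2, 3, 3, 5, 8, 21]   # invented, skewed citation counts

    def bootstrap_mean_ci(data, n_resamples=10_000, alpha=0.05):
        # Percentile bootstrap confidence interval for the mean
        means = sorted(
            sum(random.choice(data) for _ in data) / len(data)
            for _ in range(n_resamples)
        )
        lower = means[int(alpha / 2 * n_resamples)]
        upper = means[int((1 - alpha / 2) * n_resamples) - 1]
        return lower, upper

    low, high = bootstrap_mean_ci(citations)
    print(f"mean = {sum(citations) / len(citations):.2f}, 95% CI = [{low:.2f}, {high:.2f}]")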

    Rivals for the crown: Reply to Opthof and Leydesdorff

    We reply to the criticism of Opthof and Leydesdorff [arXiv:1002.2769] on the way in which our institute applies journal and field normalizations to citation counts. We point out why we believe most of the criticism is unjustified, but we also indicate where we think Opthof and Leydesdorff raise a valid point.

    Problems with SNIP


    The revised SNIP indicator of Elsevier's Scopus

    The modified SNIP indicator of Elsevier, as recently explained by Waltman et al. (2013) in this journal, solves some of the problems which Leydesdorff & Opthof (2010 and 2011) indicated in relation to the original SNIP indicator (Moed, 2010 and 2011). The use of an arithmetic average, however, remains unfortunate in the case of scientometric distributions, because these can be extremely skewed (Seglen, 1992 and 1997). The new indicator cannot be reproduced independently, or only with difficulty, when used for evaluation purposes, and in this sense remains opaque from the perspective of the evaluated units and scholars.
    Comment: Letter to the Editor of the Journal of Informetrics (2013, in press).
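
    The concern about arithmetic averages over skewed citation distributions can be illustrated with a toy example: a single highly cited paper can dominate the mean while leaving the median almost unchanged. The numbers are invented.

    from statistics import mean, median

    citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 120]   # invented, heavily skewed counts

    print(f"mean   = {mean(citations):.1f}")    # 13.4, pulled up by the single outlier
    print(f"median = {median(citations):.1f}")  # 1.5, closer to the bulk of the papers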