
    Does the specification of uncertainty hurt the progress of scientometrics?

    In "Caveats for using statistical significance tests in research assessments,"--Journal of Informetrics 7(1)(2013) 50-62, available at arXiv:1112.2516 -- Schneider (2013) focuses on Opthof & Leydesdorff (2010) as an example of the misuse of statistics in the social sciences. However, our conclusions are theoretical since they are not dependent on the use of one statistics or another. We agree with Schneider insofar as he proposes to develop further statistical instruments (such as effect sizes). Schneider (2013), however, argues on meta-theoretical grounds against the specification of uncertainty because, in his opinion, the presence of statistics would legitimate decision-making. We disagree: uncertainty can also be used for opening a debate. Scientometric results in which error bars are suppressed for meta-theoretical reasons should not be trusted

    The concordance of field-normalized scores based on Web of Science and Microsoft Academic data: A case study in computer sciences

    In order to assess Microsoft Academic as a useful data source for evaluative bibliometrics, it is crucial to know whether citation counts from Microsoft Academic can be used in common normalization procedures and whether the normalized scores agree with the scores calculated on the basis of established databases. To this end, we calculate the field-normalized citation scores of the publications of a computer science institute based on Microsoft Academic and the Web of Science and estimate the statistical concordance of the scores. Our results suggest that field-normalized citation scores can be calculated with Microsoft Academic and that these scores are in good agreement with the corresponding scores from the Web of Science.
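
    A minimal sketch of this kind of procedure, assuming the common normalization in which a paper's citations are divided by the mean citations of its field and publication year, and using a Spearman rank correlation as a simple stand-in for the concordance analysis (the paper's field delineation and concordance statistic may differ; all data are hypothetical):

        from collections import defaultdict
        from scipy.stats import spearmanr  # assumed to be available

        def field_normalized(papers, cite_key):
            """papers: dicts with 'field', 'year' and a citation-count key `cite_key`."""
            totals, counts = defaultdict(float), defaultdict(int)
            for p in papers:
                key = (p["field"], p["year"])
                totals[key] += p[cite_key]
                counts[key] += 1
            baselines = {k: totals[k] / counts[k] for k in totals}
            return [p[cite_key] / baselines[(p["field"], p["year"])]
                    if baselines[(p["field"], p["year"])] > 0 else 0.0
                    for p in papers]

        # Hypothetical institute publications with citation counts from both databases
        papers = [
            {"field": "CS", "year": 2015, "wos_cites": 10, "ma_cites": 12},
            {"field": "CS", "year": 2015, "wos_cites": 3,  "ma_cites": 4},
            {"field": "CS", "year": 2016, "wos_cites": 25, "ma_cites": 30},
            {"field": "CS", "year": 2016, "wos_cites": 1,  "ma_cites": 2},
        ]
        wos = field_normalized(papers, "wos_cites")
        ma = field_normalized(papers, "ma_cites")
        rho, p = spearmanr(wos, ma)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")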

    Universality of citation distributions revisited

    Radicchi, Fortunato, and Castellano [arXiv:0806.0974, PNAS 105(45), 17268] claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields indeed have fairly similar citation distributions, there are also quite a few exceptions. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.
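
    The universality claim rests on rescaling each paper's citation count c by the average c0 of its field. A minimal sketch with hypothetical data, using a two-sample Kolmogorov-Smirnov test only as one possible way to compare the rescaled distributions (not necessarily the test of the validation study):

        from scipy.stats import ks_2samp  # assumed to be available

        def rescale(citations):
            c0 = sum(citations) / len(citations)  # field average
            return [c / c0 for c in citations]

        # Hypothetical citation counts for papers in two fields
        field_a = [0, 1, 2, 2, 5, 8, 13, 40]
        field_b = [0, 0, 1, 3, 4, 9, 20, 55]

        stat, p = ks_2samp(rescale(field_a), rescale(field_b))
        print(f"KS statistic = {stat:.2f}, p = {p:.3f}")
        # Under strict universality, the rescaled distributions should be
        # statistically indistinguishable for every pair of fields.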

    Rivals for the crown: Reply to Opthof and Leydesdorff

    We reply to the criticism of Opthof and Leydesdorff [arXiv:1002.2769] on the way in which our institute applies journal and field normalizations to citation counts. We point out why we believe most of the criticism is unjustified, but we also indicate where we think Opthof and Leydesdorff raise a valid point.

    The weakening relationship between the Impact Factor and papers' citations in the digital age

    Historically, papers have been physically bound to the journal in which they were published, but in the electronic age papers are available individually and are no longer tied to their respective journals. Hence, papers can now be read and cited on their own merits, independently of the journal's physical availability, reputation, or Impact Factor. We compare the strength of the relationship between journals' Impact Factors and the actual citations received by their respective papers from 1902 to 2009. Throughout most of the 20th century, papers' citation rates were increasingly linked to their respective journals' Impact Factors. Since 1990, however, with the advent of the digital age, the strength of the relation between Impact Factors and paper citations has been decreasing. This decrease began sooner in physics, a field that was quicker to make the transition into the electronic domain. Furthermore, since 1990, the proportion of highly cited papers coming from highly cited journals has been decreasing, and accordingly, the proportion of highly cited papers not coming from highly cited journals has been increasing. Should this pattern continue, it might bring an end to the use of the Impact Factor as a way to evaluate the quality of journals, papers, and researchers.
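
    A minimal sketch of this kind of trend analysis, with hypothetical data: for each year, correlate the Impact Factor of the publishing journal with the citation counts of its individual papers and follow the coefficient over time (the paper's exact measure of the strength of the relationship may differ):

        from collections import defaultdict
        from scipy.stats import pearsonr  # assumed to be available

        # (year, journal_impact_factor, paper_citations) for a toy set of papers
        records = [
            (1980, 2.0, 12), (1980, 8.0, 60), (1980, 0.5, 2),  (1980, 4.0, 30),
            (2005, 2.0, 30), (2005, 8.0, 22), (2005, 0.5, 25), (2005, 4.0, 40),
        ]

        by_year = defaultdict(list)
        for year, jif, cites in records:
            by_year[year].append((jif, cites))

        for year in sorted(by_year):
            jifs, cites = zip(*by_year[year])
            r, _ = pearsonr(jifs, cites)
            print(f"{year}: r(IF, citations) = {r:.2f}")
        # A declining r over the years is the 'weakening relationship' in the title.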

    Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations

    Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (i) citation behavior varies among fields of science and therefore leads to systematic differences, and (ii) there are no statistics to inform us whether differences are significant. The recently introduced SNIP indicator of Scopus tries to remedy the first of these two problems, but the normalization decisions involved make it impossible to test for significance. Using fractional counting of citations, based on the assumption that impact is proportionate to the number of references in the citing documents, citations can be contextualized at the paper level, and the aggregated impacts of document sets can be tested for their significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386), despite a five-fold difference between their impact factors (2.793 and 13.156, respectively).
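
    A minimal sketch of fractional counting as described above: each citation is weighted by one over the number of references in the citing document, so a citation from a reference-heavy paper counts for less. The data and the aggregation into a journal-level average are hypothetical:

        def fractional_impact(cited_papers):
            """cited_papers: {paper_id: [number of references of each citing document]}.
            Returns the average fractionally counted citation weight per paper."""
            weights = []
            for citing_ref_counts in cited_papers.values():
                weights.append(sum(1.0 / n for n in citing_ref_counts if n > 0))
            return sum(weights) / len(weights)

        # Hypothetical journal: three papers, each cited by documents with the
        # listed numbers of cited references
        journal = {
            "paper1": [10, 50, 25],  # cited three times
            "paper2": [40],
            "paper3": [],            # uncited
        }
        print(f"fractionally counted impact = {fractional_impact(journal):.3f}")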

    An Integrated Impact Indicator (I3): A New Definition of "Impact" with Policy Relevance

    Decisions about the allocation of research funding, as well as about promotion and tenure, are increasingly made using indicators and impact factors drawn from citations to published work. A debate among scientometricians about the proper normalization of citation counts has been resolved with the creation of an Integrated Impact Indicator (I3) that solves a number of problems found in previously used indicators. The I3 applies non-parametric statistics using percentiles, allowing highly cited papers to be weighted more than less-cited ones. It further allows the unbundling of venues (i.e., journals or databases) at the article level. Measures at the article level can be re-aggregated in terms of units of evaluation. At the venue level, the I3 provides a properly weighted alternative to the journal impact factor. I3 has the added advantage of enabling and quantifying classifications such as the six percentile rank classes used in the National Science Board's Science & Engineering Indicators.
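
    A minimal sketch in the spirit of I3, reading it as a sum of citation percentiles over a unit's papers relative to a reference set; the paper's actual aggregation (e.g., weighting by the six NSB percentile rank classes) may differ, and all data are hypothetical:

        def percentile(value, reference):
            """Share of the reference set with a citation count below `value` (0-100)."""
            below = sum(1 for c in reference if c < value)
            return 100.0 * below / len(reference)

        def i3_like(unit_citations, reference):
            """Sum of citation percentiles of a unit's papers (simplified I3-style score)."""
            return sum(percentile(c, reference) for c in unit_citations)

        # Hypothetical reference set (e.g., all papers of a field/year) and one unit
        reference = [0, 0, 1, 2, 3, 5, 8, 13, 21, 34]
        unit = [2, 13, 34]
        print(f"I3-like score = {i3_like(unit, reference):.1f}")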