
    The Accuracy of Confidence Intervals for Field Normalised Indicators

    This is an accepted manuscript of an article published by Elsevier in the Journal of Informetrics on 07/04/2017, available online: https://doi.org/10.1016/j.joi.2017.03.004. The accepted version of the publication may differ from the final published version.

    When comparing the average citation impact of research groups, universities and countries, field normalisation reduces the influence of discipline and time. Confidence intervals for these indicators can help with attempts to infer whether differences between sets of publications are due to chance factors. Although both bootstrapping and formulae have been proposed for these intervals, their accuracy is unknown. In response, this article uses simulated data to systematically compare the accuracy of confidence limits in the simplest possible case: a single field and year. The results suggest that the MNLCS (Mean Normalised Log-transformed Citation Score) confidence interval formula is conservative for large groups but almost always safe, whereas bootstrap MNLCS confidence intervals tend to be accurate but can be unsafe for smaller world or group sample sizes. In contrast, bootstrap MNCS (Mean Normalised Citation Score) confidence intervals can be very unsafe, although their accuracy increases with sample size.
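    A minimal sketch of the two approaches compared in the article, assuming a single field and year: the MNLCS divides the group's mean of ln(1 + citations) by the world's mean of the same quantity, and a percentile bootstrap resamples both sets to form an interval. The variable names, simulated citation distributions and resample count below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mnlcs(group_citations, world_citations):
    """MNLCS: group mean of ln(1 + c) divided by the world mean of ln(1 + c)."""
    return np.mean(np.log1p(group_citations)) / np.mean(np.log1p(world_citations))

def bootstrap_ci(group_citations, world_citations, n_boot=10_000, level=0.95):
    """Percentile bootstrap interval, resampling both sets with replacement."""
    group = np.asarray(group_citations)
    world = np.asarray(world_citations)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        g = rng.choice(group, size=group.size, replace=True)
        w = rng.choice(world, size=world.size, replace=True)
        stats[i] = mnlcs(g, w)
    alpha = (1 - level) / 2
    return np.quantile(stats, [alpha, 1 - alpha])

# Skewed, citation-like simulated counts: a small group against a large world set.
world = rng.negative_binomial(1, 0.10, size=5000)
group = rng.negative_binomial(1, 0.08, size=50)
print(mnlcs(group, world), bootstrap_ci(group, world))
```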

    Confidence intervals for normalised citation counts: Can they delimit underlying research capability?

    This is an accepted manuscript of an article published by Elsevier in the Journal of Informetrics on 24/10/2017, available online: https://doi.org/10.1016/j.joi.2017.09.002. The accepted version of the publication may differ from the final published version.

    Normalised citation counts are routinely used to assess the average impact of research groups or nations. There is controversy over whether confidence intervals for them are theoretically valid or practically useful. In response, this article introduces the concept of a group’s underlying research capability to produce impactful research. It then investigates whether confidence intervals could delimit the underlying capability of a group in practice. From 123,120 confidence interval comparisons for the average citation impact of the national outputs of ten countries within 36 individual large monodisciplinary journals, moderately fewer than 95% of subsequent indicator values fall within 95% confidence intervals from prior years, with the percentage declining over time. This is consistent with confidence intervals effectively delimiting the research capability of a group, although it does not prove that this is the cause of the results. The results are unaffected by whether internationally collaborative articles are included.
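    A minimal sketch of the coverage check described in the abstract: given chronologically ordered yearly indicator values with their 95% confidence intervals, count how often each subsequent value falls inside an earlier year's interval. The data layout and example numbers are assumptions for illustration.

```python
def coverage_rate(yearly):
    """yearly: chronologically ordered list of (value, lower, upper) tuples.
    Returns the fraction of later values falling inside earlier intervals."""
    inside = total = 0
    for i, (_, lo, hi) in enumerate(yearly):
        for value, _, _ in yearly[i + 1:]:  # every subsequent year's value
            total += 1
            inside += lo <= value <= hi
    return inside / total if total else float("nan")

# If underlying capability were stable, the rate should be close to 95%;
# the article reports moderately less than that, declining over time.
series = [(1.02, 0.95, 1.10), (1.05, 0.98, 1.12), (1.11, 1.03, 1.19)]
print(coverage_rate(series))
```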

    The research production of nations and departments: A statistical model for the share of publications

    Policy makers and managers sometimes assess the share of research produced by a group (country, department or institution): the percentage of publications in a journal, field or broad area that the group has published. This quantity is affected by essentially random influences that obscure underlying changes over time and differences between groups. A model of research production is needed to help identify whether differences between two shares indicate underlying differences. This article introduces a simple production model for indicators that report the share of the world’s output in a journal or subject category, assuming that every new article has the same probability of being authored by a given group. Under this assumption, confidence limits can be calculated for the underlying production capability (i.e., the probability of publishing). The results of a time series analysis of national contributions to 36 large monodisciplinary journals 1996-2016 are broadly consistent with this hypothesis. Follow-up tests of countries and institutions in 26 Scopus subject categories support the conclusions but highlight the importance of ensuring consistent subject category coverage.
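    A minimal sketch of the confidence limits under the model's stated assumption: if each new article is authored by the group with a fixed probability p, the group's count among n articles is binomial, and a standard binomial interval bounds the underlying capability p. The Wilson score interval below is one common choice; the paper does not necessarily use this exact formula, and the example counts are hypothetical.

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (k successes in n trials)."""
    p_hat = k / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical example: a country authored 120 of a journal's 3000 articles,
# an observed share of 4%; the interval bounds the underlying capability p.
print(wilson_interval(120, 3000))
```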

    Statistical Significance and Effect Sizes of Differences among Research Universities at the Level of Nations and Worldwide based on the Leiden Rankings

    The Leiden Rankings can be used for grouping research universities by treating universities that are not significantly different as a homogeneous set. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called “excellence indicator” PPtop-10% (the proportion of the top-10% most-highly-cited papers assigned to a university), we pursue a classification using (i) overlapping stability intervals, (ii) statistical-significance tests, and (iii) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, the resulting classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco-system of universities should not be underestimated. For practical purposes, our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities.
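    A minimal sketch of the effect-size step, assuming Cohen's w computed from a 2x2 contingency table (top-10% versus other papers for a pair of universities). The counts below are invented for illustration; the abstract reports effect sizes below the w < .2 small-effect threshold.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cohens_w(table):
    """Cohen's w = sqrt(chi-square / N) for a contingency table."""
    table = np.asarray(table)
    chi2 = chi2_contingency(table, correction=False)[0]
    return np.sqrt(chi2 / table.sum())

# Hypothetical counts of [top-10% papers, other papers] for two universities.
uni_a = [150, 850]  # PPtop-10% = 15%
uni_b = [110, 890]  # PPtop-10% = 11%
print(cohens_w([uni_a, uni_b]))  # roughly 0.06 here, i.e. a small effect
```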