    Problems with SNIP

    Remaining problems with the "New Crown Indicator" (MNCS) of the CWTS

    In their article, entitled "Towards a new crown indicator: some theoretical considerations," Waltman et al. (2010; at arXiv:1003.2167) show that the "old crown indicator" of CWTS in Leiden was mathematically inconsistent and that one should move to the normalization as applied in the "new crown indicator." Although we now agree about the statistical normalization, the "new crown indicator" inherits the scientometric problems of the "old" one in treating subject categories of journals as a standard for normalizing differences in citation behavior among fields of science. We further note that the "mean" is not a proper statistic for measuring differences among skewed distributions. Without changing the acronym of "MNCS," one could define the "Median Normalized Citation Score." This would relate the new crown indicator directly to the percentile approach that is, for example, used in the Science and Engineering Indicators of the US National Science Board (2010). The median is by definition equal to the 50th percentile. The indicator can thus easily be extended with the 1% (= 99th percentile) most highly cited papers (Bornmann et al., in press). The seeming disadvantage of having to use non-parametric statistics is more than compensated by the possible gains in precision.
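    To make the distinction concrete, the following is a minimal sketch (not the authors' code; the citation counts and field-expected rates are purely illustrative) of how a mean-based and a median-based normalized citation score can diverge when one highly cited paper skews the distribution.

```python
# Minimal sketch: contrast a mean-based and a median-based normalized citation
# score for one unit of assessment. Assumes each paper carries its observed
# citations and a field-expected citation rate; all numbers are illustrative.
from statistics import mean, median

papers = [
    {"citations": 0,  "field_expected": 4.2},
    {"citations": 1,  "field_expected": 4.2},
    {"citations": 3,  "field_expected": 7.5},
    {"citations": 60, "field_expected": 7.5},  # one highly cited paper skews the mean
]

ratios = [p["citations"] / p["field_expected"] for p in papers]

mncs = mean(ratios)          # "Mean Normalized Citation Score"
median_ncs = median(ratios)  # the proposed median variant (50th percentile)

print(f"MNCS (mean of ratios):  {mncs:.2f}")
print(f"Median NCS (50th pct.): {median_ncs:.2f}")
```

    Because the median is simply the 50th percentile of the same ratio list, the indicator extends naturally to other percentile thresholds, such as the share of papers among the 1% most highly cited in their field.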

    Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance

    The Center for Science and Technology Studies at Leiden University advocates the use of specific normalizations for assessing research performance with reference to a world average. The Journal Citation Score (JCS) and Field Citation Score (FCS) are averaged for the research group or individual researcher under study, and then these values are used as denominators of the (mean) Citations per Publication (CPP). Thus, this normalization is based on dividing two averages. This procedure only generates a legitimate indicator in the case of underlying normal distributions. Given the skewed distributions under study, one should instead first divide the observed by the expected values for each publication and then average these ratios. We show the effects of the Leiden normalization for a recent evaluation where we happened to have access to the underlying data.
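    The difference between dividing two averages and averaging per-publication ratios can be illustrated with a small numerical sketch (the figures are invented for illustration and are not the evaluation data discussed in the paper):

```python
# Minimal sketch: the "divide two averages" normalization versus averaging the
# per-publication observed/expected ratios. For skewed citation counts the two
# procedures can differ noticeably. All numbers are illustrative.
from statistics import mean

observed = [0, 2, 3, 55]          # citations per publication (skewed)
expected = [5.0, 5.0, 1.0, 10.0]  # field citation scores per publication

ratio_of_means = mean(observed) / mean(expected)                  # CPP / mean FCS
mean_of_ratios = mean(o / e for o, e in zip(observed, expected))  # per paper first

print(f"ratio of means: {ratio_of_means:.2f}")
print(f"mean of ratios: {mean_of_ratios:.2f}")
```

    With these skewed counts the two procedures yield noticeably different scores, which is the effect the caveat concerns.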

    A Review of Theory and Practice in Scientometrics

    Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.

    Identifying Research Fields within Business and Management: A Journal Cross-Citation Analysis

    A discipline such as business and management (B&M) is very broad and has many fields within it, ranging from fairly scientific ones such as management science or economics to softer ones such as information systems. There are at least three reasons why it is important to identify these sub-fields accurately: first, to give insight into the structure of the subject area and identify perhaps unrecognised commonalities; second, for the purpose of normalizing citation data, as it is well known that citation rates vary significantly between different disciplines; and third, because journal rankings and lists tend to split their classifications into different subjects. For example, the Association of Business Schools (ABS) list, which is a standard in the UK, has 22 different fields. Unfortunately, at the moment these are created in an ad hoc manner with no underlying rigour. The purpose of this paper is to identify possible sub-fields in B&M rigorously, based on actual citation patterns. We have examined 450 journals in B&M which are included in the ISI Web of Science (WoS) and analysed the cross-citation rates between them, enabling us to generate sets of coherent and consistent sub-fields that minimise the extent to which journals appear in several categories. Implications and limitations of the analysis are discussed.
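    As an illustration of the general idea (a deliberately simplified sketch, not the clustering procedure used in the paper; journal names, citation counts, and the threshold are invented), one can treat strong cross-citation rates as links between journals and read off candidate sub-fields as connected components:

```python
# Minimal sketch: group journals into sub-fields by treating strong
# cross-citation counts as links and taking connected components.
from collections import defaultdict

# cross_citations[a][b] = citations from journal a to journal b (illustrative)
cross_citations = {
    "Mgmt Science":   {"Oper Research": 120, "Info Systems J": 5,  "MIS Quarterly": 8},
    "Oper Research":  {"Mgmt Science": 140,  "Info Systems J": 3,  "MIS Quarterly": 2},
    "Info Systems J": {"MIS Quarterly": 90,  "Mgmt Science": 10,   "Oper Research": 1},
    "MIS Quarterly":  {"Info Systems J": 85,  "Mgmt Science": 12,  "Oper Research": 4},
}

THRESHOLD = 50  # minimum citations in either direction to count as a strong link

# Build an undirected graph of strong links.
links = defaultdict(set)
journals = list(cross_citations)
for a in journals:
    for b, count in cross_citations[a].items():
        if count >= THRESHOLD:
            links[a].add(b)
            links[b].add(a)

# Connected components = candidate sub-fields.
seen, subfields = set(), []
for j in journals:
    if j in seen:
        continue
    stack, component = [j], set()
    while stack:
        node = stack.pop()
        if node in component:
            continue
        component.add(node)
        stack.extend(links[node] - component)
    seen |= component
    subfields.append(sorted(component))

print(subfields)  # [['Mgmt Science', 'Oper Research'], ['Info Systems J', 'MIS Quarterly']]
```

    A real analysis would normalise the counts for journal size and use a proper clustering method, but the sketch shows how cross-citation data alone can suggest sub-field boundaries.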

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.
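    As a rough illustration of the underlying idea (a simplification rather than the authors' full definition; the citation counts and per-paper comparison thresholds below are invented), a success-style indicator counts the papers whose citations reach a paper-specific threshold:

```python
# Minimal sketch: count the papers whose citations meet or exceed a
# paper-specific comparison threshold. How that threshold is derived
# (e.g. from field baselines) is where the methodological work lies;
# all values here are illustrative only.
papers = [
    {"citations": 12, "threshold": 8.0},
    {"citations": 3,  "threshold": 8.0},
    {"citations": 25, "threshold": 15.0},
    {"citations": 5,  "threshold": 4.5},
]

success_index = sum(1 for p in papers if p["citations"] >= p["threshold"])
print(f"success-index: {success_index}")  # 3 of the 4 papers are "successful"
```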

    Throwing Out the Baby with the Bathwater: The Undesirable Effects of National Research Assessment Exercises on Research

    The evaluation of the quality of research at a national level has become increasingly common. The UK has been at the forefront of this trend, having undertaken many assessments since 1986, the latest being the “Research Excellence Framework” in 2014. The argument of this paper is that, whatever the intended results in terms of evaluating and improving research, there have been many, presumably unintended, results that are highly undesirable for research and the university community more generally. We situate our analysis using Bourdieu’s theory of cultural reproduction and then focus on the peculiarities of the 2008 RAE and the 2014 REF, the rules of which allowed for, and indeed encouraged, significant game-playing on the part of striving universities. We conclude with practical recommendations to maintain the general intention of research assessment without the undesirable side-effects.