
    The comparison of classification-system-based normalization procedures with source normalization alternatives in Waltman and Van Eck (2013)

    Waltman and Van Eck (in press) contains a systematic, large-scale empirical comparison of classification-system-based versus source normalization procedures. They find that a source normalization procedure, SNCS, performs better than a normalization procedure based on the system in which publications are classified into fields according to the journal subject categories of the Web of Science bibliographic database. Using the same data and the same methods, in this note we confront SNCS with the best possible procedure among those based on three available algorithmic classification systems. Our conclusions raise doubts about the idea that source normalization procedures are ready to supplant their classification-system-based alternatives.
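
    As a rough illustration of the two families being compared (not the exact SNCS definition used by Waltman and Van Eck), the sketch below contrasts a cited-side normalization, which divides a paper's citations by the mean citation rate of its field, with a citing-side idea in which each incoming citation is weighted by the inverse of the citing publication's reference count. All field labels, counts, and function names are invented for illustration.

        # Minimal sketch of the two normalization families; values are assumed.

        def cited_side_normalized(citations, field, field_mean_citations):
            # Classification-system-based (cited-side): divide a paper's citations
            # by the mean citation rate of the field it is assigned to.
            return citations / field_mean_citations[field]

        def source_normalized(citing_reference_counts):
            # Source (citing-side) idea: weight each incoming citation by the
            # inverse of the citing publication's reference count, so fields with
            # long reference lists do not inflate citation counts.
            return sum(1.0 / r for r in citing_reference_counts if r > 0)

        field_mean = {"cell biology": 18.2, "mathematics": 3.1}      # assumed values
        print(cited_side_normalized(12, "mathematics", field_mean))  # ~3.87
        print(source_normalized([45, 30, 12]))                       # ~0.14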

    A review of the literature on citation impact indicators

    Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
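
    One of the topics reviewed, counting methods for co-authored publications, can be illustrated with a minimal sketch: full counting credits each contributing unit with 1, while fractional counting splits the credit 1/n among the n units. The affiliations and paper list below are hypothetical.

        # Full versus fractional counting of co-authored publications (toy data).
        papers = [
            {"units": ["Univ A", "Univ B"]},
            {"units": ["Univ A"]},
            {"units": ["Univ A", "Univ B", "Univ C"]},
        ]

        full, fractional = {}, {}
        for p in papers:
            n = len(p["units"])
            for u in p["units"]:
                full[u] = full.get(u, 0) + 1                    # each unit gets full credit
                fractional[u] = fractional.get(u, 0) + 1.0 / n  # credit shared equally

        print(full)        # {'Univ A': 3, 'Univ B': 2, 'Univ C': 1}
        print(fractional)  # {'Univ A': 1.83, 'Univ B': 0.83, 'Univ C': 0.33} (approx.)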

    A comparison of the Web of Science with publication-level classification systems of Science

    In this paper we propose a new criterion for choosing between a pair of classification systems of science that assign publications (or journals) to a set of clusters. Consider the standard target (cited-side) normalization procedure in which cluster mean citations are used as normalization factors. We recommend system A over system B whenever the standard normalization procedure based on system A performs better than the standard normalization procedure based on system B. Performance is assessed in terms of two double tests (one graphical and one numerical) that use both classification systems for evaluation purposes. In addition, a pair of classification systems is compared using a third, independent classification system for evaluation purposes. We illustrate this strategy by comparing a Web of Science journal-level classification system, consisting of 236 journal subject categories, with two publication-level algorithmically constructed classification systems consisting of 1,363 and 5,119 clusters. There are two main findings. Firstly, the second publication-level system is found to dominate the first. Secondly, the publication-level system at the highest granularity level and the Web of Science journal-level system are found to be non-comparable. Nevertheless, we find reasons to recommend the publication-level option. This research project builds on earlier work started by Antonio Perianes-Rodriguez during a research visit to the Centre for Science and Technology Studies (CWTS) of Leiden University as awardee of José Castillejo grant, CAS15/00178, funded by the Spanish MEC. Ruiz-Castillo is a visiting researcher at CWTS and gratefully acknowledges CWTS for the use of its data. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2014-55953-P, as well as grant MDM 2014-0431 to his Departamento de Economía.
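
    The standard target (cited-side) normalization procedure referred to above can be sketched as follows: each publication's citation count is divided by the mean citation count of the cluster its classification system assigns it to. The cluster labels and citation counts below are assumptions made for illustration only.

        # Cited-side normalization with cluster mean citations as normalization factors.
        from collections import defaultdict

        publications = [
            {"id": 1, "cluster": "c1", "cites": 4},
            {"id": 2, "cluster": "c1", "cites": 10},
            {"id": 3, "cluster": "c2", "cites": 1},
            {"id": 4, "cluster": "c2", "cites": 3},
        ]

        totals, counts = defaultdict(float), defaultdict(int)
        for p in publications:
            totals[p["cluster"]] += p["cites"]
            counts[p["cluster"]] += 1
        cluster_mean = {c: totals[c] / counts[c] for c in totals}   # normalization factors

        normalized = [p["cites"] / cluster_mean[p["cluster"]] for p in publications]
        print(cluster_mean)                        # {'c1': 7.0, 'c2': 2.0}
        print([round(x, 2) for x in normalized])   # [0.57, 1.43, 0.5, 1.5]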

    The comparison of normalization procedures based on different classification systems

    In this paper, we develop a new methodology for comparing normalization procedures based on different classification systems. Firstly, a pair of normalization procedures should be compared using their own classification systems for evaluation purposes. Secondly, when the two procedures are non-comparable according to the above test, evaluation using a third (or additional) classification system may be forthcoming. In the empirical part of the paper we use: (i) the IDCP method for the evaluation of normalization procedures; (ii) two nested classification systems consisting of 219 sub-fields and 19 fields, together with a systematic and a random assignment of articles to sub-fields (or fields) with the aim of maximizing or minimizing differences across sub-fields (or fields); (iii) six normalization procedures using mean citations in each of the classification systems as normalization factors; and (iv) a large dataset, indexed by Thomson Reuters, in which 4.4 million articles published in 1998-2003 with a five-year citation window are assigned to Web of Science subject categories, or sub-fields, using a fractional approach. The results obtained indicate that this methodology may lead to useful conclusions in specific instances. The authors acknowledge financial support from the Santander Universities Global Division of Banco Santander. Ruiz-Castillo also acknowledges financial help from the Spanish MEC through grant ECO2010-1959.
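
    The fractional approach mentioned in point (iv) can be sketched briefly: an article listed under k subject categories contributes 1/k of a publication, and 1/k of its citations, to each category. The categories and citation counts below are invented for illustration; the actual assignment in the paper uses the Web of Science subject categories.

        # Fractional assignment of articles to subject categories (toy data).
        from collections import defaultdict

        articles = [
            {"cites": 6, "categories": ["Ecology", "Marine Biology"]},
            {"cites": 2, "categories": ["Ecology"]},
        ]

        pubs, cites = defaultdict(float), defaultdict(float)
        for a in articles:
            k = len(a["categories"])
            for c in a["categories"]:
                pubs[c] += 1.0 / k          # fractional publication count
                cites[c] += a["cites"] / k  # citations split the same way

        mean_cites = {c: cites[c] / pubs[c] for c in pubs}
        print(dict(pubs))   # {'Ecology': 1.5, 'Marine Biology': 0.5}
        print(mean_cites)   # {'Ecology': 3.33..., 'Marine Biology': 6.0}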

    The impact of classification systems in the evaluation of the research performance of the Leiden Ranking Universities

    In this paper, we investigate the consequences of choosing different classification systems, namely the way publications (or journals) are assigned to scientific fields, for the ranking of research units. We study the impact of this choice on the ranking of 500 universities in the 2013 edition of the Leiden Ranking in two cases. Firstly, we compare a Web of Science journal-level classification system, consisting of 236 subject categories, and a publication-level algorithmically constructed system, denoted G8, consisting of 5,119 clusters. The result is that the consequences of the move from the WoS to the G8 system using the Top 1% citation impact indicator are much greater than the consequences of this move using the Top 10% indicator. Secondly, we compare the G8 classification system and a publication-level alternative of the same family, the G6 system, consisting of 1,363 clusters. The result is that, although less important than in the previous case, the consequences of the move from the G6 to the G8 system under the Top 1% indicator are still of a large order of magnitude. This research project builds on earlier work started by Antonio Perianes-Rodriguez during a research visit to the Centre for Science and Technology Studies (CWTS) of Leiden University as awardee of José Castillejo grant, CAS15/00178, funded by the Spanish MEC. Ruiz-Castillo is a visiting researcher at CWTS and gratefully acknowledges CWTS for the use of its data. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2014-55953-P, as well as grant MDM 2014-0431 to his Departamento de Economía.
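
    The Top 10% and Top 1% indicators mentioned above count the share of a unit's publications that belong to the most highly cited publications of their field. The sketch below only illustrates the idea with a single simulated field; the Leiden Ranking handles ties, field normalization, and fractional counting more carefully.

        # Toy Top x% indicator on simulated, skewed citation counts.
        import numpy as np

        rng = np.random.default_rng(0)
        world_cites = rng.negative_binomial(1, 0.1, size=10_000)     # "field" reference set
        university_cites = rng.negative_binomial(1, 0.08, size=500)  # one unit's papers

        def top_share(unit, world, pct):
            threshold = np.percentile(world, 100 - pct)  # e.g. 90th percentile for Top 10%
            return float(np.mean(unit > threshold))      # share of the unit's papers above it

        print(round(top_share(university_cites, world_cites, 10), 3))  # Top 10% share
        print(round(top_share(university_cites, world_cites, 1), 3))   # Top 1% share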

    A Review of Theory and Practice in Scientometrics

    Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the "laws" of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
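
    Among the journal metrics the review covers, the classic two-year journal impact factor has a simple arithmetic form: citations received in year Y to items published in years Y-1 and Y-2, divided by the number of citable items published in those two years. The figures below are invented purely to show the calculation.

        # Two-year journal impact factor with invented counts.
        citations_in_2023_to = {2022: 310, 2021: 450}   # citations counted in 2023
        citable_items = {2022: 120, 2021: 140}          # citable items per year

        jif_2023 = sum(citations_in_2023_to.values()) / sum(citable_items.values())
        print(round(jif_2023, 2))   # 760 / 260 = 2.92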

    Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?

    Evaluative bibliometrics compares the citation impact of researchers, research groups and institutions with each other across time scales and disciplines. Both factors, discipline and period, influence the citation count independently of the quality of the publication. Normalizing the citation impact of papers for these two factors started in the mid-1980s. Since then, a range of different methods have been presented for producing normalized citation impact scores. The current study uses a data set of over 50,000 records to test which of the methods presented so far correlates best with the assessment of papers by peers. The peer assessments come from F1000Prime, a post-publication peer review system for the biomedical literature. Of the normalized indicators, the current study involves not only cited-side indicators, such as the mean normalized citation score, but also citing-side indicators. As the results show, the correlations of the indicators with the peer assessments all turn out to be very similar. Since F1000 focuses on biomedicine, it is important that the results of this study are validated by other studies based on datasets from other disciplines or (ideally) on multi-disciplinary datasets. Comment: accepted for publication in the Journal of Informetrics.
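
    A cited-side indicator of the kind compared in the study, the mean normalized citation score (MNCS), averages over a unit's papers the ratio of each paper's citations to the expected citations of its field and publication year; the agreement with peer assessments can then be checked with a rank correlation. The data, field expectations, and peer scores below are invented stand-ins, not F1000Prime data.

        # MNCS on toy data plus a Spearman rank correlation with toy peer scores.
        import numpy as np
        from scipy.stats import spearmanr

        papers = [
            {"cites": 12, "expected": 8.0,  "peer_score": 2},
            {"cites": 3,  "expected": 8.0,  "peer_score": 1},
            {"cites": 40, "expected": 15.0, "peer_score": 3},
            {"cites": 7,  "expected": 15.0, "peer_score": 1},
        ]

        ncs = np.array([p["cites"] / p["expected"] for p in papers])  # per-paper normalized scores
        peer = np.array([p["peer_score"] for p in papers])

        print(round(ncs.mean(), 2))                        # MNCS of this tiny set
        print(round(spearmanr(ncs, peer).correlation, 2))  # rank correlation with peer scores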

    The utilization of paper-level classification system on the evaluation of journal impact

    CAS Journal Ranking, a ranking system of journals based on the bibliometric indicator of citation impact, has been widely used in meso- and macro-scale research evaluation in China since its first release in 2004. The ranking covers journals contained in Clarivate's Journal Citation Reports (JCR). This paper introduces the upgraded 2019 version of the CAS Journal Ranking. Addressing limitations of the indicator and classification system used in earlier editions, as well as the problem of journals' interdisciplinarity or multidisciplinarity, we discuss the improvements in the 2019 upgraded version: (1) the CWTS paper-level classification system, a more fine-grained system, has been utilized; (2) a new indicator, the Field Normalized Citation Success Index (FNCSI), which is robust not only against extremely highly cited publications but also against wrongly assigned document types, has been used; and (3) the indicator is calculated at the paper level. In addition, this paper presents a small part of the ranking results and an interpretation of the robustness of the new FNCSI indicator. By exploring more sophisticated methods and indicators, like the CWTS paper-level classification system and the new FNCSI indicator, the CAS Journal Ranking will continue its original purpose of responsible research evaluation.
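
    The FNCSI builds on the idea of a citation success index: roughly, the probability that a paper from the journal receives more citations than a paper drawn from the corresponding field. The exact FNCSI definition (field weighting, document-type handling, tie treatment) is specified by the CAS Journal Ranking; the sketch below only illustrates the success-index idea, with ties counted as half wins and invented citation counts.

        # Toy success-index-style comparison of a journal against a field reference set.
        def success_index(journal_cites, field_cites):
            wins = 0.0
            for a in journal_cites:
                for b in field_cites:
                    if a > b:
                        wins += 1.0      # journal paper outperforms the field paper
                    elif a == b:
                        wins += 0.5      # ties counted as half wins (assumption)
            return wins / (len(journal_cites) * len(field_cites))

        journal = [10, 4, 7]
        field = [2, 3, 5, 8, 1]
        print(round(success_index(journal, field), 2))  # share of pairwise "wins", here 0.8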

    An alternative to field-normalization in the aggregation of heterogeneous scientific fields

    A possible solution to the problem of aggregating heterogeneous fields in the all-sciences case relies on the normalization of the raw citations received by all publications. In this paper, we study an alternative solution that does not require any citation normalization. Provided one uses size- and scale-independent indicators, the citation impact of any research unit can be calculated as the average (weighted by the publication output) of the citation impact that the unit achieves in all fields. The two alternatives are confronted when the research output of the 500 universities in the 2013 edition of the CWTS Leiden Ranking is evaluated using two citation impact indicators with very different properties. We use a large Web of Science dataset consisting of 3.6 million articles published in the 2005-2008 period, and a classification system distinguishing between 5,119 clusters. The main two findings are as follows. Firstly, differences in production and citation practices between the 3,332 clusters with more than 250 publications account for 22.5% of the overall citation inequality. After the standard field-normalization procedure where cluster mean citations are used as normalization factors, this figure is reduced to 4.3%. Secondly, the differences between the university rankings according to the two solutions for the all-sciences aggregation problem are of a small order of magnitude for both citation impact indicators. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2011-29762.
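
    The aggregation alternative studied here can be sketched directly: with a size- and scale-independent indicator, a unit's all-sciences citation impact is the average of its field-level impacts, weighted by the unit's publication output in each field. Field labels and values below are assumed for illustration.

        # Output-weighted average of field-level impact scores (toy values).
        fields = [
            {"pubs": 120, "impact": 1.3},   # e.g. an MNCS-type score in field 1
            {"pubs": 40,  "impact": 0.8},
            {"pubs": 240, "impact": 1.0},
        ]

        total_pubs = sum(f["pubs"] for f in fields)
        overall = sum(f["pubs"] * f["impact"] for f in fields) / total_pubs
        print(round(overall, 3))   # output-weighted average impact, here 1.07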

    University citation distributions

    We investigate the citation distributions of the 500 universities in the 2013 edition of the Leiden Ranking produced by the Centre for Science and Technology Studies. We use a Web of Science data set consisting of 3.6 million articles published between 2003 and 2008 and classified into 5,119 clusters. The main findings are the following. First, the universality claim, according to which all university citation distributions, appropriately normalized, follow a single functional form, is not supported by the data. Second, the 500 university citation distributions are all highly skewed and very similar. Broadly speaking, university citation distributions appear to behave as if they differ by a relatively constant scale factor over a large, intermediate part of their support. Third, citation-impact differences between universities account for 3.85% of overall citation inequality. This percentage is greatly reduced when university citation distributions are normalized using their mean normalized citation scores (MNCSs) as normalization factors. Finally, regarding practical consequences, we only need a single explanatory model for the type of high skewness characterizing all university citation distributions, and the similarity of university citation distributions goes a long way in explaining the similarity of the university rankings obtained with the MNCS and the Top 10% indicator.
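
    The scale-factor observation can be checked with a simple exercise: if two citation distributions differ mainly by a constant scale factor, then after dividing each by its own mean their quantiles should roughly coincide over the intermediate part of the support. The distributions below are simulated lognormals, not Leiden Ranking data.

        # Compare mean-normalized quantiles of two simulated citation distributions.
        import numpy as np

        rng = np.random.default_rng(1)
        uni_a = rng.lognormal(mean=1.0, sigma=1.2, size=5000)   # skewed, higher mean
        uni_b = rng.lognormal(mean=0.5, sigma=1.2, size=5000)   # same shape, lower mean

        qs = [25, 50, 75, 90]
        for name, u in [("A", uni_a), ("B", uni_b)]:
            scaled = u / u.mean()                               # mean-normalized distribution
            print(name, [round(q, 2) for q in np.percentile(scaled, qs)])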